# repo: ramezquitao/pyoptools | doc/notebooks/basic/SimpleComponents.ipynb | license: gpl-3.0
from pyoptools.all import *
from numpy import pi,sqrt
"""
Explanation: Creating components with pyOpTools
To be able to simulate an optical system, the first step is to use the predefined surfaces to create components. This notebook shows how to create some simple components.
End of explanation
"""
S0=Spherical(shape=Circular(radius=50),curvature=1./200.)
S1=Spherical(shape=Circular(radius=50),curvature=1./200.)
S2=Cylinder(radius=50,length=10)
"""
Explanation: Creating a bi-convex lens
A bi-convex lens can be described as a piece of glass bounded by 2 spherical surfaces (the 2 lens faces) and a cylindrical surface (the lens border). In the next example, these surfaces will be:
A spherical surface limited by a circular shape with a 100 mm diameter (this will be the lens diameter), and a curvature radius of 200 mm.
A second spherical surface with the same characteristics as the previous one.
A 100 mm diameter cylinder with a 10 mm length.
<div class="alert alert-info">
**Note:** In pyOpTools the dimensions are always given in millimeters.
</div>
End of explanation
"""
L1=Component(surflist=[(S0,(0,0,-5),(0,0,0)),
(S1,(0,0,5),(0,0,0)),
(S2,(0,0,6.5),(0,0,0))
],material=material.schott["N-BK7"])
Plot3D(L1,size=(120,120),scale=3,rot=[(pi/3,0,0)])
"""
Explanation: With the created surfaces we will create the Component. The Component constructor receives 2 important arguments:
surflist
List of tuples of the form (surface, (PosX, PosY, PosZ), (RotX, RotY, RotZ)), where surface is an instance of a subclass of Surface, PosX, PosY, PosZ are the surface's vertex coordinates, and RotX, RotY, RotZ are the rotation angles of the surface around the X, Y, and Z axes, given in radians. The rotation about the Z axis is applied first, then the rotation about the Y axis, and finally the rotation about the X axis.
material
Instance of the class Material with the material definition, or a floating-point number to indicate a constant refractive index material.
End of explanation
"""
width=50
height=50
reflectivity=0.5
a_face= Plane(shape=Rectangular(size=(width,height)))
b_face= Plane(shape=Rectangular(size=(width,height)))
h=sqrt(2.)*width
h_face= Plane (shape=Rectangular(size=(h,height)),reflectivity=reflectivity)
w2=width/2.
e1=Plane (shape=Triangular(((-w2,w2),(-w2,-w2),(w2,-w2))))
e2=Plane (shape=Triangular(((-w2,w2),(-w2,-w2),(w2,-w2))))
P=Component(surflist=[(a_face,(0,0,-width/2),(0,0,0)),
(b_face,(width/2,0,0),(0,pi/2,0)),
(h_face,(0,0,0),(0,-pi/4,0)),
(e1,(0,height/2,0),(pi/2,-pi/2,0)),
(e2,(0,height/2,0),(pi/2,-pi/2,0))
],material=material.schott["N-BK7"])
Plot3D(P,size=(120,120),scale=3,rot=[(pi/6,pi/8,0)])
"""
Explanation: Creating a 90 degree prism
The next code is an example of the creation of a right-angle prism. This component is composed of 3 rectangular planes and 2 triangular planes.
End of explanation
"""
def prism(reflectivity=0):
width=50
height=50
a_face= Plane(shape=Rectangular(size=(width,height)))
b_face= Plane(shape=Rectangular(size=(width,height)))
h=sqrt(2.)*width
h_face= Plane (shape=Rectangular(size=(h,height)),reflectivity=reflectivity)
w2=width/2.
e1=Plane (shape=Triangular(((-w2,w2),(-w2,-w2),(w2,-w2))))
e2=Plane (shape=Triangular(((-w2,w2),(-w2,-w2),(w2,-w2))))
P=Component(surflist=[(a_face,(0,0,-width/2),(0,0,0)),
(b_face,(width/2,0,0),(0,pi/2,0)),
(h_face,(0,0,0),(0,-pi/4,0)),
(e1,(0,height/2,0),(pi/2,-pi/2,0)),
(e2,(0,height/2,0),(pi/2,-pi/2,0))
],material=material.schott["N-BK7"])
return P
P1=prism()
P2=prism(reflectivity=.5)
cube=System(complist=[(P1,(0,0,0),(0,0,0)),(P2,(0,0,0),(0,pi,0))],n=1.)
Plot3D(cube,size=(120,120),scale=3,rot=[(pi/6,pi/8,0)])
"""
Explanation: Creating a beam splitting cube
The next code shows how to create a beam splitting cube by using 2 components (prisms defined by a function) to create a system (in this case it is really a subsystem that can be used as a component). As an extra feature in this example, a reflective characteristic is added to one of the surfaces of the prism P2, so the subsystem behaves as a beam splitting cube.
<div class="alert alert-warning">
**Warning:**
1. Care must be taken when using reflective surfaces to avoid the creation of resonant cavities, such as 2 parallel semi-reflective surfaces, as pyOpTools will try to propagate the rays forever and the system will crash. For this reason, when creating the beam splitting cube, only one of the prisms has a reflective surface.
2. pyOpTools can handle 2 surfaces (from different components) in contact, but it can not detect or handle components that overlap in space, and will produce an erroneous result.
</div>
End of explanation
"""
# repo: WilliamHPNielsen/broadbean | docs/Subsequences.ipynb | license: mit
%matplotlib notebook
import broadbean as bb
from broadbean.plotting import plotter
sine = bb.PulseAtoms.sine
ramp = bb.PulseAtoms.ramp
"""
Explanation: Subsequences
This notebook describes the use of subsequences.
Subsequences can be useful in a wide range of settings.
End of explanation
"""
# Uncompressed
SR = 1e9
t1 = 200e-6 # wait
t2 = 20e-9 # perturb the system
t3 = 250e-6 # read out
bp1 = bb.BluePrint()
bp1.insertSegment(0, ramp, (0, 0), dur=t1)
bp1.insertSegment(1, ramp, (1, 1), dur=t2, name='perturbation')
bp1.insertSegment(2, ramp, (0, 0), dur=t3)
bp1.setSR(SR)
plotter(bp1)
# Now make a variation of the height
elem1 = bb.Element()
elem1.addBluePrint(1, bp1)
elem2 = elem1.copy()
elem2.changeArg(1, 'perturbation', 'start', 0.75)
elem2.changeArg(1, 'perturbation', 'stop', 0.75)
elem3 = elem1.copy()
elem3.changeArg(1, 'perturbation', 'start', 0.5)
elem3.changeArg(1, 'perturbation', 'stop', 0.5)
# And put that together in a sequence
seq = bb.Sequence()
seq.addElement(1, elem1)
seq.addElement(2, elem2)
seq.addElement(3, elem3)
seq.setSR(SR)
plotter(seq)
# The sequence is long and heavy on the memory
seq.points
"""
Explanation: Example 1: Compression
In a waveform with very long "dead" periods, subsequences can -- via the option to repeat elements -- drastically reduce the number of points of the entire sequence.
Here we imagine a pulse sequence where we first wait, then perturb the system, then wait some more for readout. We'd like to vary the height of the perturbation.
Uncompressed
End of explanation
"""
# Let's make a sequence instead of an element
SR = 1e9
t1 = 200e-6 # wait
t2 = 20e-9 # perturb the system
t3 = 250e-6 # read out
compression = 100 # this number has to be chosen with some care
bp1 = bb.BluePrint()
bp1.insertSegment(0, ramp, (0, 0), dur=t1/compression)
bp1.setSR(SR)
elem1 = bb.Element()
elem1.addBluePrint(1, bp1)
#
bp2 = bb.BluePrint()
bp2.insertSegment(0, ramp, (1, 1), dur=t2, name='perturbation')
bp2.setSR(SR)
elem2 = bb.Element()
elem2.addBluePrint(1, bp2)
#
bp3 = bb.BluePrint()
bp3.insertSegment(0, ramp, (0, 0), dur=t3/compression)
bp3.setSR(SR)
elem3 = bb.Element()
elem3.addBluePrint(1, bp3)
seq = bb.Sequence()
seq.addElement(1, elem1)
seq.setSequencingNumberOfRepetitions(1, compression)
seq.addElement(2, elem2)
seq.addElement(3, elem3)
seq.setSequencingNumberOfRepetitions(3, compression)
seq.setSR(SR)
# Now make the variation
seq2 = seq.copy()
seq2.element(2).changeArg(1, 'perturbation', 'start', 0.75)
seq2.element(2).changeArg(1, 'perturbation', 'stop', 0.75)
#
seq3 = seq.copy()
seq3.element(2).changeArg(1, 'perturbation', 'start', 0.5)
seq3.element(2).changeArg(1, 'perturbation', 'stop', 0.5)
#
fullseq = seq + seq2 + seq3
plotter(fullseq)
# The above sequence achieves the same as the uncompressed one, but has fewer points
fullseq.points
"""
Explanation: Compressed
End of explanation
"""
mainseq = bb.Sequence()
mainseq.setSR(SR)
mainseq.addSubSequence(1, seq)
mainseq.addSubSequence(2, seq2)
mainseq.addSubSequence(3, seq3)
mainseq.setSequencingNumberOfRepetitions(1, 25)
mainseq.setSequencingNumberOfRepetitions(2, 25)
mainseq.setSequencingNumberOfRepetitions(3, 25)
plotter(mainseq)
# The plotting does not show the details of the subsequence,
# but it DOES show the min and max voltages of a subsequence
# as grey lines
# The number of points is still low
mainseq.points
"""
Explanation: Now using subsequences
Subsequences come into play when we want to, say, repeat each wait-perturb-wait element 25 times.
In the uncompressed case, that can only be achieved by adding each element 24 times more, thus resulting in a very large output file. Using subsequences, we can get away with a much smaller file size.
End of explanation
"""
# repo: gregunz/ada2017 | exam/data_cluedo/2-icecream.ipynb | license: mit
# Run the following to import necessary packages and import dataset. Do not use any additional plotting libraries.
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot')
datafile = "dataset/icecream.csv"
df = pd.read_csv(datafile)
df
"""
Explanation: Ice Cream
Instructions / Notes:
Read these carefully
Read and execute each cell in order, without skipping forward
You may create new Jupyter notebook cells to use for e.g. testing, debugging, exploring, etc. (this is encouraged, in fact!); just make sure that your final answer dataframes and answers use the set variables outlined below
Have fun!
End of explanation
"""
# Here are the correlation coefficients between pairs of columns
corr = df.corr()
corr
abs_corr = np.abs(df.corr())
indices = corr.index
corr_pairs = []
for i, idx_i in enumerate(indices):
for j, c in enumerate(abs_corr[idx_i]):
if c > .9 and i < j:
corr_pairs.append((idx_i, indices[j]))
corr_pairs
"""
Explanation: The dataset above contains the ice cream sales, temperature, number of deaths by drowning and humidity level in a city during a timespan of 12 months.
End of explanation
"""
correlations = []
correlations.append(['ice_cream_sales', 'temperature'])
# do not touch
correlations.sort()
print(correlations)
"""
Explanation: Identify strong (i.e., correlation coefficient > 0.9) and meaningful correlations among pairs of columns in this dataset.
Append these pairs of correlated columns, in the form [column_x, column_y], to the variable below.
End of explanation
"""
# correlations_clue.append(['column_x', 'column_y'])
correlations_clue = []
# do not touch
correlations_clue.sort()
print(correlations_clue)
"""
Explanation: Clue
Some of the correlations you found above may be spurious: https://en.wikipedia.org/wiki/Spurious_relationship -- Only include meaningful correlations to the list!
If this clue changes your answer, try again below. Otherwise, if you are confident in your answer above, leave the following untouched.
End of explanation
"""
# repo: bataeves/kaggle | sber/Model-0.31434.ipynb | license: unlicense
# train_raw = pd.read_csv("data/train.csv")
train_raw = pd.read_csv("data/train_without_noise.csv")
test = pd.read_csv("data/test.csv")
macro = pd.read_csv("data/macro.csv")
train_raw.head()
def preprocess_anomaly(df):
df["full_sq"] = map(lambda x: x if x > 10 else float("NaN"), df["full_sq"])
df["life_sq"] = map(lambda x: x if x > 5 else float("NaN"), df["life_sq"])
df["kitch_sq"] = map(lambda x: x if x > 2 else float("NaN"), df["kitch_sq"])
# superclean
# https://www.kaggle.com/keremt/very-extensive-cleaning-by-sberbank-discussions
df.ix[df[df.life_sq > df.full_sq].index, "life_sq"] = np.NaN
df.ix[df[df.kitch_sq >= df.life_sq].index, "kitch_sq"] = np.NaN
df.ix[df[df.kitch_sq == 0].index, "kitch_sq"] = np.NaN
df.ix[df[df.kitch_sq == 1].index, "kitch_sq"] = np.NaN
df.ix[df[df.build_year < 1500].index, "build_year"] = np.NaN
    df.ix[df[df.build_year > 2018].index, "build_year"] = np.NaN
df.ix[df[df.num_room == 0].index, "num_room"] = np.NaN
df.ix[df[df.floor == 0].index, "floor"] = np.NaN
df.ix[df[df.max_floor == 0].index, "max_floor"] = np.NaN
df.ix[df[df.floor > df.max_floor].index, "max_floor"] = np.NaN
df.ix[df[df.state == 33].index, "state"] = np.NaN
return df
def preprocess_categorial(df):
# df = mess_y_categorial(df, 5)
for c in df.columns:
if df[c].dtype == 'object':
lbl = sk.preprocessing.LabelEncoder()
lbl.fit(list(train_raw[c].values) + list(test[c].values))
df[c] = lbl.transform(list(df[c].values))
df = df.select_dtypes(exclude=['object'])
return df
def apply_categorial(test, train):
# test = mess_y_categorial_fold(test, train)
# test = test.select_dtypes(exclude=['object'])
return preprocess_categorial(test)
def smoothed_likelihood(targ_mean, nrows, globalmean, alpha=10):
try:
return (targ_mean * nrows + globalmean * alpha) / (nrows + alpha)
except Exception:
return float("NaN")
def mess_y_categorial(df, nfolds=3, alpha=10):
from sklearn.utils import shuffle
from copy import copy
folds = np.array_split(shuffle(df), nfolds)
newfolds = []
for i in range(nfolds):
fold = folds[i]
other_folds = copy(folds)
other_folds.pop(i)
other_fold = pd.concat(other_folds)
newfolds.append(mess_y_categorial_fold(fold, other_fold, alpha=10))
return pd.concat(newfolds)
def mess_y_categorial_fold(fold_raw, other_fold, cols=None, y_col="price_doc", alpha=10):
fold = fold_raw.copy()
if not cols:
cols = list(fold.select_dtypes(include=["object"]).columns)
globalmean = other_fold[y_col].mean()
for c in cols:
target_mean = other_fold[[c, y_col]].groupby(c).mean().to_dict()[y_col]
nrows = other_fold[c].value_counts().to_dict()
fold[c + "_sll"] = fold[c].apply(
lambda x: smoothed_likelihood(target_mean.get(x), nrows.get(x), globalmean, alpha) if x else float("NaN")
)
return fold
def apply_macro(df):
macro_cols = [
'timestamp', "balance_trade", "balance_trade_growth", "eurrub", "average_provision_of_build_contract",
"micex_rgbi_tr", "micex_cbi_tr", "deposits_rate", "mortgage_value", "mortgage_rate",
"income_per_cap", "rent_price_4+room_bus", "museum_visitis_per_100_cap", "apartment_build"
]
return pd.merge(df, macro, on='timestamp', how='left')
def preprocess(df):
from sklearn.preprocessing import OneHotEncoder, FunctionTransformer
# df = apply_macro(df)
# df["timestamp_year"] = df["timestamp"].apply(lambda x: x.split("-")[0])
# df["timestamp_month"] = df["timestamp"].apply(lambda x: x.split("-")[1])
# df["timestamp_year_month"] = df["timestamp"].apply(lambda x: x.split("-")[0] + "-" + x.split("-")[1])
ecology = ["no data", "poor", "satisfactory", "good", "excellent"]
df["ecology_index"] = map(ecology.index, df["ecology"].values)
bool_feats = [
"thermal_power_plant_raion",
"incineration_raion",
"oil_chemistry_raion",
"radiation_raion",
"railroad_terminal_raion",
"big_market_raion",
"nuclear_reactor_raion",
"detention_facility_raion",
"water_1line",
"big_road1_1line",
"railroad_1line",
"culture_objects_top_25"
]
for bf in bool_feats:
df[bf + "_bool"] = map(lambda x: x == "yes", df[bf].values)
df = preprocess_anomaly(df)
df['rel_floor'] = df['floor'] / df['max_floor'].astype(float)
df['rel_kitch_sq'] = df['kitch_sq'] / df['full_sq'].astype(float)
df['rel_life_sq'] = df['life_sq'] / df['full_sq'].astype(float)
df["material_cat"] = df.material.fillna(0).astype(int).astype(str).replace("0", "")
df["state_cat"] = df.state.fillna(0).astype(int).astype(str).replace("0", "")
df["num_room_cat"] = df.num_room.fillna(0).astype(int).astype(str).replace("0", "")
df = df.drop(["id", "timestamp"], axis=1)
return df
train_pr = preprocess(train_raw)
train = preprocess_categorial(train_pr)
# train = train.fillna(-1)
X = train.drop(["price_doc"], axis=1)
y = train["price_doc"].values
"""
Explanation: Feature preprocessing
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X.values, y, test_size=0.20, random_state=43)
dtrain_all = xgb.DMatrix(X.values, y, feature_names=X.columns)
dtrain = xgb.DMatrix(X_train, y_train, feature_names=X.columns)
dval = xgb.DMatrix(X_val, y_val, feature_names=X.columns)
xgb_params = {
'max_depth': 5,
'n_estimators': 200,
'learning_rate': 0.01,
'objective': 'reg:linear',
'eval_metric': 'rmse',
'silent': 1
}
# Uncomment to tune XGB `num_boost_rounds`
model = xgb.train(xgb_params, dtrain, num_boost_round=4000, evals=[(dval, 'val')],
early_stopping_rounds=40, verbose_eval=40)
num_boost_round = model.best_iteration
cv_output = xgb.cv(dict(xgb_params, silent=0), dtrain_all, num_boost_round=num_boost_round, verbose_eval=40)
cv_output[['train-rmse-mean', 'test-rmse-mean']].plot()
model = xgb.train(dict(xgb_params, silent=0), dtrain_all, num_boost_round=num_boost_round, verbose_eval=40)
print "predict-train:", rmse(model.predict(dtrain_all), y)
model = xgb.XGBRegressor(max_depth=5, n_estimators=100, learning_rate=0.01, nthread=-1, silent=False)
model.fit(X.values, y, verbose=20)
with open("scores.tsv", "a") as sf:
sf.write("%s\n" % rmsle(model.predict(X.values), y))
!tail scores.tsv
show_weights(model, feature_names=list(X.columns), importance_type="weight")
from sklearn.model_selection import cross_val_score
from sklearn.metrics import make_scorer
def validate(clf):
cval = np.abs(cross_val_score(clf, X.values, y, cv=3,
scoring=make_scorer(rmsle, False), verbose=2))
return np.mean(cval), cval
print validate(model)
"""
Explanation: Model training
End of explanation
"""
test_pr = preprocess(test)
test_pr = apply_categorial(test_pr, train_pr)
# test_pr = test_pr.fillna(-1)
dtest = xgb.DMatrix(test_pr.values, feature_names=test_pr.columns)
y_pred = model.predict(dtest)
# y_pred = model.predict(test_pr.values)
# y_pred = np.exp(y_pred) - 1
submdf = pd.DataFrame({"id": test["id"], "price_doc": y_pred})
submdf.to_csv("data/submission.csv", header=True, index=False)
!head data/submission.csv
"""
Explanation: Submission
End of explanation
"""
# repo: mne-tools/mne-tools.github.io | 0.14/_downloads/plot_compute_mne_inverse_raw_in_label.ipynb | license: bsd-3-clause
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import apply_inverse_raw, read_inverse_operator
print(__doc__)
data_path = sample.data_path()
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_raw = data_path + '/MEG/sample/sample_audvis_raw.fif'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
snr = 1.0 # use smaller SNR for raw data
lambda2 = 1.0 / snr ** 2
method = "sLORETA" # use sLORETA method (could also be MNE or dSPM)
# Load data
raw = mne.io.read_raw_fif(fname_raw)
inverse_operator = read_inverse_operator(fname_inv)
label = mne.read_label(fname_label)
raw.set_eeg_reference() # set average reference.
start, stop = raw.time_as_index([0, 15]) # read the first 15s of data
# Compute inverse solution
stc = apply_inverse_raw(raw, inverse_operator, lambda2, method, label,
start, stop, pick_ori=None)
# Save result in stc files
stc.save('mne_%s_raw_inverse_%s' % (method, label_name))
"""
Explanation: Compute sLORETA inverse solution on raw data
Compute sLORETA inverse solution on raw dataset restricted
to a brain label and stores the solution in stc files for
visualisation.
End of explanation
"""
plt.plot(1e3 * stc.times, stc.data[::100, :].T)
plt.xlabel('time (ms)')
plt.ylabel('%s value' % method)
plt.show()
"""
Explanation: View activation time-series
End of explanation
"""
# repo: anhaidgroup/py_entitymatching | notebooks/guides/step_wise_em_guides/Performing Blocking Using Built-In Blockers (Overlap Blocker).ipynb | license: bsd-3-clause
# Import py_entitymatching package
import py_entitymatching as em
import os
import pandas as pd
"""
Explanation: Introduction
This IPython notebook illustrates how to perform blocking using Overlap blocker.
First, we need to import py_entitymatching package and other libraries as follows:
End of explanation
"""
# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'
# Get the paths of the input tables
path_A = datasets_dir + os.sep + 'person_table_A.csv'
path_B = datasets_dir + os.sep + 'person_table_B.csv'
# Read the CSV files and set 'ID' as the key attribute
A = em.read_csv_metadata(path_A, key='ID')
B = em.read_csv_metadata(path_B, key='ID')
A.head()
"""
Explanation: Then, read the (sample) input tables for blocking purposes.
End of explanation
"""
# Instantiate overlap blocker object
ob = em.OverlapBlocker()
"""
Explanation: Ways To Do Overlap Blocking
There are three different ways to do overlap blocking:
Block two tables to produce a candidate set of tuple pairs.
Block a candidate set of tuple pairs to typically produce a reduced candidate set of tuple pairs.
Block two tuples to check if a tuple pair would get blocked.
Block Tables to Produce a Candidate Set of Tuple Pairs
End of explanation
"""
# Specify the tokenization to be 'word' level and set overlap_size to be 3.
C1 = ob.block_tables(A, B, 'address', 'address', word_level=True, overlap_size=3,
l_output_attrs=['name', 'birth_year', 'address'],
r_output_attrs=['name', 'birth_year', 'address'],
show_progress=False)
# Display first 5 tuple pairs in the candidate set.
C1.head()
"""
Explanation: For the given two tables, we will assume that two persons without sufficient overlap between their addresses do not refer to the same real-world person. So, we apply overlap blocking on address. Specifically, we tokenize the address by word and include the tuple pairs only if the addresses have at least 3 overlapping tokens. That is, we block all the tuple pairs that do not share at least 3 tokens in address.
End of explanation
"""
# Set the word_level to be False and set the value of q (using q_val)
C2 = ob.block_tables(A, B, 'address', 'address', word_level=False, q_val=3, overlap_size=3,
l_output_attrs=['name', 'birth_year', 'address'],
r_output_attrs=['name', 'birth_year', 'address'],
show_progress=False)
# Display first 5 tuple pairs
C2.head()
"""
Explanation: In the above, we used a word-level tokenizer. The Overlap blocker also supports a q-gram based tokenizer, which can be used as follows:
End of explanation
"""
# Set the parameter to remove stop words to False
C3 = ob.block_tables(A, B, 'address', 'address', word_level=True, overlap_size=3, rem_stop_words=False,
l_output_attrs=['name', 'birth_year', 'address'],
r_output_attrs=['name', 'birth_year', 'address'],
show_progress=False)
# Display first 5 tuple pairs
C3.head()
"""
Explanation: Updating Stopwords
Commands in the Overlap Blocker remove some stop words by default. You can avoid this by setting the rem_stop_words parameter to False
End of explanation
"""
ob.stop_words
"""
Explanation: You can check what stop words are getting removed like this:
End of explanation
"""
# Include 'francisco' as one of the stop words
ob.stop_words.append('francisco')
ob.stop_words
# Set the word level tokenizer to be True
C4 = ob.block_tables(A, B, 'address', 'address', word_level=True, overlap_size=3,
l_output_attrs=['name', 'birth_year', 'address'],
r_output_attrs=['name', 'birth_year', 'address'],
show_progress=False)
C4.head()
"""
Explanation: You can update this stop word list (with some domain-specific stop words) and then do the blocking.
End of explanation
"""
# Introduce some missing value
A1 = em.read_csv_metadata(path_A, key='ID')
A1.loc[0, 'address'] = pd.np.NaN
# Set the word level tokenizer to be True
C5 = ob.block_tables(A1, B, 'address', 'address', word_level=True, overlap_size=3, allow_missing=True,
l_output_attrs=['name', 'birth_year', 'address'],
r_output_attrs=['name', 'birth_year', 'address'],
show_progress=False)
len(C5)
C5
"""
Explanation: Handling Missing Values
If the input tuples have missing values in the blocking attribute, they are ignored by default. You can set allow_missing to True to include all possible tuple pairs with missing values.
End of explanation
"""
#Instantiate the overlap blocker
ob = em.OverlapBlocker()
"""
Explanation: Block a Candidate Set To Produce a Reduced Set of Tuple Pairs
End of explanation
"""
# Specify the tokenization to be 'word' level and set overlap_size to be 1.
C6 = ob.block_candset(C1, 'name', 'name', word_level=True, overlap_size=1, show_progress=False)
C6
"""
Explanation: In the above, we see that the candidate set produced by blocking over the input tables includes tuple pairs that have at least three overlapping tokens. In addition, we will assume that two persons with no overlap in their names cannot refer to the same person. So, we block the candidate set of tuple pairs on name. That is, we block all the tuple pairs that have no overlapping tokens in name.
End of explanation
"""
# Specify the tokenization to be 'word' level and set overlap_size to be 1.
C7 = ob.block_candset(C1, 'name', 'name', word_level=False, q_val= 3, overlap_size=1, show_progress=False)
C7.head()
"""
Explanation: In the above, we saw that word level tokenization was used to tokenize the names. You can also use q-gram tokenization like this:
End of explanation
"""
# Introduce some missing values
A1.loc[2, 'name'] = pd.np.NaN
C8 = ob.block_candset(C5, 'name', 'name', word_level=True, overlap_size=1, allow_missing=True, show_progress=False)
"""
Explanation: Handling Missing Values
As we saw with block_tables, you can include all the possible tuple pairs with missing values by using the allow_missing parameter when blocking the candidate set.
End of explanation
"""
# Display the first tuple from table A
A.loc[[0]]
# Display the first tuple from table B
B.loc[[0]]
# Instantiate the overlap blocker
ob = em.OverlapBlocker()
# Apply blocking to a tuple pair from the input tables on address and get the blocking status
status = ob.block_tuples(A.loc[0], B.loc[0],'address', 'address', overlap_size=1)
# Print the blocking status
print(status)
"""
Explanation: Block Two tuples To Check If a Tuple Pair Would Get Blocked
We can apply overlap blocking to a tuple pair to check if it is going to get blocked. For example, we can check if the first tuple from A and B will get blocked if we block on address.
End of explanation
"""
# repo: ComputationalModeling/spring-2017-danielak | past-semesters/spring_2016/day-by-day/day23-Econophysics/STUDENT-Notebook-for-Econophysics.ipynb | license: agpl-3.0
# Use Python to make a filled-in plot
# from the data that got reported out
"""
Explanation: Econophysics
Names of group members
// put your names here!
Goals of this assignment
Witness what we call "emergent behavior"; large patterns manifesting from the simple interactions of tiny agents
Develop a graphical way to show the dispersion of money across a society.
Create a working implementation of an econophysics game you'll design
Playing In For a Penny
Each Student Should Check Their Intuition
Before playing, take 60 seconds to think. When we start the game, say with 17 agents, here's what it could look like if we plotted how much money each agent had:
<img src="starting-money-in-for-a-penny.png" width=400 alt="Starting Money for Each Agent in In For a Penny">
Does That Plot Make Sense?
Each Student Should Do This Now
What do you think that graph will look like after one round of the game?
Why?
What do you think that graph will look like after many rounds of the game?
Why?
How much money do you expect to end up with? Don't share your predictions! It'll be more fun that way ;-)
Put Your Answers Here
// right here
Play the Game!
When you are finished, report how many pennies you have in each round in the following spreadsheet (use the "In for a Penny" sheet, listed at the bottom):
https://docs.google.com/spreadsheets/d/1PvX_IdjdDrKTdH6Ic5I6p_eLeveaGNkE45syjxJBgFA/edit?usp=sharing
Each Student Should Fill In This Plot
Use Python to create a bar plot like this in your own individual notebook.
<img src="blank_money_plot.png" width=400 alt="A blank plot of agent_id versus money that students should fill in">
End of explanation
"""
# Use Python to make a filled-in plot
# from the data that got reported out
"""
Explanation: In For A Pound
Play the Game!
When you are finished, report how many pennies you have in each round in the following spreadsheet (use the "In for a Pound" sheet, listed at the bottom):
https://docs.google.com/spreadsheets/d/1PvX_IdjdDrKTdH6Ic5I6p_eLeveaGNkE45syjxJBgFA/edit?usp=sharing
Each Student Should Fill In This Plot
Use Python to create a bar plot like this in your own individual notebook.
<img src="blank_money_plot.png" width=400 alt="A blank plot of agent_id versus money that students should fill in">
End of explanation
"""
# repo: fggp/ctcsound | cookbook/03-threading.ipynb | license: lgpl-2.1
import ctcsound
cs = ctcsound.Csound()
csd = '''
<CsoundSynthesizer>
<CsOptions>
-d -o dac -m0
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 100
nchnls = 2
0dbfs = 1
instr 1
idur = p3
iamp = p4
icps = cpspch(p5)
irise = p6
idec = p7
ipan = p8
kenv linen iamp, irise, idur, idec
kenv = kenv*kenv
asig poscil kenv, icps
a1, a2 pan2 asig, ipan
outs a1, a2
endin
</CsInstruments>
<CsScore>
f 0 14400 ; a 4 hours session should be enough
</CsScore>
</CsoundSynthesizer>
'''
cs.compileCsdText(csd)
cs.start()
"""
Explanation: Multithreading
In the preceding recipes, a single thread was running; this is the default way to use Python, due to the GIL (Global Interpreter Lock). The user can then interact with the Csound instance during the performance loop.
To use Csound in a more flexible way, one can use multithreading. Because of the GIL limitations, it is better to yield the multithread machinery through C libraries. When a C function is called from Python using ctypes, the GIL is released during the function call.
Csound has an helper class called CsoundPerformanceThread. When there is a running Csound instance, one can start a new thread by creating a new object of type CsoundPerformanceThread with a reference to the Csound instance as argument. Then, the main Python thread will run allowing the user to interract with it, while the performance thread will run concurrently in the C world, outside of the GIL. The user can send messages to the performance thread, each message being sent with a call to a C function through ctypes, releasing the GIL during the function call. Those messages can be: play(), pause(), togglePause(), stop(), record(), stopRecord(), scoreEvent(), inputMessage(), setScoreOffsetSeconds(), join(), or flushMessageQueue().
When a very long score is used, it is thus easy to implement a REPL (read-eval-print loop) system around Csound. This is illustrated in the following diagram:
So let's start a Csound instance from Python, with a four hours long score:
End of explanation
"""
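The GIL release described above can be checked without Csound at all: `time.sleep`, like a ctypes call into a C library, is a C-level wait that releases the GIL, so several Python threads can block concurrently. A minimal sketch:

```python
import threading
import time

def blocking_c_call():
    # Like a ctypes call into the Csound library, this C-level wait
    # releases the GIL, so other threads keep running meanwhile.
    time.sleep(0.2)

start = time.perf_counter()
threads = [threading.Thread(target=blocking_c_call) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# The four 0.2 s waits overlap instead of adding up to 0.8 s
print(f"elapsed: {elapsed:.2f} s")
```

If the GIL were held during the wait, the four threads would run one after another and `elapsed` would be close to 0.8 s.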
pt = ctcsound.CsoundPerformanceThread(cs.csound())
pt.play()
"""
Explanation: Then, let's start a new thread, passing the opaque pointer of the Csound instance as argument:
End of explanation
"""
pt.scoreEvent(False, 'i', (1, 0, 1, 0.5, 8.06, 0.05, 0.3, 0.5))
pt.scoreEvent(False, 'i', (1, 0.5, 1, 0.5, 9.06, 0.05, 0.3, 0.5))
"""
Explanation: Now, we can send messages to the performance thread:
End of explanation
"""
pt.stop()
pt.join()
cs.reset()
"""
Explanation: When we're done, we stop the performance thread and reset the csound instance:
End of explanation
"""
csd = '''
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
seed 0
instr 1
iPch random 60, 72
chnset iPch, "pch"
kPch init iPch
kNewPch chnget "new_pitch"
if kNewPch > 0 then
kPch = kNewPch
endif
aTone poscil .2, mtof(kPch)
out aTone, aTone
endin
</CsInstruments>
<CsScore>
i 1 0 600
</CsScore>
</CsoundSynthesizer>
'''
cs.compileCsdText(csd)
cs.start()
pt = ctcsound.CsoundPerformanceThread(cs.csound())
pt.play()
"""
Explanation: Note that we can still access the csound instance with other methods, like controlChannel() or setControlChannel():
End of explanation
"""
print(cs.controlChannel('pch'))
"""
Explanation: We can ask for the values in the Csound instance ...
End of explanation
"""
cs.setControlChannel('new_pitch',73)
"""
Explanation: ... or we can set our own values to the Csound instance:
End of explanation
"""
pt.stop()
pt.join()
cs.reset()
"""
Explanation: At the end, stop and reset as usual:
End of explanation
"""
# zzsza/TIL | python/crawling-google-play.ipynb | mit
import pandas as pd
import numpy as np  # needed below for np.squeeze
import seaborn as sns
import matplotlib.pyplot as plt
import json
import re
from bs4 import BeautifulSoup
import warnings
from konlpy.tag import Twitter
from sklearn.feature_extraction.text import CountVectorizer
warnings.filterwarnings('ignore')
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
df = pd.read_json("review-data.json")
df.head(2)
p = re.compile(r'\d+')
def parser(body):
bs = BeautifulSoup(body, 'html.parser')
user_name = bs.find('span', class_='X43Kjb').text
date = bs.find('span', class_='p2TkOb').text
rating = bs.find('div', {'role':'img'})['aria-label']
rating = p.findall(rating)[-1]
review_text = bs.find('span', {'jsname':'bN97Pc'}).text
return user_name, date, rating, review_text
%%time
df['user_name'], df['date'], df['rating'], df['review_text'] = zip(*df['body'].map(parser))
del df["body"]
df['date'] = pd.to_datetime(df['date'], format='%Y년 %m월 %d일')
df.head(2)
df = df.sort_values(by='date', ascending=False).reset_index(drop=True)
print("earliest :", min(df['date'].value_counts().index))
print("latest :", max(df['date'].value_counts().index))
sns.factorplot('rating',kind='count',data=df)
df['rating'].value_counts()
"""
Explanation: Data collection
This could be done in Python as well, but to move quickly the collection was done in the browser's developer console
Go to https://play.google.com/store/apps/details?id=com.rainist.banksalad2&showAllReviews=true and enter the commands below in the console
The reviews are ordered by helpfulness, so some recent reviews may be missing
A total of 1,400 reviews were collected (Android)
```
var reviews = document.querySelectorAll('div[class="d15Mdf bAhLNe"]')
var data = []
reviews.forEach(v => data.push({body: v.outerHTML}))
```
- Ratings, user names, and review texts could be fetched separately, but it is easier to grab the whole HTML in one shot and preprocess it in Python
- Enter the list in the console, right-click it, and choose Store as global variable
- Running copy(temp1) puts the data on the clipboard
- Save it as review-data.json
End of explanation
"""
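As a quick sanity check of the `p.findall(rating)[-1]` trick used in `parser`: the star widget's `aria-label` contains the given rating as the last number in the text. The sample label below is an assumption about the Play Store markup at the time:

```python
import re

p = re.compile(r'\d+')
# Hypothetical aria-label text, as rendered by the Play Store at the time
label = '별점: 5개 만점에 4개를 받았습니다.'
# The label contains the maximum (5) and the given rating (4);
# the last number found is the rating the user actually gave
rating = p.findall(label)[-1]
print(rating)  # '4'
```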
low_rate_review = df[df['rating'] <= '3']['review_text']  # rating is still a string; single-digit '1'-'5' compare correctly
len(low_rate_review)
low_rate_review[:10]
"""
Explanation: Reviews from 2017-12-21 to 2018-08-19 are rated very highly
5-star ratings dominate
Let's look at the reviews from users who gave 3 stars or fewer
End of explanation
"""
low_rate_review = low_rate_review.apply(lambda x:re.sub('[^가-힣\s\d]',"",x))
low_rate_review[:10]
"""
Explanation: Preprocessing: keep only Hangul characters, whitespace, and digits; strip everything else
End of explanation
"""
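What the character class `[^가-힣\s\d]` removes can be seen on a made-up review (the sample text is hypothetical): Latin letters, punctuation, and jamo such as ㅠ fall outside the 가-힣 syllable range and are stripped, while complete Hangul syllables, whitespace, and digits survive.

```python
import re

sample = 'UI가 너무 느려요ㅠㅠ!! (update 후)'  # hypothetical review text
# Remove everything that is not a Hangul syllable, whitespace, or a digit
cleaned = re.sub(r'[^가-힣\s\d]', '', sample)
print(cleaned)
```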
tagger = Twitter()
def get_word(sentence):
nouns = tagger.nouns(sentence)
return [noun for noun in nouns if len(noun) > 1]
cv = CountVectorizer(tokenizer=get_word, max_features=300)
tdf = cv.fit_transform(low_rate_review)
words = cv.get_feature_names()
words[:5]
count_mat = tdf.sum(axis=0)
count_mat
count = np.squeeze(np.asarray(count_mat))
word_count = list(zip(words, count))
word_count = sorted(word_count, key=lambda t:t[1], reverse=True)
word_count[:15]
"""
Explanation: Natural language processing
Only a simple word count is done here
Single-character nouns are excluded (unlikely in reviews, but filtered just in case)
End of explanation
"""
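The CountVectorizer pipeline above boils down to counting tokenized nouns. The same idea can be sketched with the standard library alone; the whitespace tokenizer here is a stand-in assumption for `Twitter().nouns`, and the reviews are toy data:

```python
from collections import Counter

def get_word(sentence):
    # Stand-in for the KoNLPy noun extractor: split on whitespace and,
    # as in the notebook, keep only tokens longer than one character.
    return [w for w in sentence.split() if len(w) > 1]

reviews = ['앱 너무 느려요 느려요', '연동 오류 자주 나요']  # toy data
word_count = Counter(w for r in reviews for w in get_word(r))
print(word_count.most_common(3))
```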
# ES-DOC/esdoc-jupyterhub | notebooks/miroc/cmip6/models/miroc-es2h/ocean.ipynb | gpl-3.0
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'miroc-es2h', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: MIROC
Source ID: MIROC-ES2H
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momemtum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momemtum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from active ? If so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embedded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for the sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from the atmosphere in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
quantopian/research_public | notebooks/lectures/Spearman_Rank_Correlation/answers/notebook.ipynb | apache-2.0 | import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import math
"""
Explanation: Exercises: Spearman Rank Correlation
Lecture Link
This exercise notebook refers to this lecture. Please use the lecture for explanations and sample code.
https://www.quantopian.com/lectures#Spearman-Rank-Correlation
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
End of explanation
"""
n = 100
x = np.linspace(1, n, n)
y = x**5
#Your code goes here
corr = np.corrcoef(x, y)[1][0]
print corr
plt.plot(x, y);
"""
Explanation: Exercise 1: Finding Correlations of Non-Linear Relationships
a. Traditional (Pearson) Correlation
Find the correlation coefficient for the relationship between x and y.
End of explanation
"""
#Your code goes here
xrank = stats.rankdata(x, method='average')
yrank = stats.rankdata(y, method='average')
diffs = xrank - yrank
spr_corr = 1 - 6*np.sum( diffs*diffs )/( n*( n**2 - 1 ) )
print "Because the ranks of the two data sets are perfectly correlated,\
the relationship between x and y has a Spearman rank correlation coefficient of", spr_corr
"""
Explanation: b. Spearman Rank Correlation
Find the Spearman rank correlation coefficient for the relationship between x and y using the stats.rankdata function and the formula
$$r_S = 1 - \frac{6 \sum_{i=1}^n d_i^2}{n(n^2 - 1)}$$
where $d_i$ is the difference in rank of the ith pair of x and y values.
End of explanation
"""
# Your code goes here
stats.spearmanr(x, y)
"""
Explanation: Check your results against scipy's Spearman rank function, stats.spearmanr.
End of explanation
"""
n = 100
a = np.random.normal(0, 1, n)
#Your code goes here
b = [0] + list(a[:(n-1)])
results = stats.spearmanr(a, b)
print "Despite the underlying relationship being a perfect correlation,\
the one-step lag led to a Spearman rank correlation coefficient of\n", results.correlation, \
", meaning the test failed to detect the strong relationship."
"""
Explanation: Exercise 2: Limitations of Spearman Rank Correlation
a. Lagged Relationships
First, create a series b that is identical to a but lagged one step (b[i] = a[i-1]). Then, find the Spearman rank correlation coefficient of the relationship between a and b.
End of explanation
"""
n = 100
c = np.random.normal(0, 2, n)
#Your code goes here
d = 10*c**2 - c + 2
results = stats.spearmanr(c, d)
print "Despite an exact underlying relationship of d = 10c^2 - c + 2,\
the non-monotonic nature of the relationship led to a Spearman rank correlation coefficient of", \
results.correlation, ", meaning the test failed to detect the relationship."
plt.scatter(c, d);
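# Follow-up check (add-on): on a branch where the parabola is monotonic, the
# rank correlation is perfect. For c > 0.1 the derivative 20c - 1 is positive,
# so d = 10c^2 - c + 2 is strictly increasing there. Standalone data with its
# own seed so this cell does not depend on the one above.
import numpy as np
import scipy.stats as stats
np.random.seed(0)
c2 = np.random.normal(0, 2, 100)
d2 = 10*c2**2 - c2 + 2
mask = c2 > 0.1
print(stats.spearmanr(c2[mask], d2[mask]).correlation)   # exactly 1.0 on this branch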
"""
Explanation: b. Non-Monotonic Relationships
First, create a series d using the relationship $d=10c^2 - c + 2$. Then, find the Spearman rank correlation coefficient of the relationship between c and d.
End of explanation
"""
#Pipeline Setup
from quantopian.research import run_pipeline
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import CustomFactor, Returns, RollingLinearRegressionOfReturns
from quantopian.pipeline.classifiers.morningstar import Sector
from quantopian.pipeline.filters import QTradableStocksUS
from time import time
#MyFactor is our custom factor, based off of asset price momentum
class MyFactor(CustomFactor):
""" Momentum factor """
inputs = [USEquityPricing.close]
window_length = 60
def compute(self, today, assets, out, close):
out[:] = close[-1]/close[0]
universe = QTradableStocksUS()
pipe = Pipeline(
columns = {
'MyFactor' : MyFactor(mask=universe),
},
screen=universe
)
start_timer = time()
results = run_pipeline(pipe, '2015-01-01', '2015-06-01')
end_timer = time()
results.fillna(value=0);
print "Time to run pipeline %.2f secs" % (end_timer - start_timer)
my_factor = results['MyFactor']
n = len(my_factor)
asset_list = results.index.levels[1].unique()
prices_df = get_pricing(asset_list, start_date='2015-01-01', end_date='2016-01-01', fields='price')
# Compute 10-day forward returns, then shift the dataframe back by 10
forward_returns_df = prices_df.pct_change(10).shift(-10)
# The first trading day is actually 2015-1-2
single_day_factor_values = my_factor['2015-1-2']
# Because prices are indexed over the total time period, while the factor values dataframe
# has a dynamic universe that excludes hard to trade stocks, each day there may be assets in
# the returns dataframe that are not present in the factor values dataframe. We have to filter down
# as a result.
single_day_forward_returns = forward_returns_df.loc['2015-1-2'][single_day_factor_values.index]
#Your code goes here
r = stats.spearmanr(single_day_factor_values,
single_day_forward_returns)
print "A Spearman rank rorrelation test yielded a coefficient of %s" %(r.correlation)
"""
Explanation: Exercise 3: Real World Example
a. Factor and Forward Returns
Here we'll define a simple momentum factor (model). To evaluate it we'd need to look at how its predictions correlate with future returns over many days. We'll start by just evaluating the Spearman rank correlation between our factor values and forward returns on just one day.
Compute the Spearman rank correlation between factor values and 10 trading day forward returns on 2015-1-2.
For help on the pipeline API, see this tutorial: https://www.quantopian.com/tutorials/pipeline
End of explanation
"""
rolling_corr = pd.Series(index=None, data=None)
#Your code goes here
for dt in prices_df.index[:60]:
# The first trading day is actually 2015-1-2
single_day_factor_values = my_factor[dt]
# Because prices are indexed over the total time period, while the factor values dataframe
# has a dynamic universe that excludes hard to trade stocks, each day there may be assets in
# the returns dataframe that are not present in the factor values dataframe. We have to filter down
# as a result.
single_day_forward_returns = forward_returns_df.loc[dt][single_day_factor_values.index]
rolling_corr[dt] = stats.spearmanr(single_day_factor_values,
single_day_forward_returns).correlation
"""
Explanation: b. Rolling Spearman Rank Correlation
Repeat the above correlation for the first 60 days in the dataframe as opposed to just a single day. You should get a time series of Spearman rank correlations. From this we can start getting a better sense of how the factor correlates with forward returns.
What we're driving towards is known as an information coefficient. This is a very common way of measuring how predictive a model is. All of this plus much more is automated in our open source alphalens library. In order to see alphalens in action you can check out these resources:
A basic tutorial:
https://www.quantopian.com/tutorials/getting-started#lesson4
An in-depth lecture:
https://www.quantopian.com/lectures/factor-analysis
End of explanation
"""
# Your code goes here
print 'Spearman rank correlation mean: %s' %(np.mean(rolling_corr))
print 'Spearman rank correlation std: %s' %(np.std(rolling_corr))
plt.plot(rolling_corr);
"""
Explanation: c. Rolling Spearman Rank Correlation
Plot out the rolling correlation as a time series, and compute the mean and standard deviation.
End of explanation
"""
|
seblabbe/MATH2010-Logiciels-mathematiques | Devoirs/devoir2-solutions.ipynb | gpl-3.0 | def somme(A, B):
C = []
for i in range(4):
Ai = A[i]
Bi = B[i]
row = [Ai[j]+Bi[j] for j in range(4)]
C.append(row)
return C
X = [[56, 39, 3, 41],
[23, 78, 11, 62],
[61, 26, 65, 51],
[80, 98, 9, 68]]
Y = [[51, 52, 53, 15],
[ 1, 71, 46, 31],
[99, 7, 92, 12],
[15, 43, 36, 51]]
somme(X, Y)
"""
Explanation: Question 1
End of explanation
"""
from sympy import Matrix
Mx = Matrix(X)
My = Matrix(Y)
Mx + My
"""
Explanation: We verify using sympy that the computation is correct:
End of explanation
"""
def produit(A, B):
C = []
for i in range(4):
row = []
for j in range(4):
row.append(sum(A[i][k]*B[k][j] for k in range(4)))
C.append(row)
return C
produit(X,Y)
"""
Explanation: Question 2
End of explanation
"""
Mx * My
"""
Explanation: We verify using sympy that this computation is correct:
End of explanation
"""
from math import sqrt
def est_premier(n):
if n == 0 or n == 1:
return False
for i in range(2, int(sqrt(n))+1):
if n % i == 0:
return False
return True
"""
Explanation: Question 3
End of explanation
"""
for i in range(100):
if est_premier(i):
print(i, end=', ')
"""
Explanation: We verify that the est_premier function works correctly:
End of explanation
"""
def triplets_nombre_premier(n):
L = []
p = 3
while len(L) < n:
if est_premier(p) and est_premier(p+6):
if est_premier(p+2):
L.append((p, p+2, p+6))
elif est_premier(p+4):
L.append((p, p+4, p+6))
p += 2
return L
triplets_nombre_premier(10)
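# Sanity check (add-on): no quadruple p, p+2, p+4, p+6 of primes exists,
# because modulo 3 the four values cover every residue, so one of them is
# divisible by 3. A standalone copy of the primality test is used here so
# that this cell runs on its own.
from math import sqrt

def _est_premier(n):
    if n < 2:
        return False
    return all(n % i for i in range(2, int(sqrt(n)) + 1))

quadruples = [p for p in range(2, 10000)
              if all(_est_premier(p + k) for k in (0, 2, 4, 6))]
print(quadruples)   # [] -- no such quadruple below 10000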
"""
Explanation: First, note that there is no quadruple of prime numbers $p$, $p+2$, $p+4$, $p+6$, because one of them must be divisible by three (indeed, modulo 3 these values are $p$, $p+2$, $p+1$ and $p$, which cover every residue). Since 3 is the only prime number divisible by three, such a quadruple would have to contain the number 3. But no such quadruple exists (3, 5, 7, 9 fails because 9 is not prime).
End of explanation
"""
def triplets_pythagore(n):
L = []
for c in range(1, n+1):
for b in range(1, c+1):
for a in range(1, b+1):
if a**2+b**2 == c**2:
L.append((a,b,c))
return L
triplets_pythagore(30)
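# Aside (add-on): Euclid's formula gives a classical way to generate
# Pythagorean triples directly, for integers m > n > 0.
def euclid_triple(m, n):
    return (m*m - n*n, 2*m*n, m*m + n*n)

a, b, c = euclid_triple(2, 1)
print((a, b, c), a*a + b*b == c*c)   # (3, 4, 5) True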
"""
Explanation: Question 4
End of explanation
"""
|
amitkaps/weed | 4-Model.ipynb | mit | # Load the libraries
import numpy as np
import pandas as pd
from scipy import stats
from sklearn import linear_model
# Load the data again!
df = pd.read_csv("data/Weed_Price.csv", parse_dates=[-1])
df.sort(columns=['State','date'], inplace=True)
df1 = df[df.State=="California"].copy()
df1.set_index("date", inplace=True)
print df1.shape
idx = pd.date_range(df1.index.min(), df1.index.max())
df1 = df1.reindex(idx)
df1.fillna(method = "ffill", inplace=True)
print df1.shape
df1.head()
#Reading demographics data
demographics = pd.DataFrame.from_csv("data/Demographics_State.csv",header=0,index_col=False,sep=',')
demographics.rename(columns={'region':'State'}, inplace=True)
demographics.head()
df['State'] = df['State'].str.lower()
df.head()
df_demo = pd.merge(df, demographics, how="inner", on="State")
df_demo.head()
"""
Explanation: 4-Model the Data
"All models are wrong, Some of them are useful"
The power and limits of models
Tradeoff between Prediction Accuracy and Model Interpretability
Assessing Model Accuracy
Regression models (Simple, Multiple)
Classification model
End of explanation
"""
corr_bw_percapita_highq = stats.pearsonr(df_demo.per_capita_income, df_demo.HighQ)[0]
print corr_bw_percapita_highq
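# What stats.pearsonr computes, spelled out (illustrative sketch on
# standalone toy data, not the weed prices above):
import numpy as np

def pearson(u, v):
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    # sum of co-deviations divided by n * std(u) * std(v)
    return np.sum((u - u.mean()) * (v - v.mean())) / (len(u) * u.std() * v.std())

print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0 for a perfectly linear relation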
"""
Explanation: Correlation
End of explanation
"""
state_location = pd.read_csv("data/State_Location.csv")
state_location.head()
pd.unique(state_location.status)
"""
Explanation: Exercise: Find the correlation between percent_white and HighQ
Impact of de-regulation
End of explanation
"""
df['year'] = pd.DatetimeIndex(df['date']).year
df['month'] = pd.DatetimeIndex(df['date']).month
df['week'] = pd.DatetimeIndex(df['date']).week
df['weekday'] = pd.DatetimeIndex(df['date']).weekday
df_demo_ca = df_demo[df_demo.State=="california"].copy()
df_demo_ca['year'] = pd.DatetimeIndex(df_demo_ca['date']).year
df_demo_ca['month'] = pd.DatetimeIndex(df_demo_ca['date']).month
df_demo_ca['week'] = pd.DatetimeIndex(df_demo_ca['date']).week
df_demo_ca['weekday'] = pd.DatetimeIndex(df_demo_ca['date']).weekday
df_demo_ca.head()
df_demo_ca.groupby("weekday").HighQ.mean()
"""
Explanation: Exercise: Find the mean prices of HighQ weed for states that are legal and for states that are illegal
Finding a good time of the week to buy weed in California
End of explanation
"""
df.groupby(["State", "weekday"]).HighQ.mean()
df_st_wk = df.groupby(["State", "weekday"]).HighQ.mean()
df_st_wk.reset_index()
#Answer:
"""
Explanation: Exercise: If I need to buy weed on a Wednesday, which state should I be in?
End of explanation
"""
model_data = df1.loc[:,['HighQ']].copy()
idx = pd.date_range(model_data.index.min(), model_data.index.max()+ 30)
model_data.reset_index(inplace=True)
model_data.set_index("index", inplace=True)
model_data = model_data.reindex(idx)
model_data.tail(35)
model_data['IND'] = np.arange(model_data.shape[0])
model_data.tail(35)
model_data['IND_SQ'] = model_data['IND']**2
x = model_data.ix[0:532, ["IND","IND_SQ"]]
y = model_data.ix[0:532, "HighQ"]
x_test = model_data.ix[532:, ["IND","IND_SQ"]]
print x.shape, y.shape
ols = linear_model.LinearRegression(fit_intercept=True)
ols.fit(x, y)
ols_predict = ols.predict(x_test)
ols_predict
ols.coef_
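# The same kind of quadratic-trend fit, written as a plain least-squares
# problem (illustrative sketch on standalone toy data, not the price series):
import numpy as np
t = np.arange(10.0)
y_toy = 3 + 2*t + 0.5*t**2
X_toy = np.column_stack([np.ones_like(t), t, t**2])   # [intercept, IND, IND_SQ]
beta = np.linalg.lstsq(X_toy, y_toy, rcond=None)[0]
print(beta)   # recovers [3, 2, 0.5]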
"""
Explanation: Regression
Predicting price of HighQ weed in CA
End of explanation
"""
|
jakobrunge/tigramite | tutorials/tigramite_tutorial_basics.ipynb | gpl-3.0 | # Imports
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
## use `%matplotlib notebook` for interactive figures
# plt.style.use('ggplot')
import sklearn
import tigramite
from tigramite import data_processing as pp
from tigramite.toymodels import structural_causal_processes as toys
from tigramite import plotting as tp
from tigramite.pcmci import PCMCI
from tigramite.independence_tests import ParCorr, GPDC, CMIknn, CMIsymb
"""
Explanation: Causal discovery with TIGRAMITE
TIGRAMITE is a time series analysis Python module. It allows one to reconstruct graphical models (conditional independence graphs) from discrete or continuously-valued time series based on the PCMCI framework, and to create high-quality plots of the results.
This tutorial explains the main features in walk-through examples. It covers:
Basic usage
Plotting
Nonlinear conditional independence tests
Symbolic time series
PCMCI is described here:
J. Runge, P. Nowack, M. Kretschmer, S. Flaxman, D. Sejdinovic,
Detecting and quantifying causal associations in large nonlinear time series datasets. Sci. Adv. 5, eaau4996 (2019)
https://advances.sciencemag.org/content/5/11/eaau4996
For further versions of PCMCI (e.g., PCMCI+, LPCMCI, etc.), see the corresponding tutorials.
See the following paper for theoretical background:
Runge, Jakob. 2018. “Causal Network Reconstruction from Time Series: From Theoretical Assumptions to Practical Estimation.” Chaos: An Interdisciplinary Journal of Nonlinear Science 28 (7): 075310.
Last, the following Nature Communications Perspective paper provides an overview of causal inference methods in general, identifies promising applications, and discusses methodological challenges (exemplified in Earth system sciences):
https://www.nature.com/articles/s41467-019-10105-3
1. Basic usage
End of explanation
"""
np.random.seed(42) # Fix random seed
links_coeffs = {0: [((0, -1), 0.7), ((1, -1), -0.8)],
1: [((1, -1), 0.8), ((3, -1), 0.8)],
2: [((2, -1), 0.5), ((1, -2), 0.5), ((3, -3), 0.6)],
3: [((3, -1), 0.4)],
}
T = 1000 # time series length
data, true_parents_neighbors = toys.var_process(links_coeffs, T=T)
T, N = data.shape
# Initialize dataframe object, specify time axis and variable names
var_names = [r'$X^0$', r'$X^1$', r'$X^2$', r'$X^3$']
dataframe = pp.DataFrame(data,
datatime = np.arange(len(data)),
var_names=var_names)
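# Cross-check (add-on sketch): the same VAR process written out with plain
# numpy, to make explicit what toys.var_process generates from links_coeffs.
# Names T2/X2 are local to this sketch.
import numpy as np
rng = np.random.RandomState(0)
T2 = 1000
X2 = np.zeros((T2, 4))
for t in range(3, T2):
    eta = rng.standard_normal(4)
    X2[t, 0] = 0.7*X2[t-1, 0] - 0.8*X2[t-1, 1] + eta[0]
    X2[t, 1] = 0.8*X2[t-1, 1] + 0.8*X2[t-1, 3] + eta[1]
    X2[t, 2] = 0.5*X2[t-1, 2] + 0.5*X2[t-2, 1] + 0.6*X2[t-3, 3] + eta[2]
    X2[t, 3] = 0.4*X2[t-1, 3] + eta[3]
print(X2.shape)   # (1000, 4)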
"""
Explanation: Consider time series coming from a data generating process
\begin{align}
X^0_t &= 0.7 X^0_{t-1} - 0.8 X^1_{t-1} + \eta^0_t\
X^1_t &= 0.8 X^1_{t-1} + 0.8 X^3_{t-1} + \eta^1_t\
X^2_t &= 0.5 X^2_{t-1} + 0.5 X^1_{t-2} + 0.6 X^3_{t-3} + \eta^2_t\
X^3_t &= 0.7 X^3_{t-1} + \eta^3_t\
\end{align}
where $\eta$ are independent zero-mean unit variance random variables. Our goal is to reconstruct the drivers of each variable. In Tigramite such a process can be generated with the function toys.var_process.
End of explanation
"""
tp.plot_timeseries(dataframe); plt.show()
"""
Explanation: First, we plot the time series. This can be done with the function tp.plot_timeseries
End of explanation
"""
parcorr = ParCorr(significance='analytic')
pcmci = PCMCI(
dataframe=dataframe,
cond_ind_test=parcorr,
verbosity=1)
"""
Explanation: It's stationary and doesn't contain missing values (covered in other tutorial). Next, we choose a conditional independence test, here we start with ParCorr implementing linear partial correlation. With significance='analytic' the null distribution is assumed to be Student's $t$. Then we initialize the PCMCI method with dataframe, and cond_ind_test:
End of explanation
"""
correlations = pcmci.get_lagged_dependencies(tau_max=20, val_only=True)['val_matrix']
lag_func_matrix = tp.plot_lagfuncs(val_matrix=correlations, setup_args={'var_names':var_names,
'x_base':5, 'y_base':.5}); plt.show()
"""
Explanation: Before running the causal algorithm, it's a good idea to plot the lagged unconditional dependencies, e.g., the lagged correlations. This can help to identify which maximal time lag tau_max to choose in the causal algorithm.
End of explanation
"""
scatter_lags = np.argmax(np.abs(correlations), axis=2)
tp.plot_scatterplots(dataframe=dataframe, add_scatterplot_args={'scatter_lags':scatter_lags}); plt.show()
"""
Explanation: Let's do another check and use the new plot_scatterplots function to see whether the dependencies are really linear and ParCorr is the right conditional independence test. With the argument scatter_lags set to a (N, N) integer numpy array you can choose which lag to use for every pair of variables. Here we choose the lag at which the correlations above have their maximal absolute value. Of course, you might want to use a nonlinear conditional independence test to assess the lags with maximum dependency. I.e., run pcmci.get_lagged_dependencies with PCMCI initialized with a nonlinear measure (e.g., CMIknn or GPDC as introduced below).
End of explanation
"""
pcmci.verbosity = 1
results = pcmci.run_pcmci(tau_max=8, pc_alpha=None, alpha_level=0.01)
"""
Explanation: Since the dependencies in the lag function plot decay beyond a maximum lag of around 8, we choose tau_max=8 for PCMCI. The other main parameter is pc_alpha which sets the significance level in the condition-selection step. Here we let PCMCI choose the optimal value by setting it to pc_alpha=None. Then PCMCI will optimize this parameter in the ParCorr case by the Akaike Information criterion among a reasonable default list of values (e.g., pc_alpha = [0.05, 0.1, 0.2, 0.3, 0.4, 0.5]). The parameter alpha_level=0.01 indicates that we threshold the resulting p-value matrix at this significance level to obtain the graph.
End of explanation
"""
print("p-values")
print (results['p_matrix'].round(3))
print("MCI partial correlations")
print (results['val_matrix'].round(2))
"""
Explanation: As you can see from the output, PCMCI selected different pc_alpha for each variable. The result of run_pcmci is a dictionary containing the matrix of p-values, the matrix of test statistic values (here MCI partial correlations) and optionally its confidence bounds (can be specified upon initializing ParCorr), and the graph matrix. p_matrix and val_matrix are of shape (N, N, tau_max+1) with entry (i, j, \tau) denoting the test for the link $X^i_{t-\tau} \to X^j_t$. The MCI values for $\tau=0$ do not exclude other contemporaneous effects, only past variables are conditioned upon. The graph array of the same shape is obtained from thresholding the p_matrix at the specified alpha_level. It is a string array and denotes significant lagged causal links by --> and contemporaneous links (where the orientation cannot be determined with PCMCI) by o-o. With the PCMCIplus method also contemporaneous links can be oriented.
Note: The test statistic values (e.g., partial correlation) may give a qualitative intuition of the strength of a dependency, but for a proper causal effect analysis please refer to the CausalEffects class and tutorial.
End of explanation
"""
q_matrix = pcmci.get_corrected_pvalues(p_matrix=results['p_matrix'], tau_max=8, fdr_method='fdr_bh')
pcmci.print_significant_links(
p_matrix = q_matrix,
val_matrix = results['val_matrix'],
alpha_level = 0.01)
graph = pcmci.get_graph_from_pmatrix(p_matrix=q_matrix, alpha_level=0.01,
tau_min=0, tau_max=8, selected_links=None)
results['graph'] = graph
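# For intuition (add-on sketch): the core of Benjamini-Hochberg FDR control,
# which is what fdr_method='fdr_bh' applies jointly to all p-values.
import numpy as np

def bh_reject(pvals, alpha=0.05):
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # compare the k-th smallest p-value against k/m * alpha
    below = p[order] <= alpha * (np.arange(1, m + 1) / m)
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

print(bh_reject([0.001, 0.02, 0.03, 0.5]))   # [ True  True  True False]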
"""
Explanation: If we want to control for the $N^2 \tau_\max$ tests conducted here, we can further correct the p-values, e.g., by False Discovery Rate (FDR) control yielding the q_matrix. The graph can then be updated with that adjusted p_matrix and a different alpha_level using get_graph_from_pmatrix().
End of explanation
"""
tp.plot_graph(
val_matrix=results['val_matrix'],
graph=results['graph'],
var_names=var_names,
link_colorbar_label='cross-MCI',
node_colorbar_label='auto-MCI',
); plt.show()
# Plot time series graph
tp.plot_time_series_graph(
figsize=(6, 4),
val_matrix=results['val_matrix'],
graph=results['graph'],
var_names=var_names,
link_colorbar_label='MCI',
); plt.show()
"""
Explanation: 2. Plotting
Tigramite offers several plotting options: The lag function matrix (as shown above), the time series graph, and the process graph which aggregates the information in the time series graph. Both take as arguments the graph array and optionally the val_matrix and further link attributes.
In the process graph, the node color denotes the auto-MCI value and the link colors the cross-MCI value. If links occur at multiple lags between two variables, the link color denotes the strongest one and the label lists all significant lags in order of their strength.
End of explanation
"""
np.random.seed(1)
data = np.random.randn(500, 3)
for t in range(1, 500):
data[t, 0] += 0.4*data[t-1, 1]**2
data[t, 2] += 0.3*data[t-2, 1]**2
var_names = [r'$X^0$', r'$X^1$', r'$X^2$']
dataframe = pp.DataFrame(data, var_names=var_names)
tp.plot_timeseries(dataframe); plt.show()
pcmci_parcorr = PCMCI(
dataframe=dataframe,
cond_ind_test=parcorr,
verbosity=1)
results = pcmci_parcorr.run_pcmci(tau_max=2, pc_alpha=0.2, alpha_level = 0.01)
"""
Explanation: While the process graph is nicer to look at, the time series graph better represents the spatio-temporal dependency structure from which causal pathways can be read off. You can adjust the size and aspect ratio of nodes with node_size and node_aspect parameters, and also modify many other properties, see the parameters of plot_graph and plot_time_series_graph.
3. Nonlinear conditional independence tests
If nonlinear dependencies are present, it is advisable to use a nonparametric test. Consider the following model:
\begin{align}
X^0_t &= 0.4 (X^1_{t-1})^2 + \eta^0_t\
X^1_t &= \eta^1_t \
X^2_t &= 0.3 (X^1_{t-2})^2 + \eta^2_t
\end{align}
End of explanation
"""
gpdc = GPDC(significance='analytic', gp_params=None)
pcmci_gpdc = PCMCI(
dataframe=dataframe,
cond_ind_test=gpdc,
verbosity=0)
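# Toy illustration of the idea behind GPDC (add-on sketch, not tigramite's
# internal code): to test "X independent of Y given Z", regress X on Z and
# Y on Z, then check whether the residuals still depend on each other. A
# simple polynomial fit stands in here for the (far more flexible) GP.
import numpy as np
rng = np.random.RandomState(0)
z = rng.randn(200)
x = z**2 + 0.1*rng.randn(200)    # X depends on Z only
y = z**2 + 0.1*rng.randn(200)    # Y depends on Z only
res_x = x - np.polyval(np.polyfit(z, x, 3), z)
res_y = y - np.polyval(np.polyfit(z, y, 3), z)
print(np.corrcoef(x, y)[0, 1], np.corrcoef(res_x, res_y)[0, 1])
# x and y are strongly correlated through the common driver z, but after
# regressing z out the residuals are (close to) uncorrelated.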
"""
Explanation: ParCorr here fails in two ways: (1) It cannot detect the two nonlinear links, (2) it wrongly detects a link $X^0_{t-1} \to X^2_t$ because it also cannot condition out a nonlinear dependency.
GPDC
Tigramite covers nonlinear additive dependencies with a test based on Gaussian process regression and a distance correlation (GPDC) on the residuals. For GPDC no analytical null distribution of the distance correlation (DC) is available. For significance testing, Tigramite with the parameter significance = 'analytic' pre-computes the distribution for each sample size (stored in memory), thereby avoiding computationally expensive permutation tests for each conditional independence test (significance = 'shuffle_test'). GP regression is performed with sklearn default parameters, except for the kernel which here defaults to the radial basis function + a white kernel (both hyperparameters are internally optimized) and the assumed noise level alpha which is set to zero since we added a white kernel. These and other parameters can be set via the gp_params dictionary. See the documentation in sklearn for further discussion. There also exists a module (gpdc_torch.py) which exploits gpytorch for faster computations on GPUs.
End of explanation
"""
results = pcmci_gpdc.run_pcmci(tau_max=2, pc_alpha=0.1, alpha_level = 0.01)
"""
Explanation: In contrast to ParCorr, the nonlinear links are correctly detected with GPDC:
End of explanation
"""
array, _, _ = gpdc._get_array(X=[(0, -1)], Y=[(2, 0)], Z=[(1, -2)], tau_max=2)
x, meanx = gpdc._get_single_residuals(array, target_var=0, return_means=True)
y, meany = gpdc._get_single_residuals(array, target_var=1, return_means=True)
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(8,3))
axes[0].scatter(array[2], array[0], color='grey')
axes[0].scatter(array[2], meanx, color='black')
axes[0].set_title("GP of %s on %s" % (var_names[0], var_names[1]) )
axes[0].set_xlabel(var_names[1]); axes[0].set_ylabel(var_names[0])
axes[1].scatter(array[2], array[1], color='grey')
axes[1].scatter(array[2], meany, color='black')
axes[1].set_title("GP of %s on %s" % (var_names[2], var_names[1]) )
axes[1].set_xlabel(var_names[1]); axes[1].set_ylabel(var_names[2])
axes[2].scatter(x, y, color='red')
axes[2].set_title("DC of residuals:" "\n val=%.3f / p-val=%.3f" % (gpdc.run_test(
X=[(0, -1)], Y=[(2, 0)], Z=[(1, -2)], tau_max=2)) )
axes[2].set_xlabel("resid. "+var_names[0]); axes[2].set_ylabel("resid. "+var_names[2])
plt.tight_layout()
"""
Explanation: As a short excursion, we can see how GPDC works looking at the scatter plots:
End of explanation
"""
np.random.seed(42)
data = np.random.randn(500, 3)
for t in range(1, 500):
data[t, 0] *= 0.2*data[t-1, 1]
data[t, 2] *= 0.3*data[t-2, 1]
dataframe = pp.DataFrame(data, var_names=var_names)
tp.plot_timeseries(dataframe); plt.show()
"""
Explanation: Let's look at some even more nonlinear dependencies in a model with multiplicative noise:
End of explanation
"""
pcmci_gpdc = PCMCI(
dataframe=dataframe,
cond_ind_test=gpdc)
results = pcmci_gpdc.run_pcmci(tau_max=2, pc_alpha=0.1, alpha_level = 0.01)
"""
Explanation: Since multiplicative noise violates the assumption of additive dependencies underlying GPDC, the spurious link $X^0_{t-1} \to X^2_t$ is wrongly detected because it cannot be conditioned out. In contrast to ParCorr, however, the two true links are detected because DC detects any kind of dependency:
End of explanation
"""
array, _, _ = gpdc._get_array(X=[(0, -1)], Y=[(2, 0)], Z=[(1, -2)], tau_max=2)
x, meanx = gpdc._get_single_residuals(array, target_var=0, return_means=True)
y, meany = gpdc._get_single_residuals(array, target_var=1, return_means=True)
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(8,3))
axes[0].scatter(array[2], array[0], color='grey')
axes[0].scatter(array[2], meanx, color='black')
axes[0].set_title("GP of %s on %s" % (var_names[0], var_names[1]) )
axes[0].set_xlabel(var_names[1]); axes[0].set_ylabel(var_names[0])
axes[1].scatter(array[2], array[1], color='grey')
axes[1].scatter(array[2], meany, color='black')
axes[1].set_title("GP of %s on %s" % (var_names[2], var_names[1]) )
axes[1].set_xlabel(var_names[1]); axes[1].set_ylabel(var_names[2])
axes[2].scatter(x, y, color='red', alpha=0.3)
axes[2].set_title("DC of residuals:" "\n val=%.3f / p-val=%.3f" % (gpdc.run_test(
X=[(0, -1)], Y=[(2, 0)], Z=[(1, -2)], tau_max=2)) )
axes[2].set_xlabel("resid. "+var_names[0]); axes[2].set_ylabel("resid. "+var_names[2])
plt.tight_layout()
"""
Explanation: Here we can see in the scatter plot, that the Gaussian Process cannot fit the dependencies and the residuals are, thus, not independent.
End of explanation
"""
cmi_knn = CMIknn(significance='shuffle_test', knn=0.1, shuffle_neighbors=5, transform='ranks')
pcmci_cmi_knn = PCMCI(
dataframe=dataframe,
cond_ind_test=cmi_knn,
verbosity=2)
results = pcmci_cmi_knn.run_pcmci(tau_max=2, pc_alpha=0.05, alpha_level = 0.01)
## Significant links at alpha = 0.01:
# Variable $X^0$ has 1 link(s):
# ($X^1$ -1): pval = 0.00000 | val = 0.284
# Variable $X^1$ has 0 link(s):
# Variable $X^2$ has 1 link(s):
# ($X^1$ -2): pval = 0.00000 | val = 0.242
tp.plot_graph(
val_matrix=results['val_matrix'],
graph=results['graph'],
var_names=var_names,
link_colorbar_label='cross-MCI',
node_colorbar_label='auto-MCI',
vmin_edges=0.,
vmax_edges = 0.3,
edge_ticks=0.05,
cmap_edges='OrRd',
vmin_nodes=0,
vmax_nodes=.5,
node_ticks=.1,
cmap_nodes='OrRd',
); plt.show()
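# For intuition (add-on sketch, deliberately simplified): the essence of a
# shuffle significance test -- compare the observed statistic against its
# distribution under random permutations that destroy the X-Y pairing.
# (CMIknn's actual test shuffles locally within neighborhoods of Z.)
import numpy as np

def shuffle_pvalue(stat, xs, ys, n_perm=200, seed=0):
    rng = np.random.RandomState(seed)
    observed = stat(xs, ys)
    null = np.array([stat(rng.permutation(xs), ys) for _ in range(n_perm)])
    return (np.sum(null >= observed) + 1.0) / (n_perm + 1.0)

abs_corr = lambda u, v: abs(np.corrcoef(u, v)[0, 1])
xs = np.arange(50.0)
print(shuffle_pvalue(abs_corr, xs, xs))   # tiny p-value: clearly dependent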
"""
Explanation: CMIknn
The most general conditional independence test implemented in Tigramite is CMIknn based on conditional mutual information estimated with a k-nearest neighbor estimator. This test is described in the paper
Runge, Jakob. 2018. “Conditional Independence Testing Based on a Nearest-Neighbor Estimator of Conditional Mutual Information.” In Proceedings of the 21st International Conference on Artificial Intelligence and Statistics.
CMIknn involves no assumptions about the dependencies. The parameter knn determines the size of hypercubes, ie., the (data-adaptive) local length-scale. Now we cannot even pre-compute the null distribution because CMIknn is not residual-based like GPDC and the null distribution depends on many more factors. We, therefore, use significance='shuffle_test' to generate it in each individual test. The shuffle test for testing $I(X;Y|Z)=0$ shuffles $X$ values locally: Each sample point $i$’s $x$-value is mapped randomly
to one of its nearest neigbors (shuffle_neighbors parameter) in subspace $Z$. Another free parameter is transform which specifies whether data is transformed before CMI estimation. The new default is transform=ranks which works better than the old transform=standardize.
The following cell may take some minutes.
End of explanation
"""
np.random.seed(1)
data = np.random.randn(2000, 3)
for t in range(1, 2000):
data[t, 0] += 0.4*data[t-1, 1]**2
data[t, 2] += 0.3*data[t-2, 1]**2
data = pp.quantile_bin_array(data, bins=4)
dataframe = pp.DataFrame(data, var_names=var_names)
tp.plot_timeseries(dataframe, figsize=(10,4)); plt.show()
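# What quantile binning does, in miniature (add-on sketch): each value is
# mapped to the index of the quantile interval it falls in, so every symbol
# occurs (roughly) equally often.
import numpy as np
vals = np.array([0.1, 0.4, 0.2, 0.9, 0.5, 0.7, 0.3, 0.8])
edges = np.quantile(vals, [0.25, 0.5, 0.75])
symbols = np.searchsorted(edges, vals)
print(symbols)   # four symbols, two values each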
"""
Explanation: Here CMIknn correctly detects the true links and also unveils the spurious link. While CMIknn may now seem as the best independence test choice, we have to note that the generality comes at the cost of much lower power for the case that the dependencies actually follow some parametric form. Then ParCorr or GPDC are much more powerful measures. Of course, ParCorr also detects linear links better than GPDC.
4. Symbolic time series
Symbolic (or discrete) data may arise naturally or continuously-valued time series can be converted to symbolic data. To accommodate such time series, Tigramite includes the CMIsymb conditional independence test based on conditional mutual information estimated directly from the histogram of discrete values. Usually a (quantile-)binning applied to continuous data in order to use a discrete CMI estimator is not recommended (rather use CMIknn), but here we do it anyway to get some symbolic data. We again consider the nonlinear time series example and convert to a symbolic series with 4 bins.
End of explanation
"""
# cmi_symb = CMIsymb(significance='shuffle_test', n_symbs=None)
# pcmci_cmi_symb = PCMCI(
# dataframe=dataframe,
# cond_ind_test=cmi_symb)
# results = pcmci_cmi_symb.run_pcmci(tau_max=2, pc_alpha=0.2, alpha_level = 0.01)
## Significant links at alpha = 0.01:
# Variable $X^0$ has 1 link(s):
# ($X^1$ -1): pval = 0.00000 | val = 0.040
# Variable $X^1$ has 0 link(s):
# Variable $X^2$ has 1 link(s):
# ($X^1$ -2): pval = 0.00000 | val = 0.065
"""
Explanation: CMIsymb is initialized with n_symbs=None, implying that the number of symbols is determined as n_symbs=data.max()+1. Again, we have to use a shuffle test. Symbolic CMI does not work very well here; only with 2000 samples was the correct graph reliably detected.
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.1/tutorials/20_21_meshes.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.1,<2.2"
"""
Explanation: 2.0 - 2.1 Migration: Meshes
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
import phoebe
b = phoebe.default_binary()
"""
Explanation: In this tutorial we will review the changes in the PHOEBE mesh structures. We will first explain the changes and then demonstrate them in code. As usual, let us import phoebe and create a default binary bundle:
End of explanation
"""
b.add_dataset('mesh')
print(b.get_parameter('columns').get_choices())
b.add_dataset('lc')
print(b.get_parameter('columns').get_choices())
b['columns'] = ['*@lc01', 'teffs']
b.get_parameter('columns').get_value()
b.get_parameter('columns').expand_value()
"""
Explanation: PHOEBE 2.0 had a mesh dataset along with pbmesh and protomesh options you could send to b.run_compute(). These options were quite convenient, but had a few inherent problems:
The protomesh was exposed only at t0 and was in Roche coordinates, despite using the same qualifiers 'x', 'y', 'z'.
Passband-dependent parameters were exposed in the mesh if pbmesh=True, but only if the times matched exactly with the passband (lc, rv, etc.) dataset.
Storing more than a few meshes becomes very memory intensive due to their large size and the large number of columns.
Addressing these shortcomings required a complete redesign of the mesh dataset. The most important changes are:
pbmesh and protomesh are no longer valid options to b.run_compute(). Everything is done through the mesh dataset itself, i.e. b.add_dataset('mesh').
The default columns that are computed for each mesh include the elements in both Roche and plane-of-sky coordinate systems. These columns cannot be disabled.
The columns parameter in the mesh dataset lists additional columns to be exposed in the model mesh when calling b.run_compute(). See the section on columns below for more details.
You can choose whether to expose coordinates in the Roche coordinate system ('xs', 'ys', 'zs') or the plane-of-sky coordinate system ('us', 'vs', 'ws').
When plotting, the default is the plane-of-sky coordinate system, and the axes will be correctly labeled as uvw, whereas in PHOEBE 2.0.x these were still labeled xyz. Note that this also applies to velocities ('vxs', 'vys', 'vzs' vs 'vus', 'vvs', 'vws').
The include_times parameter allows for importing timestamps from other datasets. It also provides support for important orbital times: 't0' (zero-point), 't0_perpass' (periastron passage), 't0_supconj' (superior conjunction) and 't0_ref' (zero-phase reference point).
By default, the times parameter is empty. If you do not set times or include_times before calling b.run_compute(), your model will be empty.
The 'columns' parameter
This parameter is a SelectParameter (a new type of Parameter introduced in PHOEBE 2.1). Its value is one of the values in a list of allowed options. You can list the options by calling param.get_choices() (same as you would for a ChoiceParameter). The value also accepts wildcards, as long as the expression matches at least one of the choices. This allows you to easily select, say, rvs from all datasets, by passing rvs@*. To see the full list of matched options, use param.expand_value().
To demonstrate, let us add a few datasets and look at the available choices for the columns parameter.
End of explanation
"""
print(b.get_parameter('include_times').get_value())
print(b.get_parameter('include_times').get_choices())
b['include_times'] = ['lc01', 't0@system']
print(b.get_parameter('include_times').get_value())
"""
Explanation: The 'include_times' parameter
Similarly, the include_times parameter is a SelectParameter, with the choices being the existing datasets, as well as the t0s mentioned above.
End of explanation
"""
|
cdalzell/ds-for-wall-street | ds-for-ws-student.ipynb | apache-2.0 | %matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import seaborn as sns
"""
Explanation: Loading and Cleaning the Data
Turn on inline matplotlib plotting and import plotting dependencies.
End of explanation
"""
import numpy as np
import pandas as pd
import tsanalysis.loaddata as ld
import tsanalysis.tsutil as tsutil
import sparkts.timeseriesrdd as tsrdd
import sparkts.datetimeindex as dtindex
from sklearn import linear_model
"""
Explanation: Import analytic depedencies. Doc code for spark-timeseries and source code for tsanalysis.
End of explanation
"""
wiki_obs = ld.load_wiki_df(sqlCtx, '/user/srowen/wiki.tsv')
ticker_obs = ld.load_ticker_df(sqlCtx, '/user/srowen/ticker.tsv')
"""
Explanation: Load wiki page view and stock price data into Spark DataFrames.
wiki_obs is a Spark dataframe of (timestamp, page, views) of types (Timestamp, String, Double). ticker_obs is a Spark dataframe of (timestamp, symbol, price) of types (Timestamp, String, Double).
End of explanation
"""
def print_ticker_info(ticker):
    print(('The first ticker symbol is: {}\nThe first 20 elements of the associated '
           'series are:\n {}').format(ticker[0], ticker[1][:20]))
"""
Explanation: Display the first 5 elements of the wiki_obs RDD.
wiki_obs contains Row objects with the fields (timestamp, page, views).
Display the first 5 elements of the tickers_obs RDD.
ticker_obs contains Row objects with the fields (timestamp, symbol, views).
Create datetime index.
Create time series RDD from observations and index. Remove time instants with NaNs.
Cache the tsrdd.
Examine the first element in the RDD.
Time series have values and a datetime index. We can create a tsrdd for hourly stock prices from an index and a Spark DataFrame of observations. ticker_tsrdd is an RDD of tuples where each tuple has the form (ticker symbol, stock prices) where ticker symbol is a string and stock prices is a 1D np.ndarray. We create a nicely formatted string representation of this pair in print_ticker_info(). Notice how we access the two elements of the tuple.
End of explanation
"""
def count_nans(vec):
return np.count_nonzero(np.isnan(vec))
"""
Explanation: Create a wiki page view tsrdd and set the index to match the index of ticker_tsrdd.
Linearly interpolate to impute missing values.
wiki_tsrdd is an RDD of tuples where each tuple has the form (page title, wiki views) where page title is a string and wiki views is a 1D np.ndarray. We have cached both RDDs because we will be doing many subsequent operations on them.
Filter out symbols with more than the minimum number of NaNs.
Then filter out instants with NaNs.
End of explanation
"""
# a dict from wiki page name to ticker symbol
page_symbols = {}
for line in open('../symbolnames.tsv').readlines():
tokens = line[:-1].split('\t')
page_symbols[tokens[1]] = tokens[0]
def get_page_symbol(page_series):
if page_series[0] in page_symbols:
return [(page_symbols[page_series[0]], page_series[1])]
else:
return []
# reverse keys and values. a dict from ticker symbol to wiki page name.
symbol_pages = dict(zip(page_symbols.values(), page_symbols.keys()))
print(list(page_symbols.items())[0])
print(list(symbol_pages.items())[0])
"""
Explanation: Linking symbols and pages
We need to join together the wiki page and ticker data, but the time series RDDs are not directly joinable on their keys. To overcome this, we have create a dict from wikipage title to stock ticker symbol.
Create a dict from ticker symbols to page names.
Create another from page names to ticker symbols.
End of explanation
"""
from scipy.stats.stats import pearsonr
def wiki_vol_corr(page_key):
# lookup individual time series by key.
ticker = ticker_tsrdd.find_series(page_symbols[page_key]) # numpy array
wiki = wiki_tsrdd.find_series(page_key) # numpy array
return pearsonr(ticker, wiki)
def corr_with_offset(page_key, offset):
"""offset is an integer that describes how many time intervals we have slid
the wiki series ahead of the ticker series."""
ticker = ticker_tsrdd.find_series(page_symbols[page_key]) # numpy array
wiki = wiki_tsrdd.find_series(page_key) # numpy array
return pearsonr(ticker[offset:], wiki[:-offset])
"""
Explanation: Join together wiki_tsrdd and ticker_tsrdd
First, we use this dict to look up the corresponding stock ticker symbol and rekey the wiki page view time series. We then join the data sets together. The result is an RDD of tuples where each element is of the form (ticker_symbol, (wiki_series, ticker_series)). We count the number of elements in the resulting rdd to see how many matches we have.
Correlation and Relationships
Define a function for computing the pearson r correlation of the stock price and wiki page traffic associated with a company.
Here we look up a specific stock and the corresponding wiki page, and provide an example of
computing the Pearson correlation locally. We use scipy.stats.stats.pearsonr to compute the Pearson correlation and the corresponding two-sided p-value. wiki_vol_corr and corr_with_offset both return this as a tuple of (corr, p_value).
End of explanation
"""
def joint_plot(page_key, ticker, wiki, offset=0):
    with sns.axes_style("white"):
        sns.jointplot(x=ticker, y=wiki, kind="kde", color="b")
        plt.xlabel('Stock Price')
        plt.ylabel('Wikipedia Page Views')
        plt.title(('Joint distribution of {} stock price\n and Wikipedia page views.'
                   '\nWith a {} day offset').format(page_key, offset), y=1.20)
"""
Explanation: Create a plot of the joint distribution of wiki trafiic and stock prices for a specific company using seaborn's jointplot function.
End of explanation
"""
def regress(X, y):
model = linear_model.LinearRegression()
model.fit(X, y)
score = model.score(X, y)
return (score, model)
lag = 2
lead = 2
joined = wiki_daily_views.flatMap(get_page_symbol) \
    .join(ticker_daily_vol)
models = joined.mapValues(lambda x: regress(tsutil.lead_and_lag(lead, lag, x[0]), x[1][lag:-lead]))
models.cache()
models.count()
"""
Explanation: Find the companies with the highest correlation between stock prices time series and wikipedia page traffic.
Note that comparing a tuple means you compare the composite object lexicographically.
Add in filtering out less than useful correlation results.
There are a lot of invalid correlations that get computed, so lets filter those out.
Find the top 10 correlations as defined by the ordering on tuples.
Create a joint plot of some of the stronger relationships.
Volatility
Compute per-day volatility for each symbol.
Make sure we don't have any NaNs.
Visualize volatility
Plot daily volatility in stocks over time.
What does the distribution of volatility for the whole market look like? Add volatility for individual stocks in a datetime bin.
Find stocks with the highest average daily volatility.
Plot stocks with the highest average daily volatility over time.
We first map over ticker_daily_vol to find the index of the value with the highest volatility. We then relate that back to the index set on the RDD to find the corresponding datetime.
A large number of stock symbols had their most volatile days on August 24th and August 25th of
this year.
Regress volatility against page views
Resample the wiki page view data set so we have total pageviews by day.
Cache the wiki page view RDD.
Resample the wiki page view data set so we have total pageviews by day. This means reindexing the time series and aggregating data together with daily buckets. We use np.nansum to add up numbers while treating nans like zero.
Validate data by checking for nans.
Fit a linear regression model to every pair in the joined wiki-ticker RDD and extract R^2 scores.
End of explanation
"""
|
keras-team/keras-io | guides/ipynb/keras_cv/coco_metrics.ipynb | apache-2.0 | import keras_cv
# import all modules we will need in this example
import tensorflow as tf
from tensorflow import keras
# only consider boxes with areas less than a 32x32 square.
metric = keras_cv.metrics.COCORecall(class_ids=[1, 2, 3], area_range=(0, 32**2))
"""
Explanation: Using KerasCV COCO Metrics
Author: lukewood<br>
Date created: 2022/04/13<br>
Last modified: 2022/04/13<br>
Description: Use KerasCV COCO metrics to evaluate object detection models.
Overview
With KerasCV's COCO metrics implementation, you can easily evaluate your object
detection model's performance all from within the TensorFlow graph. This guide
shows you how to use KerasCV's COCO metrics and integrate it into your own model
evaluation pipeline. Historically, users have evaluated COCO metrics as a post training
step. KerasCV offers an in graph implementation of COCO metrics, enabling users to
evaluate COCO metrics during training!
Let's get started using KerasCV's COCO metrics.
Input format
KerasCV COCO metrics require a specific input format.
The metrics expect y_true to be a float Tensor with the shape [batch,
num_images, num_boxes, 5]. The final axis stores the locational and class
information for each specific bounding box. The dimensions in order are: [left,
top, right, bottom, class].
The metrics expect y_pred to be a float Tensor with the shape [batch,
num_images, num_boxes, 56]. The final axis stores the locational and class
information for each specific bounding box. The dimensions in order are: [left,
top, right, bottom, class, confidence].
Due to the fact that each image may have a different number of bounding boxes,
the num_boxes dimension may actually have a mismatching shape between images.
KerasCV works around this by allowing you to either pass a RaggedTensor as an
input to the KerasCV COCO metrics, or padding unused bounding boxes with -1.
Utility functions to manipulate bounding boxes, transform between formats, and
pad bounding box Tensors with -1s are available at
keras_cv.bounding_box.
Independent metric use
The first usage pattern for KerasCV COCO metrics is to manually call
update_state() and result() methods. This pattern is recommended for users
who want finer grained control of their metric evaluation, or want to use a
different format for y_pred in their model.
Let's run through a quick code example.
1.) First, we must construct our metric:
End of explanation
"""
y_true = tf.ragged.stack(
[
# image 1
tf.constant([[0, 0, 10, 10, 1], [11, 12, 30, 30, 2]], tf.float32),
# image 2
tf.constant([[0, 0, 10, 10, 1]], tf.float32),
]
)
y_pred = tf.ragged.stack(
[
# predictions for image 1
tf.constant([[5, 5, 10, 10, 1, 0.9]], tf.float32),
# predictions for image 2
tf.constant([[0, 0, 10, 10, 1, 1.0], [5, 5, 10, 10, 1, 0.9]], tf.float32),
]
)
"""
Explanation: 2.) Create Some Bounding Boxes:
End of explanation
"""
metric.update_state(y_true, y_pred)
"""
Explanation: 3.) Update metric state:
End of explanation
"""
metric.result()
"""
Explanation: 4.) Evaluate the result:
End of explanation
"""
i = keras.layers.Input((None, 6))
model = keras.Model(i, i)
"""
Explanation: Evaluating COCORecall for your object detection model is as simple as that!
Metric use in a model
You can also leverage COCORecall in your model's training loop. Let's walk through this
process.
1.) Construct the metric and a dummy model
End of explanation
"""
y_true = tf.constant([[[0, 0, 10, 10, 1], [5, 5, 10, 10, 1]]], tf.float32)
y_pred = tf.constant([[[0, 0, 10, 10, 1, 1.0], [5, 5, 10, 10, 1, 0.9]]], tf.float32)
"""
Explanation: 2.) Create some fake bounding boxes:
End of explanation
"""
recall = keras_cv.metrics.COCORecall(
max_detections=100, class_ids=[1], area_range=(0, 64**2), name="coco_recall"
)
model.compile(metrics=[recall])
"""
Explanation: 3.) Create the metric and compile the model
End of explanation
"""
model.evaluate(y_pred, y_true, return_dict=True)
"""
Explanation: 4.) Use model.evaluate() to evaluate the metric
End of explanation
"""
|
phobson/statsmodels | examples/notebooks/tsa_filters.ipynb | bsd-3-clause | %matplotlib inline
from __future__ import print_function
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
dta = sm.datasets.macrodata.load_pandas().data
index = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3'))
print(index)
dta.index = index
del dta['year']
del dta['quarter']
print(sm.datasets.macrodata.NOTE)
print(dta.head(10))
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
dta.realgdp.plot(ax=ax);
legend = ax.legend(loc = 'upper left');
legend.prop.set_size(20);
"""
Explanation: Time Series Filters
End of explanation
"""
gdp_cycle, gdp_trend = sm.tsa.filters.hpfilter(dta.realgdp)
gdp_decomp = dta[['realgdp']]
gdp_decomp["cycle"] = gdp_cycle
gdp_decomp["trend"] = gdp_trend
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
gdp_decomp[["realgdp", "trend"]]["2000-03-31":].plot(ax=ax, fontsize=16);
legend = ax.get_legend()
legend.prop.set_size(20);
"""
Explanation: Hodrick-Prescott Filter
The Hodrick-Prescott filter separates a time-series $y_t$ into a trend $\tau_t$ and a cyclical component $\zeta_t$
$$y_t = \tau_t + \zeta_t$$
The components are determined by minimizing the following quadratic loss function
$$\min_{\{ \tau_{t}\} }\sum_{t}^{T}\zeta_{t}^{2}+\lambda\sum_{t=1}^{T}\left[\left(\tau_{t}-\tau_{t-1}\right)-\left(\tau_{t-1}-\tau_{t-2}\right)\right]^{2}$$
End of explanation
"""
bk_cycles = sm.tsa.filters.bkfilter(dta[["infl","unemp"]])
"""
Explanation: Baxter-King approximate band-pass filter: Inflation and Unemployment
Explore the hypothesis that inflation and unemployment are counter-cyclical.
The Baxter-King filter is intended to explictly deal with the periodicty of the business cycle. By applying their band-pass filter to a series, they produce a new series that does not contain fluctuations at higher or lower than those of the business cycle. Specifically, the BK filter takes the form of a symmetric moving average
$$y_{t}^{*}=\sum_{k=-K}^{k=K}a_ky_{t-k}$$
where $a_{-k}=a_k$ and $\sum_{k=-k}^{K}a_k=0$ to eliminate any trend in the series and render it stationary if the series is I(1) or I(2).
For completeness, the filter weights are determined as follows
$$a_{j} = B_{j}+\theta\text{ for }j=0,\pm1,\pm2,\dots,\pm K$$
$$B_{0} = \frac{\left(\omega_{2}-\omega_{1}\right)}{\pi}$$
$$B_{j} = \frac{1}{\pi j}\left(\sin\left(\omega_{2}j\right)-\sin\left(\omega_{1}j\right)\right)\text{ for }j=\pm1,\pm2,\dots,\pm K$$
where $\theta$ is a normalizing constant such that the weights sum to zero.
$$\theta=\frac{-\sum_{j=-K}^{K}B_{j}}{2K+1}$$
$$\omega_{1}=\frac{2\pi}{P_{H}}$$
$$\omega_{2}=\frac{2\pi}{P_{L}}$$
$P_L$ and $P_H$ are the periodicity of the low and high cut-off frequencies. Following Burns and Mitchell's work on US business cycles which suggests cycles last from 1.5 to 8 years, we use $P_L=6$ and $P_H=32$ by default.
End of explanation
"""
fig = plt.figure(figsize=(12,10))
ax = fig.add_subplot(111)
bk_cycles.plot(ax=ax, style=['r--', 'b-']);
"""
Explanation: We lose K observations on both ends. It is suggested to use K=12 for quarterly data.
End of explanation
"""
print(sm.tsa.stattools.adfuller(dta['unemp'])[:3])
print(sm.tsa.stattools.adfuller(dta['infl'])[:3])
cf_cycles, cf_trend = sm.tsa.filters.cffilter(dta[["infl","unemp"]])
print(cf_cycles.head(10))
fig = plt.figure(figsize=(14,10))
ax = fig.add_subplot(111)
cf_cycles.plot(ax=ax, style=['r--','b-']);
"""
Explanation: Christiano-Fitzgerald approximate band-pass filter: Inflation and Unemployment
The Christiano-Fitzgerald filter is a generalization of BK and can thus also be seen as weighted moving average. However, the CF filter is asymmetric about $t$ as well as using the entire series. The implementation of their filter involves the
calculations of the weights in
$$y_{t}^{*}=B_{0}y_{t}+B_{1}y_{t+1}+\dots+B_{T-1-t}y_{T-1}+\tilde B_{T-t}y_{T}+B_{1}y_{t-1}+\dots+B_{t-2}y_{2}+\tilde B_{t-1}y_{1}$$
for $t=3,4,...,T-2$, where
$$B_{j} = \frac{\sin(jb)-\sin(ja)}{\pi j},j\geq1$$
$$B_{0} = \frac{b-a}{\pi},a=\frac{2\pi}{P_{u}},b=\frac{2\pi}{P_{L}}$$
$\tilde B_{T-t}$ and $\tilde B_{t-1}$ are linear functions of the $B_{j}$'s, and the values for $t=1,2,T-1,$ and $T$ are also calculated in much the same way. $P_{U}$ and $P_{L}$ are as described above with the same interpretation.
The CF filter is appropriate for series that may follow a random walk.
End of explanation
"""
|
M0nica/python-foundations-hw | 05/NYT_graded.ipynb | mit | import config
import requests
#imports key from config file
nyt_articles_api = config.nyt_articles_api
nyt_books_api = config.nyt_books_api
nyt_movie_api = config.nyt_movie_api
response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?api-key=' + nyt_articles_api)
data = response.json()
# print(data)
"""
Explanation: Graded = 7/8
NYT
All API's: http://developer.nytimes.com/
Article search API: http://developer.nytimes.com/article_search_v2.json
Best-seller API: http://developer.nytimes.com/books_api.json#/Documentation
Test/build queries: http://developer.nytimes.com/
Tip: Remember to include your API key in all requests! And their interactive web thing is pretty bad. You'll need to register for the API key.
End of explanation
"""
published = "";
# response = requests.get('https://api.nytimes.com/svc/books/v3/lists//.json?api-key=' + nyt_books_api + "&list-name=hardcover-fiction&published-date=2009-10-05")
# Mother's Day 2009: 2009-05-10; Mother's Day 2010: 2010-05-09
# Father's Day 2009: 2009-06-21; Father's Day 2010: 2010-06-20
dates = ['2009-05-10', '2010-05-09', '2009-06-21', '2010-06-20']
for date in dates:
response = requests.get('https://api.nytimes.com/svc/books/v3/lists//.json?api-key=' + nyt_books_api + "&list-name=hardcover-fiction&published-date=" + date)
bestseller_data = response.json()
bestseller_data['results']
results = bestseller_data['results'][0]
# print(type(results))
print("The best selling book on", date, "was", results['book_details'][0]['title'])
# print(bestseller_data)
#print(results['book_details'])
"""
Explanation: 1) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?
End of explanation
"""
response = requests.get('https://api.nytimes.com/svc/books/v3/lists/names.json?api-key=' + nyt_books_api)
bestseller_ldata = response.json()
bestseller_ldata['results']
# print(bestseller_ldata['results'][0])
#The lists
print("On June, 6th, 2009 the NYT published the following bestsellers lists:")
for book in bestseller_ldata['results']:
if book['oldest_published_date'] < '2009-06-06' and book['newest_published_date'] >= '2009-06-06':
print(book['display_name'])
else:
pass
print("\nOn June, 6th, 2015 the NYT published the following bestsellers lists:")
for book in bestseller_ldata['results']:
if book['oldest_published_date'] < '2015-06-06' and book['newest_published_date'] >= '2015-06-06':
print(book['display_name'])
else:
pass
# print("Too young")
# for book in bestseller_ldata:
"""
Explanation: 2) What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015?
End of explanation
"""
ppl = ['Gaddafi','Gadafi', 'Kadafi','Qaddafi']
for person in ppl:
# fq yields a lot more results than just q need to figure out difference b/w hits and times
# response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?api-key=' + nyt_articles_api + '&fq=' + person + ' Libya')
response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?api-key=' + nyt_articles_api + '&q=' + person + ' Libya')
muammar_data = response.json()
print("Muammar was referred to as ", person, muammar_data['response']['meta']['hits'], "times in the New York Times.")
# print(muammar_data)
# print(muammar_data['response']['docs'])
# print(muammar_data['response']['docs'][0]['keywords'])
# print(muammar_data['response']['docs'][0])
# keywords = []
# ppl = ['Gaddafi','Gadafi', 'Kadafi','Qaddafi']
#for article in muammar_data['response']['docs']:
# for keyword in article['keywords']:
# print(keyword['value'])
# for person in ppl:
#print(x)
# if person in keyword:
# print("print", keyword['value'], "was found")
# print(keyword['value'])
# keywords.append(keyword['value'])
#from collections import Counter
#counts = Counter(keywords)
#print(counts)
#for article in muammar_data['response']['docs']:
# print(article["keywords"])
# len(muammar_data['response']['docs'])
"""
Explanation: 3) Muammar Gaddafi's name can be transliterated many many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names?
Tip: Add "Libya" to your search to make sure (-ish) you're talking about the right guy.
End of explanation
"""
response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?api-key=' + nyt_articles_api + '&q=hipster&begin_date=19950101&end_date=19951231&sort=oldest')
hipster_data = response.json()
# print(hipster_data['response']['docs'])
hippie = hipster_data['response']['docs']
print("The first story to mention the word 'hipster' in 1995 was titled", hippie[0]['headline']['kicker'] + "; " + hippie[0]['headline']['main'])
"""
Explanation: 4) What's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph?
End of explanation
"""
# Ta-Stephan: Because you added to the start and end date early, the 1950s weren't counted.
start_date = 19500101
end_date = 19591231
for n in [1, 2, 3, 4, 5, 6, 7]:
    response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?api-key=' + nyt_articles_api + '&q="\"gay marriage\""&begin_date=' + str(start_date) + '&end_date=' + str(end_date) + '&sort=oldest')
    gay_marriage_data = response.json()
    gay_marriage_hits = gay_marriage_data['response']['meta']['hits']
    start_str = str(start_date)[:4]
    end_str = str(end_date)[:4]
    print("There were", gay_marriage_hits, "mentions of gay marriage between", start_str, "and", end_str)
    # advance the decade window only after querying the current one
    start_date = start_date + 100000
    if n == 6:
        end_date = 20160609
    else:
        end_date = end_date + 100000
"""
Explanation: 5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1979, 1980-1989, 1990-1999, 2000-2009, and 2010-present?
Tip: You'll want to put quotes around the search term so it isn't just looking for "gay" and "marriage" in the same article.
Tip: Write code to find the number of mentions between Jan 1, 1950 and Dec 31, 1959.
End of explanation
"""
response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?api-key=' + nyt_articles_api + '&q=motorcycle&facet_field=section_name')
moto_data = response.json()
# print(moto_data['response']['facets']['section_name']['terms'])
# documentation found re: facets in NYT API
# https://data-gov.tw.rpi.edu/wiki/How_to_use_New_York_Times_Article_Search_API
moto_sections = moto_data['response']['facets']['section_name']['terms']
moto_count = 0
most_motos = ""
for section in moto_sections:
if section['count'] > moto_count:
moto_count = section['count']
most_motos = section['term']
print("The section of the New York Times that mentions motorcycles the most is the", most_motos, "section which mentions motorcycles", moto_count, "times.")
"""
Explanation: 6) What section talks about motorcycles the most?
Tip: You'll be using facets
End of explanation
"""
criticPickCount = 0
for offset in [0, 20, 40, 60]:
    response = requests.get('https://api.nytimes.com/svc/movies/v2/reviews/search.json?api-key=' + nyt_movie_api + '&offset=' + str(offset))
    movie_data = response.json()
    for movie in movie_data['results']:
        if movie['critics_pick'] == 1:
            criticPickCount = criticPickCount + 1
    print("There were", criticPickCount, "Critics' Picks in the last", offset + 20, "movies that were reviewed.")
"""
Explanation: 7) How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60?
Tip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them.
End of explanation
"""
authors = []
for offset in [0, 20]:
    response = requests.get('https://api.nytimes.com/svc/movies/v2/reviews/search.json?api-key=' + nyt_movie_api + '&offset=' + str(offset))
    movie_data = response.json()
    # the critic's name is stored in the byline
    for movie in movie_data['results']:
        authors.append(movie['byline'])
from collections import Counter
print(Counter(authors).most_common(1), 'has written the most reviews out of the last 40 NYT reviews.')
"""
Explanation: 8) Out of the last 40 movie reviews from the NYT, which critic has written the most reviews?
End of explanation
"""
|
liganega/Gongsu-DataSci | previous/y2017/W08-numpy-hypothesis_test/.ipynb_checkpoints/GongSu19-Statistical_Hypothesis_Test-checkpoint.ipynb | gpl-3.0 | import numpy as np
from __future__ import print_function, division
"""
Explanation: Source note: the material covered here was created with reference to the site below.
https://github.com/rouseguy/intro2stats
Hypothesis Testing
End of explanation
"""
import sympy as sp
sp.factorial(5)
"""
Explanation: Today's main example: coin tossing
If a coin is tossed 30 times and comes up heads 24 times, can we call it a fair coin?
We turn the claim that the coin is fair into a hypothesis and decide whether to accept or reject that hypothesis.
A hypothesis stating that such a claim is correct is called the null hypothesis.
Null hypothesis (H0): even a fair coin can come up heads 24 or more times; it was a chance event.
Let us now check how probable it is that the null hypothesis holds.
For that we use binomial probabilities.
The binomial distribution
The binomial distribution is used to compute the probability that a particular event occurs r times out of n identical trials.
Conditions
The individual trials are mutually independent.
The probability p of the event occurring is the same in every trial.
Examples
The distribution of probabilities of getting heads r times in n coin tosses: p = 1/2
The distribution of probabilities of rolling a prime number greater than 2, r times in n die rolls: p = 1/3
A probability distribution that follows the binomial distribution is written $B(n, p)$.
The probability $P(r)$ of r successes in n trials is then given by the formula below.
$$P(n, r, p) = {}^nC_r \cdot p^r \cdot (1-p)^{n-r}$$
In this formula, ${}^nC_{r}$ denotes the number of combinations when choosing r items out of n, and is computed as follows.
$${}^nC_{r} = \frac{n!}{(n-r)!\cdot r!}$$
Example
Repeatedly rolling the same fair die and checking whether the number 1 came up follows a binomial distribution;
for n rolls, $B(n, 1/6)$ holds.
Compute the probability below.
The probability of rolling the number 1 exactly twice in three rolls of a die:
in this case we use the binomial distribution B(3, 1/6),
where 1/6 is the probability of the event that the number 1 comes up.
$$P(3, 2, 1/6) = {}^3C_{2}\cdot \left(\frac 1 6\right)^2 \cdot \left(\frac 5 6\right)
= \frac{15}{6^3} = \frac{5}{72}$$
A function for binomial probabilities
Let us compute the probability of getting heads 24 or more times in 30 coin tosses.
To solve this problem, we implement the binomial formula as a function.
First we need a factorial function, which is defined in the sympy module.
End of explanation
"""
def binom(n, r):
return sp.factorial(n) / (sp.factorial(r) * sp.factorial(n-r))
"""
Explanation: Using the sp.factorial() function, we define a function that computes ${}^nC_{r}$, the number of combinations.
End of explanation
"""
def binom_distribution(n, r, p):
return binom(n, r) * p**r * (1-p)**(n-r)
"""
Explanation: The function that computes binomial probabilities is then as follows.
It takes the three arguments n, r, and p, where p is the probability that the event occurs in a single trial.
End of explanation
"""
binom_distribution(30, 24, 1/2.)
"""
Explanation: Using the function above, we can compute the probability of getting heads exactly 24 times in 30 coin tosses.
End of explanation
"""
probability = 0.0
for x in range(24, 31):
probability += binom_distribution(30, x, 1/2)
print(probability)
"""
Explanation: The probability of getting 24 or more heads in 30 coin tosses is then computed as follows.
End of explanation
"""
np.random.randint(0, 10, 5)
"""
Explanation: According to the computation above, the probability of getting 24 or more heads in 30 tosses is about 0.0007.
That is, if the experiment of tossing a coin 30 times is repeated ten thousand times, heads comes up 24 or more times in only about 7 of them.
Put differently, the 30-toss simulation must be run about 1,500 times to see even one case with 24 or more heads.
We can verify this experimentally.
Simulating the binomial distribution with code
Using a simulation, let us check how likely it is to get 24 or more heads in 30 tosses.
Conventionally, if that probability turns out to be 5% or less, the coin mentioned above is judged not to be fair and the null hypothesis (H0) is rejected.
That is, we conclude the coin was biased.
We have already confirmed theoretically that the probability falls far below 5%.
Here we aim to show that a simulation reaches the same conclusion.
Simulation: tossing a coin 30 times
To implement a simulation of 30 tosses of a fair coin in code, we use the following ideas.
In the simulation, 1 denotes heads (H) and 0 denotes tails (T).
The outcome of tossing a fair coin is random.
Using the randint function of the np.random module, we can generate a random array of 0s and 1s of length 30.
The randint function draws the numbers in the given interval at random, but uniformly.
The np.random.randint function generates the requested number of integers from the given interval and returns them as an array.
The code below randomly generates 5 integers between 0 and 10 and returns them as an array.
Note:
0 is included
10 is not included
End of explanation
"""
num_tosses = 30
experiment = np.random.randint(0, 2, num_tosses)
experiment
"""
Explanation: To implement tossing a coin 30 times, we now generate 30 random 0s and 1s.
End of explanation
"""
mask = experiment == 1
mask
heads = experiment[mask]
heads
"""
Explanation: To collect and extract only the tosses that came up heads, we use mask indexing.
End of explanation
"""
len(heads)
"""
Explanation: The number of heads equals the length of the array above.
End of explanation
"""
heads.shape
"""
Explanation: The length can also be obtained from the shape information, as shown below.
End of explanation
"""
heads.shape[0]
"""
Explanation: (13,) is a tuple of length 1; it means that heads is a one-dimensional array whose length is 13 (the exact value depends on the random draw).
Indexing this tuple gives the length of heads, i.e., the number of heads.
End of explanation
"""
def coin_experiment(num_repeat):
heads_count_array = np.empty([num_repeat,], dtype=int)
for times in np.arange(num_repeat):
experiment = np.random.randint(0,2,num_tosses)
heads_count_array[times] = experiment[experiment==1].shape[0]
return heads_count_array
"""
Explanation: Simulation: repeating the 30-toss experiment
Let us implement a simulation that keeps repeating the 30-toss experiment implemented above.
End of explanation
"""
heads_count_10 = coin_experiment(10)
heads_count_10
"""
Explanation: The code above works as follows.
First, create a one-dimensional array of shape (num_repeat,) to store the results of num_repeat
runs of the 30-toss experiment.
For this we use the np.empty function.
It works much like np.zeros, but the entries are created with arbitrary, meaningless values.
Each entry must therefore be assigned afterwards; otherwise it holds garbage.
heads_count_array = np.empty([num_repeat,], dtype=int)
Run the 30-toss experiment num_repeat times and store each result, in order, in the array created above,
using array indexing for the assignment.
for times in np.arange(num_repeat):
    experiment = np.random.randint(0,2,num_tosses)
    heads_count_array[times] = experiment[experiment==1].shape[0]
Example
As an example, let us look at the result of simulating the 30-toss experiment 10 times.
End of explanation
"""
heads_count_1500 = coin_experiment(1500)
"""
Explanation: Example
As an example, let us look at the result of simulating the 30-toss experiment 1,500 times.
End of explanation
"""
heads_count_1500[:10]
"""
Explanation: Let us inspect the first 10 of the 1,500 simulation results.
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set(color_codes = True)
"""
Explanation: Inspecting the simulation results graphically
The simulation results can be examined with a histogram.
Here it is worth remembering how the seaborn module is used to draw nicer plots.
End of explanation
"""
sns.distplot(heads_count_1500, kde=False)
"""
Explanation: The plot below shows, as a histogram, the number of heads in 1,500 runs of the 30-toss experiment.
End of explanation
"""
sns.distplot(heads_count_1500, kde=True)
"""
Explanation: The plot below additionally shows a curve obtained by kernel density estimation (kde),
which helps make the data easier to understand.
End of explanation
"""
# 앞면이 24회 이상 나오는 경우들의 어레이
mask = heads_count_1500>=24
heads_count_1500[mask]
"""
Explanation: Let us count how many of the 1,500 experiments produced heads 24 or more times,
using the mask-indexing technique introduced earlier.
Note: recall the earlier theoretical conclusion that one needs roughly 1,500 experiments to see such a case once.
End of explanation
"""
heads_count_10000 = coin_experiment(10000)
sns.distplot(heads_count_10000, kde=False)
# 앞면이 24회 이상 나오는 경우들의 어레이
mask = heads_count_10000>=24
heads_count_10000[mask].shape[0]
"""
Explanation: As expected from the theory, getting 24 or more heads in 30 tosses occurred about once.
Now let us repeat the simulation above 10,000 times.
End of explanation
"""
def coin_experiment_2(num_repeat):
experiment = np.random.randint(0,2,[num_repeat, num_tosses])
return experiment.sum(axis=1)
heads_count = coin_experiment_2(100000)
sns.distplot(heads_count, kde=False)
mask = heads_count>=24
heads_count[mask].shape[0]/100000
"""
Explanation: Running the experiment above repeatedly, we can confirm results close to the theoretically computed probability of 0.0007, i.e., about 7 occurrences in 10,000 runs.
Is the coin fair?
The simulation likewise shows that the probability of getting 24 or more heads in 30 tosses falls far short of 5%.
In such a case we say that the null hypothesis (H0) that the coin is fair cannot be accepted;
that is, it must be rejected.
Summarizing what we have covered about hypothesis testing:
Six steps of hypothesis testing
1) Decide on the hypothesis to test.
* Null hypothesis: here, "the coin is fair"
2) Choose the statistical method used to test the hypothesis.
* Here, binomial probabilities
3) Set the rejection region.
* Here, the top 5% by number of heads
* Meaning the probability of 24 heads must be at least 5% for the coin to be accepted as fair.
4) Find the p-value for the test statistic.
* Here, the probability of the event in the hypothesis was computed by simulation.
* In some cases it can also be computed theoretically.
5) Check whether the sample result falls in the rejection region.
* Here, check whether it is 5% or less.
6) Make a decision.
* Here, the null hypothesis "the coin is fair" is rejected.
Exercises
Exercise
Implement the coin_experiment function for repeated simulation without using a for loop.
Sample solution:
End of explanation
"""
from numpy.random import binomial
"""
Explanation: Exercise
The numpy.random module already provides a function named binomial that draws samples from the binomial distribution covered so far.
End of explanation
"""
an_experiment = binomial(30, 0.5, 10000)
an_experiment
"""
Explanation: The code below shows the result of drawing 10,000 samples of a random variable following B(30, 0.5).
End of explanation
"""
an_experiment[an_experiment>=24].shape[0]
"""
Explanation: Using this result, we can confirm that it agrees with the analysis carried out earlier.
End of explanation
"""
|
ozorich/phys202-2015-work | assignments/assignment04/MatplotlibExercises.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Visualization 1: Matplotlib Basics Exercises
End of explanation
"""
y=np.random.randn(30)
x=np.random.randn(30)
plt.scatter(x,y, color="r",s=50, marker='x',alpha=.9)
plt.xlabel('Random Values for X')
plt.ylabel('Random Values for Y')
plt.title("My Random Values")
"""
Explanation: Scatter plots
Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
Generate random data using np.random.randn.
Style the markers (color, size, shape, alpha) appropriately.
Include an x and y label and title.
End of explanation
"""
data=np.random.randn(50)
plt.hist(data, bins=10,color='g',align='left')
plt.xlabel('Value')
plt.ylabel('Number of Random Numbers')
plt.title('My Histogram')
"""
Explanation: Histogram
Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
Generate random data using np.random.randn.
Figure out how to set the number of histogram bins and other style options.
Include an x and y label and title.
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/deepdive/04_advanced_preprocessing/labs/taxicab_traffic/deploy.ipynb | apache-2.0 | !gsutil cp -r $MODEL_PATH/* gs://$BUCKET/taxifare/model/
"""
Explanation: Deploy for Online Prediction
To get our predictions, in addition to the features provided by the client, we also need to fetch the latest traffic information from BigQuery. We then combine these and invoke our tensorflow model. This is visualized by the 'on-demand' portion (red arrows) in the below diagram:
<img src="../../taxicab_traffic/assets/architecture.png" >
To do this we'll take advantage of AI Platforms Custom Prediction Routines which allows us to execute custom python code in response to every online prediction request. There are 5 steps to creating a custom prediction routine:
Upload Model Artifacts to GCS
Implement Predictor interface
Package the prediction code and dependencies
Deploy
Invoke API
1. Upload Model Artifacts to GCS
Here we upload our model weights so that AI Platform can access them.
End of explanation
"""
%%writefile predictor.py
import tensorflow as tf
from google.cloud import bigquery
PROJECT_ID = 'will_be_replaced'
class TaxifarePredictor(object):
def __init__(self, predict_fn):
self.predict_fn = predict_fn
def predict(self, instances, **kwargs):
bq = bigquery.Client(PROJECT_ID)
query_string = """
###TODO###
"""
trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
instances['trips_last_5min'] = [trips for _ in range(len(list(instances.items())[0][1]))]
predictions = self.predict_fn(instances)
        return predictions['predictions'].tolist() # convert to list so it is JSON serializable (requirement)
@classmethod
def from_path(cls, model_dir):
predict_fn = tf.contrib.predictor.from_saved_model(model_dir,'predict')
return cls(predict_fn)
!sed -i -e 's/will_be_replaced/{PROJECT_ID}/g' predictor.py
"""
Explanation: 2. Implement Predictor Interface
Interface Spec: https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routines#predictor-class
This tells AI Platform how to load the model artifacts, and is where we specify our custom prediction code.
Exercise 1: Complete the SQL query_string to return the latest (proxy) traffic information. To check your answer reference the solution.
Note: the correct PROJECT_ID will automatically be inserted using the bash sed command in the subsequent cell.
End of explanation
"""
import predictor
instances = {'dayofweek' : [6,5],
'hourofday' : [12,11],
'pickuplon' : [-73.99,-73.99],
'pickuplat' : [40.758,40.758],
'dropofflat' : [40.742,40.758],
'dropofflon' : [-73.97,-73.97]}
predictor = predictor.TaxifarePredictor.from_path(MODEL_PATH)
predictor.predict(instances)
"""
Explanation: Test Predictor Class Works Locally
End of explanation
"""
%%writefile setup.py
from setuptools import setup
setup(
name='taxifare_custom_predict_code',
version='0.1',
scripts=['predictor.py'],
install_requires=[
'google-cloud-bigquery==1.16.0',
])
!python setup.py sdist --formats=gztar
!gsutil cp dist/taxifare_custom_predict_code-0.1.tar.gz gs://$BUCKET/taxifare/predict_code/
"""
Explanation: 3. Package Predictor Class and Dependencies
We must package the predictor as a tar.gz source distribution package. Instructions for this are specified here. The AI Platform runtime comes preinstalled with several packages listed here. However it does not come with google-cloud-bigquery so we list that as a dependency below.
End of explanation
"""
!gcloud beta ai-platform models create $MODEL_NAME --regions us-central1 --enable-logging --enable-console-logging
#!gcloud ai-platform versions delete $VERSION_NAME --model taxifare --quiet
!gcloud beta ai-platform versions create $VERSION_NAME \
--model $MODEL_NAME \
--origin gs://$BUCKET/taxifare/model \
--service-account $(gcloud projects list --filter="$PROJECT_ID" --format="value(PROJECT_NUMBER)")-compute@developer.gserviceaccount.com \
--runtime-version 1.14 \
--python-version 3.5 \
--package-uris gs://$BUCKET/taxifare/predict_code/taxifare_custom_predict_code-0.1.tar.gz \
--prediction-class predictor.TaxifarePredictor
"""
Explanation: 4. Deploy
This is similar to how we deploy standard models to AI Platform, with a few extra command line arguments.
Note the use of the --service-acount parameter below.
The default service account does not have permissions to read from BigQuery, so we specify a different service account that does have permission.
Specifically we use the Compute Engine default service account which has the IAM project editor role.
End of explanation
"""
import googleapiclient.discovery
instances = {'dayofweek' : [6],
'hourofday' : [12],
'pickuplon' : [-73.99],
'pickuplat' : [40.758],
'dropofflat' : [40.742],
'dropofflon' : [-73.97]}
service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT_ID, MODEL_NAME, VERSION_NAME)
response = service.projects().predict(
name=name,
body={'instances': instances}
).execute()
if 'error' in response:
raise RuntimeError(response['error'])
else:
print(response['predictions'])
"""
Explanation: 5. Invoke API
Warning: You will see ImportError: file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth when you run this. While it looks like an error this is actually just a warning and is safe to ignore, the subsequent cell will still work.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/nasa-giss/cmip6/models/sandbox-3/seaice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-3', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: SANDBOX-3
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:21
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but an assumed distribution, and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
lenovor/notes-on-dirichlet-processes | 2015-09-02-fitting-a-mixture-model.ipynb | mit | %matplotlib inline
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
from scipy import stats
from collections import namedtuple, Counter
"""
Explanation: Fitting a Mixture Model with Gibbs Sampling
End of explanation
"""
# pandas removed Series.from_csv; read_csv with these options is equivalent
data = pd.read_csv("clusters.csv", header=None, index_col=0).iloc[:, 0]
_=data.hist(bins=20)
data.size
"""
Explanation: Suppose we receive some data that looks like the following:
End of explanation
"""
SuffStat = namedtuple('SuffStat', 'theta N')
def update_suffstats(state):
    for cluster_id, N in Counter(state['assignment']).items():
points_in_cluster = [x
for x, cid in zip(state['data_'], state['assignment'])
if cid == cluster_id
]
mean = np.array(points_in_cluster).mean()
state['suffstats'][cluster_id] = SuffStat(mean, N)
def initial_state():
num_clusters = 3
alpha = 1.0
cluster_ids = range(num_clusters)
state = {
'cluster_ids_': cluster_ids,
'data_': data,
'num_clusters_': num_clusters,
'cluster_variance_': .01,
'alpha_': alpha,
'hyperparameters_': {
"mean": 0,
"variance": 1,
},
'suffstats': [None, None, None],
'assignment': [random.choice(cluster_ids) for _ in data],
'pi': [alpha / num_clusters for _ in cluster_ids],
'cluster_means': [-1, 0, 1]
}
update_suffstats(state)
return state
state = initial_state()
for k in state:
    print(k)
"""
Explanation: It appears that these data exist in three separate clusters. We want to develop a method for finding these latent clusters. One way to start developing a method is to attempt to describe the process that may have generated these data.
For simplicity and sanity, let's assume that each data point is generated independently of the other. Moreover, we will assume that within each cluster, the data points are identically distributed. In this case, we will assume each cluster is normally distributed and that each cluster has the same variance, $\sigma^2$.
Given these assumptions, our data could have been generated by the following process. For each data point, randomly select 1 of 3 clusters from the distribution $\text{Discrete}(\pi_1, \pi_2, \pi_3)$. Each cluster $k$ corresponds to a parameter $\theta_k$; given the chosen cluster, sample a data point from $\mathcal{N}(\theta_k, \sigma^2)$.
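This generative story can be simulated directly. A minimal sketch with made-up values for $\pi$, the $\theta_k$, and $\sigma$ (these are not the parameters behind the dataset above):

```python
import numpy as np

rng = np.random.default_rng(0)
pi = [0.3, 0.4, 0.3]          # assumed mixing proportions (illustrative)
thetas = [-1.0, 0.0, 1.0]     # assumed cluster means (illustrative)
sigma = 0.1                   # shared within-cluster standard deviation

z = rng.choice(3, size=1000, p=pi)               # latent cluster of each point
x = rng.normal([thetas[k] for k in z], sigma)    # observed data
```

Re-running with other parameter values is a cheap way to build intuition for how $\pi$ and $\sigma$ shape the histogram.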
Equivalently, we could consider these data to be generated from a probability distribution with this probability density function:
$$
p(x_i \,|\, \pi, \theta_1, \theta_2, \theta_3, \sigma)=
\sum_{k=1}^3 \pi_k\cdot
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left\{
\frac{-(x_i-\theta_k)^2}{2\sigma^2}
\right\}
$$
where $\pi$ is a 3-dimensional vector giving the mixing proportions. In other words, $\pi_k$ describes the proportion of points that occur in cluster $k$.
That is, the probability distribution describing $x$ is a linear combination of normal distributions.
We want to use this generative model to formulate an algorithm for determining the particular parameters that generated the dataset above. The $\pi$ vector is unknown to us, as is each cluster mean $\theta_k$.
We would also like to know $z_i\in\{1, 2, 3\}$, the latent cluster for each point. It turns out that introducing $z_i$ into our model will help us solve for the other values.
The joint distribution of our observed data (data) along with the assignment variables is given by:
\begin{align}
p(\mathbf{x}, \mathbf{z} \,|\, \pi, \theta_1, \theta_2, \theta_3, \sigma)&=
p(\mathbf{z} \,|\, \pi)
p(\mathbf{x} \,|\, \mathbf{z}, \theta_1, \theta_2, \theta_3, \sigma)\\
&= \prod_{i=1}^N p(z_i \,|\, \pi)
\prod_{i=1}^N p(x_i \,|\, z_i, \theta_1, \theta_2, \theta_3, \sigma) \\
&= \prod_{i=1}^N \pi_{z_i}
\prod_{i=1}^N
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left\{
\frac{-(x_i-\theta_{z_i})^2}{2\sigma^2}
\right\}\\
&= \prod_{i=1}^N
\left(
\pi_{z_i}
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left\{
\frac{-(x_i-\theta_{z_i})^2}{2\sigma^2}
\right\}
\right)\\
&=
\prod_i^n
\prod_k^K
\left(
\pi_k
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left\{
\frac{-(x_i-\theta_k)^2}{2\sigma^2}
\right\}
\right)^{\delta(z_i, k)}
\end{align}
Keeping Everything Straight
Before moving on, we need to devise a way to keep all our data and parameters straight. Following ideas suggested by Keith Bonawitz, let's define a "state" object to store all of this data.
It won't yet be clear why we are defining some components of state, however we will use each part eventually! As an attempt at clarity, I am using a trailing underscore in the names of members that are fixed. We will update the other parameters as we try to fit the model.
End of explanation
"""
def log_assignment_score(data_id, cluster_id, state):
"""log p(z_i=k \,|\, \cdot)
We compute these scores in log space for numerical stability.
"""
x = state['data_'][data_id]
theta = state['cluster_means'][cluster_id]
var = state['cluster_variance_']
log_pi = np.log(state['pi'][cluster_id])
    return log_pi + stats.norm.logpdf(x, theta, np.sqrt(var))  # scipy's scale parameter is the std dev, not the variance
def assignment_probs(data_id, state):
    """p(z_i=cid \,|\, \cdot) for cid in cluster_ids
    """
    scores = [log_assignment_score(data_id, cid, state) for cid in state['cluster_ids_']]
    scores = np.exp(np.array(scores))
    return scores / scores.sum()
def sample_assignment(data_id, state):
    """Sample cluster assignment for data_id given current state
    cf Step 1 of Algorithm 2.1 in Sudderth 2006
    """
    p = assignment_probs(data_id, state)
    return np.random.choice(state['cluster_ids_'], p=p)
def update_assignment(state):
"""Update cluster assignment for each data point given current state
cf Step 1 of Algorithm 2.1 in Sudderth 2006
"""
for data_id, x in enumerate(state['data_']):
state['assignment'][data_id] = sample_assignment(data_id, state)
update_suffstats(state)
"""
Explanation: Gibbs Sampling
The theory of Gibbs sampling tells us that given some data $\bf y$ and a probability distribution $p$ parameterized by $\gamma_1, \ldots, \gamma_d$, we can successively draw samples from the distribution by sampling from
$$\gamma_j^{(t)}\sim p(\gamma_j \,|\, \gamma_{\neg j}^{(t-1)})$$
where $\gamma_{\neg j}^{(t-1)}$ is all current values of $\gamma_i$ except for $\gamma_j$. If we sample long enough, these $\gamma_j$ values will be random samples from $p$.
In deriving a Gibbs sampler, it is often helpful to observe that
$$
p(\gamma_j \,|\, \gamma_{\neg j})
= \frac{
p(\gamma_1,\ldots,\gamma_d)
}{
p(\gamma_{\neg j})
} \propto p(\gamma_1,\ldots,\gamma_d).
$$
The conditional distribution is proportional to the joint distribution. We will get a lot of mileage from this simple observation by dropping constant terms from the joint distribution (relative to the parameters we are conditioned on).
The $\gamma$ values in our model are each of the $\theta_k$ values, the $z_i$ values, and the $\pi_k$ values. Thus, we need to derive the conditional distributions for each of these.
Many derivations of Gibbs samplers that I have seen rely on a lot of handwaving and casual appeals to conjugacy. I have tried to add more mathematical details here. I would gladly accept feedback on how to more clearly present the derivations! I have also tried to make the derivations more concrete by immediately providing code to do the computations in this specific case.
Conditional Distribution of Assignment
For brevity, we will use
$$
p(z_i=k \,|\, \cdot)=
p(z_i=k \,|\,
z_{\neg i}, \pi,
\theta_1, \theta_2, \theta_3, \sigma, \bf x
).
$$
Because cluster assignments are conditionally independent given the cluster weights and parameters,
\begin{align}
p(z_i=k \,|\, \cdot)
&\propto
\prod_i^n
\prod_k^K
\left(
\pi_k
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left\{
\frac{-(x_i-\theta_k)^2}{2\sigma^2}
\right\}
\right)^{\delta(z_i, k)} \\
&\propto
\pi_k \cdot
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left\{
\frac{-(x_i-\theta_k)^2}{2\sigma^2}
\right\}
\end{align}
This equation intuitively makes sense: point $i$ is more likely to be in cluster $k$ if $k$ is itself probable ($\pi_k\gg 0$) and $x_i$ is close to the mean of the cluster $\theta_k$.
For each data point $i$, we can compute $p(z_i=k \,|\, \cdot)$ for each of cluster $k$. These values are the unnormalized parameters to a discrete distribution from which we can sample assignments.
Below, we define functions for doing this sampling. sample_assignment will generate a sample from the posterior assignment distribution for the specified data point. update_assignment will sample from the posterior assignment for each data point and update the state object.
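One practical caveat: exponentiating very negative log scores, as the normalization step does, can underflow to zero. A common safeguard is the log-sum-exp shift — a sketch (the `normalize_log_scores` name is illustrative; the functions below exponentiate directly, which is fine at this scale):

```python
import numpy as np

def normalize_log_scores(log_scores):
    # Turn unnormalized log scores into probabilities without underflow.
    log_scores = np.asarray(log_scores, dtype=float)
    shifted = log_scores - log_scores.max()   # largest score becomes 0
    p = np.exp(shifted)
    return p / p.sum()

probs = normalize_log_scores([-1000.0, -1001.0, -1002.0])  # naive np.exp would underflow here
```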
End of explanation
"""
def sample_mixture_weights(state):
"""Sample new mixture weights from current state according to
a Dirichlet distribution
cf Step 2 of Algorithm 2.1 in Sudderth 2006
"""
ss = state['suffstats']
alpha = [ss[cid].N + state['alpha_'] / state['num_clusters_']
for cid in state['cluster_ids_']]
return stats.dirichlet(alpha).rvs(size=1).flatten()
def update_mixture_weights(state):
"""Update state with new mixture weights from current state
sampled according to a Dirichlet distribution
cf Step 2 of Algorithm 2.1 in Sudderth 2006
"""
state['pi'] = sample_mixture_weights(state)
"""
Explanation: Conditional Distribution of Mixture Weights
We can similarly derive the conditional distributions of mixture weights by an application of Bayes theorem. Instead of updating each component of $\pi$ separately, we update them together (this is called blocked Gibbs).
\begin{align}
p(\pi \,|\, \cdot)&=
p(\pi \,|\,
\mathbf{z},
\theta_1, \theta_2, \theta_3,
\sigma, \mathbf{x}, \alpha
)\\
&\propto
p(\pi \,|\,
\mathbf{x},
\theta_1, \theta_2, \theta_3,
\sigma, \alpha
)
p(\mathbf{z} \,|\,
\mathbf{x},
\theta_1, \theta_2, \theta_3,
\sigma, \pi, \alpha
)\\
&=
p(\pi \,|\,
\alpha
)
p(\mathbf{z} \,|\,
\mathbf{x},
\theta_1, \theta_2, \theta_3,
\sigma, \pi, \alpha
)\\
&=
\prod_{k=1}^K \pi_k^{\alpha/K - 1}
\prod_{k=1}^K \pi_k^{\sum_{i=1}^N \delta(z_i, k)} \\
&=\prod_{k=1}^3 \pi_k^{\alpha/K+\sum_{i=1}^N \delta(z_i, k)-1}\\
&\propto \text{Dir}\left(
\sum_{i=1}^N \delta(z_i, 1)+\alpha/K,
\sum_{i=1}^N \delta(z_i, 2)+\alpha/K,
\sum_{i=1}^N \delta(z_i, 3)+\alpha/K
\right)
\end{align}
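As a quick numerical sanity check of this posterior (with made-up occupancy counts), its mean should sit close to the empirical cluster proportions:

```python
import numpy as np
from scipy import stats

counts = np.array([300, 400, 300])   # hypothetical cluster occupancies
alpha, K = 1.0, 3
posterior = stats.dirichlet(counts + alpha / K)

pi_sample = posterior.rvs(size=1, random_state=0).flatten()
```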
Here are Python functions to sample from the mixture weights given the current state and to update the mixture weights in the state object.
End of explanation
"""
def sample_cluster_mean(cluster_id, state):
cluster_var = state['cluster_variance_']
hp_mean = state['hyperparameters_']['mean']
hp_var = state['hyperparameters_']['variance']
ss = state['suffstats'][cluster_id]
numerator = hp_mean / hp_var + ss.theta * ss.N / cluster_var
denominator = (1.0 / hp_var + ss.N / cluster_var)
posterior_mu = numerator / denominator
posterior_var = 1.0 / denominator
return stats.norm(posterior_mu, np.sqrt(posterior_var)).rvs()
def update_cluster_means(state):
state['cluster_means'] = [sample_cluster_mean(cid, state)
for cid in state['cluster_ids_']]
"""
Explanation: Conditional Distribution of Cluster Means
Finally, we need to compute the conditional distribution for the cluster means.
We assume the unknown cluster means are distributed according to a normal distribution with hyperparameter mean $\lambda_1$ and variance $\lambda_2^2$. The final step in this derivation comes from the normal-normal conjugacy. (For more information, see section 2.3 of this and section 6.2 of this.)
\begin{align}
p(\theta_k \,|\, \cdot)&=
p(\theta_k \,|\,
\mathbf{z}, \pi,
\theta_{\neg k},
\sigma, \mathbf{x}, \lambda_1, \lambda_2
) \\
&\propto p(\left\{x_i \,|\, z_i=k\right\} \,|\, \mathbf{z}, \pi,
\theta_1, \theta_2, \theta_3,
\sigma, \lambda_1, \lambda_2) \cdot\\
&\phantom{==}p(\theta_k \,|\, \mathbf{z}, \pi,
\theta_{\neg k},
\sigma, \lambda_1, \lambda_2)\\
&\propto p(\left\{x_i \,|\, z_i=k\right\} \,|\, \mathbf{z},
\theta_k, \sigma)
p(\theta_k \,|\, \lambda_1, \lambda_2)\\
&= \mathcal{N}(\theta_k \,|\, \mu_n, \sigma_n)
\end{align}
where
$$ \sigma_n^2 = \frac{1}{
\frac{1}{\lambda_2^2} + \frac{N_k}{\sigma^2}
} $$
and
$$\mu_n = \sigma_n^2
\left(
\frac{\lambda_1}{\lambda_2^2} +
\frac{N_k\bar{x}_k}{\sigma^2}
\right)
$$
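With many points in a cluster, these formulas let the data dominate the prior. A quick check with illustrative numbers (not taken from the dataset):

```python
# Prior N(lambda_1, lambda_2^2), likelihood variance sigma^2,
# N_k points in the cluster with empirical mean xbar_k.
lam1, lam2_sq = 0.0, 1.0
sigma_sq = 0.01
N_k, xbar_k = 50, 0.7

post_var = 1.0 / (1.0 / lam2_sq + N_k / sigma_sq)
post_mu = post_var * (lam1 / lam2_sq + N_k * xbar_k / sigma_sq)
```

The posterior mean lands essentially on the cluster's empirical mean, and the posterior variance is far tighter than the prior's.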
Here is the code for sampling those means and for updating our state accordingly.
End of explanation
"""
def gibbs_step(state):
update_assignment(state)
update_mixture_weights(state)
update_cluster_means(state)
"""
Explanation: Doing each of these three updates in sequence makes a complete Gibbs step for our mixture model. Here is a function to do that:
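In practice one usually discards an initial burn-in and keeps only every few-th state as an approximately independent posterior sample. A generic helper sketch (`run_chain` and its defaults are illustrative; `step_fn` would be a sampler like the one above):

```python
def run_chain(step_fn, state, num_samples=100, burn_in=50, thin=5):
    # Collect thinned samples after a burn-in; step_fn mutates state in place.
    samples = []
    for i in range(burn_in + num_samples * thin):
        step_fn(state)
        if i >= burn_in and (i - burn_in) % thin == 0:
            samples.append(dict(state))   # shallow snapshot of the state
    return samples
```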
End of explanation
"""
def plot_clusters(state):
gby = pd.DataFrame({
'data': state['data_'],
'assignment': state['assignment']}
).groupby(by='assignment')['data']
hist_data = [gby.get_group(cid).tolist()
for cid in gby.groups.keys()]
plt.hist(hist_data,
bins=20,
histtype='stepfilled', alpha=.5 )
plot_clusters(state)
"""
Explanation: Initially, we assigned each data point to a random cluster. We can see this by plotting a histogram of each cluster.
End of explanation
"""
for _ in range(5):
gibbs_step(state)
plot_clusters(state)
"""
Explanation: Each time we run gibbs_step, our state is updated with newly sampled assignments. Look what happens to our histogram after 5 steps:
End of explanation
"""
def log_likelihood(state):
"""Data log-likeliehood
Equation 2.153 in Sudderth
"""
ll = 0
for x in state['data_']:
pi = state['pi']
mean = state['cluster_means']
sd = np.sqrt(state['cluster_variance_'])
ll += np.log(np.dot(pi, stats.norm(mean, sd).pdf(x)))
return ll
state = initial_state()
ll = [log_likelihood(state)]
for _ in range(20):
gibbs_step(state)
ll.append(log_likelihood(state))
pd.Series(ll).plot()
"""
Explanation: Suddenly, we are seeing clusters that appear very similar to what we would intuitively expect: three Gaussian clusters.
Another way to see the progress made by the Gibbs sampler is to plot the change in the model's log-likelihood after each step. The log likelihood is given by:
$$
\log p(\mathbf{x} \,|\, \pi, \theta_1, \theta_2, \theta_3)
\propto \sum_x \log \left(
\sum_{k=1}^3 \pi_k \exp
\left\{
-(x-\theta_k)^2 / (2\sigma^2)
\right\}
\right)
$$
We can define this as a function of our state object:
End of explanation
"""
pd.Series(ll).plot(ylim=[-150, -100])
"""
Explanation: See that the log likelihood improves with iterations of the Gibbs sampler. This is what we should expect: the Gibbs sampler finds state configurations that make the data we have seem "likely". However, the likelihood isn't strictly monotonic: it jitters up and down. Though it behaves similarly, the Gibbs sampler isn't optimizing the likelihood function. In its steady state, it is sampling from the posterior distribution. The state after each step of the Gibbs sampler is a sample from the posterior.
End of explanation
"""
|
espressomd/espresso | doc/tutorials/ferrofluid/ferrofluid_part2.ipynb | gpl-3.0 | import espressomd
import espressomd.magnetostatics
espressomd.assert_features(['DIPOLES', 'DP3M', 'LENNARD_JONES'])
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 18})
import numpy as np
import tqdm
"""
Explanation: Ferrofluid - Part 2
Table of Contents
Applying an external magnetic field
Magnetization curve
Remark: The equilibration and sampling times used in this tutorial would not be sufficient for scientific purposes, but they are long enough to get at least a qualitative insight into the behaviour of ferrofluids. They have been shortened so we achieve reasonable computation times for the purpose of a tutorial.
Applying an external magnetic field
In this part we want to investigate the influence of a homogeneous external magnetic field exposed to a ferrofluid system.
We import all necessary packages and check for the required ESPResSo features
End of explanation
"""
# Lennard-Jones parameters
LJ_SIGMA = 1.
LJ_EPSILON = 1.
LJ_CUT = 2**(1. / 6.) * LJ_SIGMA
# Particles
N_PART = 700
# Area fraction of the mono-layer
PHI = 0.06
# Dipolar interaction parameter lambda = MU_0 m^2 /(4 pi sigma^3 kT)
DIP_LAMBDA = 4.
# Temperature
KT = 1.0
# Friction coefficient
GAMMA = 1.0
# Time step
TIME_STEP = 0.01
# Langevin parameter ALPHA = MU_0 m H / kT
ALPHA = 10.
# vacuum permeability
MU_0 = 1.
"""
Explanation: and set up the simulation parameters where we introduce a new dimensionless parameter
\begin{equation}
\alpha = \frac{\mu B}{k_{\text{B}}T} = \frac{\mu \mu_0 H}{k_{\text{B}}T}
\end{equation}
which is called Langevin parameter. We intentionally choose a relatively high volume fraction $\phi$ and dipolar interaction parameter $\lambda$ to clearly see the influence of the dipole-dipole interaction
End of explanation
"""
# System setup
box_size = (N_PART * np.pi * (LJ_SIGMA / 2.)**2. / PHI)**0.5
print("Box size", box_size)
# Note that the dipolar P3M and dipolar layer correction need a cubic
# simulation box for technical reasons.
system = espressomd.System(box_l=(box_size, box_size, box_size))
system.time_step = TIME_STEP
# Lennard-Jones interaction
system.non_bonded_inter[0, 0].lennard_jones.set_params(epsilon=LJ_EPSILON, sigma=LJ_SIGMA, cutoff=LJ_CUT, shift="auto")
# Random dipole moments
np.random.seed(seed=1)
dip_phi = 2. * np.pi * np.random.random((N_PART, 1))
dip_cos_theta = 2 * np.random.random((N_PART, 1)) - 1
dip_sin_theta = np.sin(np.arccos(dip_cos_theta))
dip = np.hstack((
dip_sin_theta * np.sin(dip_phi),
dip_sin_theta * np.cos(dip_phi),
dip_cos_theta))
# Random positions in the monolayer
pos = box_size * np.hstack((np.random.random((N_PART, 2)), np.zeros((N_PART, 1))))
# Add particles
particles = system.part.add(pos=pos, rotation=N_PART * [(True, True, True)], dip=dip, fix=N_PART * [(False, False, True)])
# Remove overlap between particles by means of the steepest descent method
system.integrator.set_steepest_descent(
f_max=0, gamma=0.1, max_displacement=0.05)
while system.analysis.energy()["total"] > 5 * KT * N_PART:
    system.integrator.run(20)
# Switch to velocity Verlet integrator
system.integrator.set_vv()
system.thermostat.set_langevin(kT=KT, gamma=GAMMA, seed=1)
# tune verlet list skin
system.cell_system.tune_skin(min_skin=0.4, max_skin=2., tol=0.2, int_steps=100)
# Setup dipolar P3M and dipolar layer correction (DLC)
dp3m = espressomd.magnetostatics.DipolarP3M(accuracy=5E-4, prefactor=DIP_LAMBDA * LJ_SIGMA**3 * KT)
mdlc = espressomd.magnetostatics.DLC(actor=dp3m, maxPWerror=1E-4, gap_size=box_size - LJ_SIGMA)
system.actors.add(mdlc)
# tune verlet list skin again
system.cell_system.tune_skin(min_skin=0.4, max_skin=2., tol=0.2, int_steps=100)
# print skin value
print(f'tuned skin = {system.cell_system.skin:.2f}')
"""
Explanation: Now we set up the system. As in part I, the orientation of the dipole moments is set directly on the particles, whereas the magnitude of the moments is taken into account when determining the prefactor of the dipolar P3M (for more details see part I).
Hint:
It should be noted that we seed both the Langevin thermostat and the random number generator of numpy. The latter means that the initial configuration of our system is the same every time this script is executed. As the time evolution of the system depends not only on the Langevin thermostat but also on the numerical accuracy of DP3M and DLC (the tuned parameters are slightly different every time), it is only partly predefined. You can change the seeds to simulate with a different initial configuration and a guaranteed different time evolution.
End of explanation
"""
# magnetic field times dipole moment
H_dipm = ALPHA * KT
H_field = [H_dipm, 0, 0]
"""
Explanation: We now apply the external magnetic field which is
\begin{equation}
B = \mu_0 H = \frac{\alpha~k_{\text{B}}T}{\mu}
\end{equation}
As only the current orientation of the dipole moments, i.e. the unit vector of the dipole moments, is saved in the particle list but not their magnitude, we have to use $B\cdot \mu$ as the strength of the external magnetic field.
We will apply the field in x-direction using the class <tt>constraints</tt> of ESPResSo
End of explanation
"""
# Equilibrate
print("Equilibration...")
equil_rounds = 10
equil_steps = 200
for i in tqdm.trange(equil_rounds):
    system.integrator.run(equil_steps)
"""
Explanation: Exercise:
Define a homogeneous magnetic field constraint using H_field and add it to the system's constraints.
```python
H_constraint = espressomd.constraints.HomogeneousMagneticField(H=H_field)
system.constraints.add(H_constraint)
```
Equilibrate the system.
End of explanation
"""
plt.figure(figsize=(10, 10))
plt.xlim(0, box_size)
plt.ylim(0, box_size)
plt.xlabel('x-position', fontsize=20)
plt.ylabel('y-position', fontsize=20)
plt.plot(particles.pos_folded[:, 0], particles.pos_folded[:, 1], 'o')
plt.show()
"""
Explanation: Now we can visualize the current state and see that the particles mostly create chains oriented in the direction of the external magnetic field. Also some monomers should be present.
End of explanation
"""
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import tempfile
import base64
VIDEO_TAG = """<video controls>
<source src="data:video/x-m4v;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>"""
def anim_to_html(anim):
    if not hasattr(anim, '_encoded_video'):
        with tempfile.NamedTemporaryFile(suffix='.mp4') as f:
            anim.save(f.name, fps=20, extra_args=['-vcodec', 'libx264'])
            with open(f.name, "rb") as g:
                video = g.read()
        anim._encoded_video = base64.b64encode(video).decode('ascii')
        plt.close(anim._fig)
    return VIDEO_TAG.format(anim._encoded_video)
animation.Animation._repr_html_ = anim_to_html
def init():
    # Set x and y range
    ax.set_ylim(0, box_size)
    ax.set_xlim(0, box_size)
    xdata, ydata = [], []
    part.set_data(xdata, ydata)
    return part,

def run(i):
    system.integrator.run(50)
    # Save current system state as a plot
    xdata, ydata = particles.pos_folded[:, 0], particles.pos_folded[:, 1]
    ax.figure.canvas.draw()
    part.set_data(xdata, ydata)
    print(f'progress: {(i + 1):3.0f}%', end='\r')
    return part,
"""
Explanation: Video of the development of the system
You may want to get an insight into how the system develops over time. Thus we now create a function which saves a video and embeds it in an HTML string, so we can show a video of the system's development.
End of explanation
"""
fig, ax = plt.subplots(figsize=(10, 10))
part, = ax.plot([], [], 'o')
animation.FuncAnimation(fig, run, frames=100, blit=True, interval=0, repeat=False, init_func=init)
"""
Explanation: We now can start the sampling over the <tt>animation</tt> class of <tt>matplotlib</tt>
End of explanation
"""
# Dipolar interaction parameter lambda = MU_0 m^2 /(4 pi sigma^3 kT)
DIP_LAMBDA = 1.
# increase time step
TIME_STEP = 0.02
# dipole moment
dipm = np.sqrt(4 * np.pi * DIP_LAMBDA * LJ_SIGMA**3. * KT / MU_0)
# remove all particles
system.part.clear()
system.actors.clear()
system.thermostat.turn_off()
# Random dipole moments
dip_phi = 2. * np.pi * np.random.random((N_PART, 1))
dip_cos_theta = 2 * np.random.random((N_PART, 1)) - 1
dip_sin_theta = np.sin(np.arccos(dip_cos_theta))
dip = np.hstack((
dip_sin_theta * np.sin(dip_phi),
dip_sin_theta * np.cos(dip_phi),
dip_cos_theta))
# Random positions in the monolayer
pos = box_size * np.hstack((np.random.random((N_PART, 2)), np.zeros((N_PART, 1))))
# Add particles
particles = system.part.add(pos=pos, rotation=N_PART * [(True, True, True)], dip=dip, fix=N_PART * [(False, False, True)])
# Remove overlap between particles by means of the steepest descent method
system.integrator.set_steepest_descent(f_max=0, gamma=0.1, max_displacement=0.05)
while system.analysis.energy()["total"] > 5 * KT * N_PART:
    system.integrator.run(20)
# Switch to velocity Verlet integrator
system.integrator.set_vv()
system.thermostat.set_langevin(kT=KT, gamma=GAMMA, seed=1)
# tune verlet list skin
system.cell_system.tune_skin(min_skin=0.4, max_skin=2., tol=0.2, int_steps=100)
# Setup dipolar P3M and dipolar layer correction
dp3m = espressomd.magnetostatics.DipolarP3M(accuracy=5E-4, prefactor=DIP_LAMBDA * LJ_SIGMA**3 * KT)
mdlc = espressomd.magnetostatics.DLC(actor=dp3m, maxPWerror=1E-4, gap_size=box_size - LJ_SIGMA)
system.actors.add(mdlc)
# tune verlet list skin again
system.cell_system.tune_skin(min_skin=0.4, max_skin=2., tol=0.2, int_steps=100)
"""
Explanation: In the visualization video we can see that the single chains break and connect to each other during time. Also some monomers are present which break from and connect to chains. If you want to have some more frames, i.e. a longer video, just adjust the <tt>frames</tt> parameter in <tt>FuncAnimation</tt>.
Magnetization curve
An important observable of a ferrofluid system is the magnetization $M$ of the system in direction of an external magnetic field $H$
\begin{equation}
M = \frac{\sum_i \mu_i^H}{V}
\end{equation}
where the index $H$ means the component of $\mu_i$ in direction of the external magnetic field $H$ and the sum runs over all particles. $V$ is the system's volume.
The magnetization plotted over the external field $H$ is called magnetization curve. For particles with non-interacting dipole moments there is an analytical solution
\begin{equation}
M = M_{\text{sat}}\cdot L(\alpha)
\end{equation}
with $L(\alpha)$ the Langevin function
\begin{equation}
L(\alpha) = \coth(\alpha)-\frac{1}{\alpha}
\end{equation}
and $\alpha$ the Langevin parameter
\begin{equation}
\alpha=\frac{\mu_0\mu}{k_{\text{B}}T}H = \frac{\mu}{k_{\text{B}}T}B
\end{equation}
$M_{\text{sat}}$ is the so-called saturation magnetization, which is the magnetization of a system where all dipole moments are aligned with each other. Thus it is the maximum of the magnetization. In our case all dipole moments are equal, thus
\begin{equation}
M_{\text{sat}} = \frac{N_{\text{part}}\cdot\mu}{V}
\end{equation}
For better comparability we now introduce a dimensionless magnetization
\begin{equation}
M^* = \frac{M}{M_{\text{sat}}} = \frac{\sum_i \mu_i^H}{N_{\text{part}}\cdot \mu}
\end{equation}
Thus the analytical solution for non-interacting dipole moments $M^*$ is simply the Langevin function.
For interacting dipole moments there are only approximations for the magnetization curve available.
Here we want to use the approximation of Ref. <a href='#[1]'>[1]</a> for a quasi two dimensional system, which reads with adjusted coefficients (Ref. <a href='#[1]'>[1]</a> used a different dipole-dipole interaction prefactor $\gamma = 1$)
\begin{equation}
M_{\parallel}^{\text{q2D}} = M_{\text{sat}} L(\alpha) \left( 1 + \mu_0\frac{1}{8} M_{\text{sat}} \frac{\mathrm{d} L(\alpha)}{\mathrm{d}B} \right)
\end{equation}
and
\begin{equation}
M_{\perp}^{\text{q2D}} = M_{\text{sat}} L(\alpha) \left( 1 - \mu_0\frac{1}{4} M_{\text{sat}} \frac{\mathrm{d} L(\alpha)}{\mathrm{d}B} \right)
\end{equation}
for the magnetization with an external magnetic field parallel and perpendicular to the monolayer plane, respectively. Here the dipole-dipole interaction is approximated as a small perturbation and
\begin{equation}
\frac{\mathrm{d} L(\alpha)}{\mathrm{d}B} = \left( \frac{1}{\alpha^2} - \frac{1}{\sinh^2(\alpha)} \right) \cdot \frac{\mu}{k_{\text{B}}T}
\end{equation}
By comparing the magnetization curve parallel $M_{\parallel}^{\text{q2D}}$ and perpendicular $M_{\perp}^{\text{q2D}}$ to the monolayer plane we can see that the magnetization is increased in the case of an external field parallel to the monolayer plane and decreased in the case of an external field perpendicular to the monolayer plane. The latter can be explained by the fact that an orientation of all single dipole moments perpendicular to the monolayer plane results in a configuration with a repulsive dipole-dipole interaction as the particles have no freedom of movement in the direction perpendicular to the monolayer plane. This counteracts the magnetization perpendicular to the monolayer plane.
We now want to use ESPResSo to get an estimation of how the magnetization curve is affected by the dipole-dipole interaction parallel and perpendicular to the monolayer plane and compare the results with the Langevin curve and the magnetization curves of Ref. <a href='#[1]'>[1]</a>.
For the sampling of the magnetization curve we set up a new system, where we decrease the dipolar interaction parameter $\lambda$ drastically. We do this as we want to compare our results with the approximation of Ref. <a href='#[1]'>[1]</a> which is only valid for small dipole-dipole interaction between the particles (decreasing the volume fraction $\phi$ would also be an appropriate choice). For smaller dipolar interaction parameters it is possible to increase the time step. We do this to get more uncorrelated measurements.
End of explanation
"""
alphas = np.array([0, 0.25, 0.5, 1, 2, 3, 4, 8])
"""
Explanation: To increase the performance we use the built-in observable <tt>MagneticDipoleMoment</tt> to calculate the dipole moment of the whole system. In our case this is only the orientation, as we never set the strength of the dipole moments on our particles.
Exercise:
Import the magnetic dipole moment observable and define an observable object dipm_tot.
Use particle slicing to pass all particle ids.
```python
import espressomd.observables
dipm_tot = espressomd.observables.MagneticDipoleMoment(ids=particles.id)
```
We use the dimensionless Langevin parameter $\alpha$ as the parameter for the external magnetic field. As the interesting part of the magnetization curve is the one for small external magnetic field strengths—for large external magnetic fields the magnetization goes into saturation in all cases—we increase the spacing between the Langevin parameters $\alpha$ up to higher values and write them into a list
End of explanation
"""
# dipole moment
dipm = np.sqrt(DIP_LAMBDA * 4 * np.pi * LJ_SIGMA**3. * KT / MU_0)
print(f'dipole moment = {dipm:.4f}')
"""
Explanation: For both the magnetization perpendicular and parallel to the monolayer plane we use the same system for every value of the Langevin parameter $\alpha$. Thus we exploit the fact that the system is already more or less equilibrated from the previous run, which saves some equilibration time. For scientific purposes one would use a new system for each value of the Langevin parameter to ensure that the systems are independent and no correlation effects are measured. Also, one would perform more than just one simulation for each value of $\alpha$ to increase the precision of the results.
Now we sample the magnetization for increasing $\alpha$ (increasing magnetic field strength) in direction perpendicular to the monolayer plane.
Exercise:
Complete the loop such that for every alpha a magnetic field of strength of the respective H_dipm is applied:
```python
# sampling with magnetic field perpendicular to monolayer plane (in z-direction)
# remove all constraints
system.constraints.clear()
# list of magnetization in field direction
magnetization_perp = np.full_like(alphas, np.nan)
# number of loops for sampling
loops = 500
for ndx, alpha in enumerate(pbar := tqdm.tqdm(alphas)):
    pbar.set_description(f"Sampling for α={alpha:.2f}")
    # set magnetic field constraint
    H_dipm = alpha * KT
    # < exercise >
    # equilibration
    for i in range(equil_rounds):
        system.integrator.run(equil_steps)
    # sampling
    magn_temp = 0.
    for i in range(loops):
        system.integrator.run(20)
        magn_temp += dipm_tot.calculate()[2]
    # save average magnetization
    magnetization_perp[ndx] = magn_temp / loops
    # remove constraint
    system.constraints.clear()
```
```python
# sampling with magnetic field perpendicular to monolayer plane (in z-direction)
# remove all constraints
system.constraints.clear()
# list of magnetization in field direction
magnetization_perp = np.full_like(alphas, np.nan)
# number of loops for sampling
loops = 500
for ndx, alpha in enumerate(pbar := tqdm.tqdm(alphas)):
    pbar.set_description(f"Sampling for α={alpha:.2f}")
    # set magnetic field constraint
    H_dipm = alpha * KT
    H_field = [0, 0, H_dipm]
    H_constraint = espressomd.constraints.HomogeneousMagneticField(H=H_field)
    system.constraints.add(H_constraint)
    # equilibration
    for i in range(equil_rounds):
        system.integrator.run(equil_steps)
    # sampling
    magn_temp = 0.
    for i in range(loops):
        system.integrator.run(20)
        magn_temp += dipm_tot.calculate()[2]
    # save average magnetization
    magnetization_perp[ndx] = magn_temp / loops
    # remove constraint
    system.constraints.clear()
```
and now we sample the magnetization for increasing $\alpha$ or increasing magnetic field in direction parallel to the monolayer plane.
Exercise:
Use the code from the previous exercise as a template.
Now sample the magnetization curve for a magnetic field parallel to the quasi-2D layer and store the calculated magnetizations in a list named magnetization_para (analogous to magnetization_perp).
Hint: Set up the field in $x$- or $y$-direction and sample the magnetization along the same axis.
```python
# sampling with magnetic field parallel to monolayer plane (in x-direction)
# remove all constraints
system.constraints.clear()
# list of magnetization in field direction
magnetization_para = np.full_like(alphas, np.nan)
# number of loops for sampling
loops = 500
for ndx, alpha in enumerate(pbar := tqdm.tqdm(alphas)):
    pbar.set_description(f"Sampling for α={alpha:.2f}")
    # set magnetic field constraint
    H_dipm = alpha * KT
    H_field = [H_dipm, 0, 0]
    H_constraint = espressomd.constraints.HomogeneousMagneticField(H=H_field)
    system.constraints.add(H_constraint)
    # equilibration
    for i in range(equil_rounds):
        system.integrator.run(equil_steps)
    # sampling
    magn_temp = 0
    for i in range(loops):
        system.integrator.run(20)
        magn_temp += dipm_tot.calculate()[0]
    # save average magnetization
    magnetization_para[ndx] = magn_temp / loops
    # remove constraint
    system.constraints.clear()
```
Now we can compare the resulting magnetization curves with the Langevin curve and the more advanced ones of Ref. <a href='#[1]'>[1]</a> by plotting all of them in one figure.
For the approximations of $M_{\parallel}^{\text{q2D}}$ and $M_{\perp}^{\text{q2D}}$ of Ref. <a href='#[1]'>[1]</a> we need the dipole moment of a single particle. Thus we calculate it from our dipolar interaction parameter $\lambda$
End of explanation
"""
M_sat = PHI * 4. / np.pi * 1. / (LJ_SIGMA**2.) * dipm
"""
Explanation: and the saturation magnetization by using
\begin{equation}
M_{\text{sat}} = \rho \mu = \phi \frac{4}{\pi \sigma^2} \mu
\end{equation}
thus
End of explanation
"""
def dL_dB(alpha):
    return (1. / (alpha**2.) - 1. / ((np.sinh(alpha))**2.)) * dipm / (KT)
"""
Explanation: Further we need the derivative of the Langevin function with respect to the external field $B$, thus we define the function
End of explanation
"""
# approximated magnetization curve for a field parallel to the monolayer plane
def magnetization_approx_para(alpha):
    return L(alpha) * (1. + MU_0 / 8. * M_sat * dL_dB(alpha))

# approximated magnetization curve for a field perpendicular to the monolayer plane
def magnetization_approx_perp(alpha):
    return L(alpha) * (1. - MU_0 / 4. * M_sat * dL_dB(alpha))
"""
Explanation: Now we define the approximated magnetization curves parallel and perpendicular to the monolayer plane
End of explanation
"""
# Langevin function
def L(x):
    return (np.cosh(x) / np.sinh(x)) - 1. / x
"""
Explanation: Now we define the Langevin function
End of explanation
"""
tritemio/pybroom | doc/notebooks/pybroom-example-multi-datasets.ipynb | mit
%matplotlib inline
%config InlineBackend.figure_format='retina' # for hi-dpi displays
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from lmfit import Model
import lmfit
print('lmfit: %s' % lmfit.__version__)
sns.set_style('whitegrid')
import pybroom as br
"""
Explanation: PyBroom Example - Multiple Datasets
This notebook is part of pybroom.
This notebook demonstrate using pybroom when fitting a set of curves (curve fitting) using lmfit.Model.
We will show that pybroom greatly simplifies comparing, filtering and plotting fit results
from multiple datasets.
End of explanation
"""
N = 20
x = np.linspace(-10, 10, 101)
peak1 = lmfit.models.GaussianModel(prefix='p1_')
peak2 = lmfit.models.GaussianModel(prefix='p2_')
model = peak1 + peak2
#params = model.make_params(p1_amplitude=1.5, p2_amplitude=1,
# p1_sigma=1, p2_sigma=1)
Y_data = np.zeros((N, x.size))
Y_data.shape
for i in range(Y_data.shape[0]):
    Y_data[i] = model.eval(x=x, p1_center=-1, p2_center=2,
                           p1_sigma=0.5, p2_sigma=1.5,
                           p1_height=1, p2_height=0.5)
Y_data += np.random.randn(*Y_data.shape)/10
plt.plot(x, Y_data.T, '-k', alpha=0.1);
"""
Explanation: Create Noisy Data
We start simulating N datasets which are identical except for the additive noise.
End of explanation
"""
model1 = lmfit.models.GaussianModel()
Results1 = [model1.fit(y, x=x) for y in Y_data]
"""
Explanation: Model Fitting
Single-peak model
Define and fit a single Gaussian model to the $N$ datasets:
End of explanation
"""
params = model.make_params(p1_center=0, p2_center=3,
p1_sigma=0.5, p2_sigma=1,
p1_amplitude=1, p2_amplitude=2)
Results = [model.fit(y, x=x, params=params) for y in Y_data]
"""
Explanation: Two-peaks model
Here, instead, we use a more appropriate Gaussian mixture model.
To fit the noisy data, the residuals (the difference between model and data)
are minimized in the least-squares sense.
End of explanation
"""
#print(Results[0].fit_report())
#Results[0].params.pretty_print()
"""
Explanation: Fit results from an lmfit Model can be inspected with
with fit_report or params.pretty_print():
End of explanation
"""
dg = br.glance(Results, var_names='dataset')
dg.drop(columns=['model', 'message']).head()
"""
Explanation: This is good for peeking at the results. However,
extracting these data from lmfit objects is quite a chore
and requires good knowledge of lmfit objects structure.
pybroom helps in this task: it extracts data from fit results and
returns familiar pandas DataFrame (in tidy format).
Thanks to the tidy format these data can be
much more easily manipulated, filtered and plotted.
Glance
A summary of the two-peaks model fit:
End of explanation
"""
dg1 = br.glance(Results1, var_names='dataset')
dg1.drop(columns=['model', 'message']).head()
"""
Explanation: A summary of the one-peak model fit:
End of explanation
"""
dt = br.tidy(Results, var_names='dataset')
"""
Explanation: Tidy
Tidy fit results for all the parameters:
End of explanation
"""
dt.query('dataset == 0')
"""
Explanation: Let's see the results for a single dataset:
End of explanation
"""
dt.query('name == "p1_center"').head()
dt.query('name == "p1_center"')['value'].std()
dt.query('name == "p2_center"')['value'].std()
"""
Explanation: or for a single parameter across datasets:
End of explanation
"""
dt.query('name == "p1_center"')['value'].hist()
dt.query('name == "p2_center"')['value'].hist(ax=plt.gca());
"""
Explanation: Note that there is a much larger error in fitting p2_center
than p1_center.
End of explanation
"""
da = br.augment(Results, var_names='dataset')
da1 = br.augment(Results1, var_names='dataset')
r = Results[0]
"""
Explanation: Augment
Tidy dataframe with data function of the independent variable ('x'). Columns include
the data being fitted, best fit, best fit components, residuals, etc.
End of explanation
"""
da.query('dataset == 0').head()
"""
Explanation: Let's see the results for a single dataset:
End of explanation
"""
da0 = da.query('dataset == 0')
plt.plot('x', 'data', data=da0, marker='o', ls='None')
plt.plot('x', "Model(gaussian, prefix='p1_')", data=da0, lw=2, ls='--')
plt.plot('x', "Model(gaussian, prefix='p2_')", data=da0, lw=2, ls='--')
plt.plot('x', 'best_fit', data=da0, lw=2);
plt.legend()
"""
Explanation: Plotting a single dataset is simplified compared to a manual plot:
End of explanation
"""
Results[0].plot_fit();
"""
Explanation: But keep in mind that, for a single dataset, we could
use the lmfit method as well (which is even simpler):
End of explanation
"""
grid = sns.FacetGrid(da.query('dataset < 6'), col="dataset", hue="dataset", col_wrap=3)
grid.map(plt.plot, 'x', 'data', marker='o', ls='None', ms=3, color='k')
grid.map(plt.plot, 'x', "Model(gaussian, prefix='p1_')", ls='--')
grid.map(plt.plot, 'x', "Model(gaussian, prefix='p2_')", ls='--')
grid.map(plt.plot, "x", "best_fit");
"""
Explanation: However, things become much more interesting when we want to plot multiple
datasets or models as in the next section.
Comparison of different datasets
End of explanation
"""
da['model'] = 'twopeaks'
da1['model'] = 'onepeak'
da_tot = pd.concat([da, da1], ignore_index=True)
"""
Explanation: Comparison of one- or two-peaks models
Here we plot a comparison of the two fitted models (one or two peaks)
for different datasets.
First we create a single tidy DataFrame with data from the two models:
End of explanation
"""
grid = sns.FacetGrid(da_tot.query('dataset < 6'), col="dataset", hue="model", col_wrap=3)
grid.map(plt.plot, 'x', 'data', marker='o', ls='None', ms=3, color='k')
grid.map(plt.plot, "x", "best_fit")
grid.add_legend();
"""
Explanation: Then we perform a facet plot with seaborn:
End of explanation
"""
statsmodels/statsmodels.github.io | v0.13.2/examples/notebooks/generated/tsa_arma_0.ipynb | bsd-3-clause
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.graphics.api import qqplot
"""
Explanation: Autoregressive Moving Average (ARMA): Sunspots data
End of explanation
"""
print(sm.datasets.sunspots.NOTE)
dta = sm.datasets.sunspots.load_pandas().data
dta.index = pd.Index(sm.tsa.datetools.dates_from_range("1700", "2008"))
dta.index.freq = dta.index.inferred_freq
del dta["YEAR"]
dta.plot(figsize=(12, 8))
fig = plt.figure(figsize=(12, 8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2)
arma_mod20 = ARIMA(dta, order=(2, 0, 0)).fit()
print(arma_mod20.params)
arma_mod30 = ARIMA(dta, order=(3, 0, 0)).fit()
print(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic)
print(arma_mod30.params)
print(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic)
"""
Explanation: Sunspots Data
End of explanation
"""
sm.stats.durbin_watson(arma_mod30.resid.values)
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
ax = arma_mod30.resid.plot(ax=ax)
resid = arma_mod30.resid
stats.normaltest(resid)
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
fig = qqplot(resid, line="q", ax=ax, fit=True)
fig = plt.figure(figsize=(12, 8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(resid.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)
r, q, p = sm.tsa.acf(resid.values.squeeze(), fft=True, qstat=True)
data = np.c_[np.arange(1, 25), r[1:], q, p]
table = pd.DataFrame(data, columns=["lag", "AC", "Q", "Prob(>Q)"])
print(table.set_index("lag"))
"""
Explanation: Does our model obey the theory?
End of explanation
"""
predict_sunspots = arma_mod30.predict("1990", "2012", dynamic=True)
print(predict_sunspots)
def mean_forecast_err(y, yhat):
    return y.sub(yhat).mean()
mean_forecast_err(dta.SUNACTIVITY, predict_sunspots)
"""
Explanation: This indicates a lack of fit.
In-sample dynamic prediction. How good does our model do?
End of explanation
"""
from statsmodels.tsa.arima_process import ArmaProcess
np.random.seed(1234)
# include zero-th lag
arparams = np.array([1, 0.75, -0.65, -0.55, 0.9])
maparams = np.array([1, 0.65])
"""
Explanation: Exercise: Can you obtain a better fit for the Sunspots model? (Hint: sm.tsa.AR has a method select_order)
Simulated ARMA(4,1): Model Identification is Difficult
End of explanation
"""
arma_t = ArmaProcess(arparams, maparams)
arma_t.isinvertible
arma_t.isstationary
"""
Explanation: Let's make sure this model is estimable.
End of explanation
"""
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
ax.plot(arma_t.generate_sample(nsample=50))
arparams = np.array([1, 0.35, -0.15, 0.55, 0.1])
maparams = np.array([1, 0.65])
arma_t = ArmaProcess(arparams, maparams)
arma_t.isstationary
arma_rvs = arma_t.generate_sample(nsample=500, burnin=250, scale=2.5)
fig = plt.figure(figsize=(12, 8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(arma_rvs, lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(arma_rvs, lags=40, ax=ax2)
"""
Explanation: What does this mean?
End of explanation
"""
lags = int(10 * np.log10(arma_rvs.shape[0]))
arma11 = ARIMA(arma_rvs, order=(1, 0, 1)).fit()
resid = arma11.resid
r, q, p = sm.tsa.acf(resid, nlags=lags, fft=True, qstat=True)
data = np.c_[range(1, lags + 1), r[1:], q, p]
table = pd.DataFrame(data, columns=["lag", "AC", "Q", "Prob(>Q)"])
print(table.set_index("lag"))
arma41 = ARIMA(arma_rvs, order=(4, 0, 1)).fit()
resid = arma41.resid
r, q, p = sm.tsa.acf(resid, nlags=lags, fft=True, qstat=True)
data = np.c_[range(1, lags + 1), r[1:], q, p]
table = pd.DataFrame(data, columns=["lag", "AC", "Q", "Prob(>Q)"])
print(table.set_index("lag"))
"""
Explanation: For mixed ARMA processes the Autocorrelation function is a mixture of exponentials and damped sine waves after (q-p) lags.
The partial autocorrelation function is a mixture of exponentials and dampened sine waves after (p-q) lags.
End of explanation
"""
macrodta = sm.datasets.macrodata.load_pandas().data
macrodta.index = pd.Index(sm.tsa.datetools.dates_from_range("1959Q1", "2009Q3"))
cpi = macrodta["cpi"]
"""
Explanation: Exercise: How good an in-sample prediction can you do for another series, say, CPI?
End of explanation
"""
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
ax = cpi.plot(ax=ax)
ax.legend()
"""
Explanation: Hint:
End of explanation
"""
print(sm.tsa.adfuller(cpi)[1])
"""
Explanation: The p-value of the unit-root test resoundingly rejects the null of a unit root.
End of explanation
"""
Heroes-Academy/OOP_Spring_2016 | notebooks/giordani/Python_3_OOP_Part_3__Delegation__composition_and_inheritance.ipynb | mit
class Door:
    colour = 'brown'

    def __init__(self, number, status):
        self.number = number
        self.status = status

    @classmethod
    def knock(cls):
        print("Knock!")

    @classmethod
    def paint(cls, colour):
        cls.colour = colour

    def open(self):
        self.status = 'open'

    def close(self):
        self.status = 'closed'

class SecurityDoor(Door):
    pass
"""
Explanation: The Delegation Run
If classes are objects what is the difference between types and instances?
When I talk about "my cat" I am referring to a concrete instance of the "cat" concept, which is a subtype of "animal". So, although both are objects, types can be specialized while instances cannot.
Usually an object B is said to be a specialization of an object A when:
B has all the features of A
B can provide new features
B can perform some or all the tasks performed by A in a different way
Those targets are very general and valid for any system and the key to achieve them with the maximum reuse of already existing components is delegation. Delegation means that an object shall perform only what it knows best, and leave the rest to other objects.
Delegation can be implemented with two different mechanisms: composition and inheritance. Sadly, very often only inheritance is listed among the pillars of OOP techniques, forgetting that it is an implementation of the more generic and fundamental mechanism of delegation; perhaps a better nomenclature for the two techniques could be explicit delegation (composition) and implicit delegation (inheritance).
Please note that, again, when talking about composition and inheritance we are talking about focusing on behavioural or structural delegation. Another way to think about the difference between composition and inheritance is to consider whether the object knows who can satisfy your request or whether the object is the one that satisfies the request.
Please, please, please do not forget composition: in many cases, composition can lead to simpler systems, with benefits on maintainability and changeability.
Usually composition is said to be a very generic technique that needs no special syntax, while inheritance and its rules are strongly dependent on the language of choice. Actually, the strong dynamic nature of Python softens the boundary line between the two techniques.
Inheritance Now
In Python a class can be declared as an extension of one or more different classes, through the class inheritance mechanism. The child class (the one that inherits) has the same internal structure of the parent class (the one that is inherited), and for the case of multiple inheritance the language has very specific rules to manage possible conflicts or redefinitions among the parent classes. A very simple example of inheritance is
End of explanation
"""
sdoor = SecurityDoor(1, 'closed')
"""
Explanation: where we declare a new class SecurityDoor that, at the moment, is a perfect copy of the Door class. Let us investigate what happens when we access attributes and methods. First we instance the class
End of explanation
"""
print(SecurityDoor.colour is Door.colour)
print(sdoor.colour is Door.colour)
"""
Explanation: The first check we can do is that class attributes are still global and shared
End of explanation
"""
print(sdoor.__dict__)
print(type(sdoor.__class__.__dict__))
print(sdoor.__class__.__dict__)
print(type(Door.__dict__))
print(Door.__dict__)
"""
Explanation: This shows us that Python tries to resolve instance members not only looking into the class the instance comes from, but also investigating the parent classes. In this case sdoor.colour becomes SecurityDoor.colour, that in turn becomes Door.colour. SecurityDoor is a Door.
If we investigate the content of __dict__ we can catch a glimpse of the inheritance mechanism in action
End of explanation
"""
print(SecurityDoor.__bases__)
"""
Explanation: As you can see the content of __dict__ for SecurityDoor is very narrow compared to that of Door. The inheritance mechanism takes care of the missing elements by climbing up the classes tree. Where does Python get the parent classes? A class always contains a __bases__ tuple that lists them
End of explanation
"""
print(sdoor.__class__.__bases__[0].__dict__['knock'].__get__(sdoor))
print(sdoor.knock)
"""
Explanation: So an example of what Python does to resolve a class method call through the inheritance tree is
End of explanation
"""
class SecurityDoor(Door):
colour = 'gray'
locked = True
def open(self):
if not self.locked:
self.status = 'open'
"""
Explanation: Please note that this is just an example that does not consider multiple inheritance.
Let us try now to override some methods and attributes. In Python you can override (redefine) a parent class member simply by redefining it in the child class.
End of explanation
"""
print(type(SecurityDoor.__dict__))
print(SecurityDoor.__dict__)
"""
Explanation: As you can forecast, the overridden members now are present in the __dict__ of the SecurityDoor class
End of explanation
"""
class SecurityDoor(Door):
colour = 'gray'
locked = True
def open(self):
if self.locked:
return
Door.open(self)
"""
Explanation: So when you override a member, the one you put in the child class is used instead of the one in the parent class simply because the former is found before the latter while climbing the class hierarchy. This also shows you that Python does not implicitly call the parent implementation when you override a method. So, overriding is a way to block implicit delegation.
If we want to call the parent implementation we have to do it explicitly. In the former example we could write
End of explanation
"""
sdoor = SecurityDoor(1, 'closed')
print(sdoor.status)
sdoor.open()
print(sdoor.status)
sdoor.locked = False
sdoor.open()
print(sdoor.status)
"""
Explanation: You can easily test that this implementation is working correctly.
End of explanation
"""
class SecurityDoor(Door):
colour = 'gray'
locked = True
def open(self):
if self.locked:
return
super().open()
"""
Explanation: This form of explicit parent delegation is heavily discouraged, however.
The first reason is because of the very high coupling that results from explicitly naming the parent class again when calling the method. Coupling, in the computer science lingo, means to link two parts of a system, so that changes in one of them directly affect the other one, and is usually avoided as much as possible. In this case if you decide to use a new parent class you have to manually propagate the change to every method that calls it. Moreover, since in Python the class hierarchy can be dynamically changed (i.e. at runtime), this form of explicit delegation could be not only annoying but also wrong.
The second reason is that in general you need to deal with multiple inheritance, where you do not know a priori which parent class implements the original form of the method you are overriding.
To solve these issues, Python supplies the super() built-in function, that climbs the class hierarchy and returns the correct class that shall be called. The syntax for calling super() is
End of explanation
"""
class SecurityDoor:
colour = 'gray'
locked = True
def __init__(self, number, status):
self.door = Door(number, status)
def open(self):
if self.locked:
return
self.door.open()
def close(self):
self.door.close()
"""
Explanation: The output of super() is not exactly the Door class. It returns a super object whose representation is <super: <class 'SecurityDoor'>, <SecurityDoor object>>. This object, however, acts like the parent class, so you can safely ignore its custom nature and use it just as you would use the Door class in this case.
Enter the Composition
Composition means that an object knows another object, and explicitly delegates some tasks to it. While inheritance is implicit, composition is explicit: in Python, however, things are far more interesting than this =).
First of all let us implement classic composition, which simply makes an object part of the other as an attribute
End of explanation
"""
class SecurityDoor:
locked = True
def __init__(self, number, status):
self.door = Door(number, status)
def open(self):
if self.locked:
return
self.door.open()
def __getattr__(self, attr):
return getattr(self.door, attr)
"""
Explanation: The primary goal of composition is to relax the coupling between objects. This little example shows that now SecurityDoor is an object and no longer a Door, which means that the internal structure of Door is not copied. For this very simple example both Door and SecurityDoor are not big classes, but in a real system objects can be very complex; this means that their allocation consumes a lot of memory, and if a system contains thousands or millions of objects that could be an issue.
The composed SecurityDoor has to redefine the colour attribute since the concept of delegation applies only to methods and not to attributes, doesn't it?
Well, no. Python provides a very high degree of indirection for objects manipulation and attribute access is one of the most useful. As you already discovered, accessing attributes is ruled by a special method called __getattribute__() that is called whenever an attribute of the object is accessed. Overriding __getattribute__(), however, is overkill; it is a very complex method, and, being called on every attribute access, any change makes the whole thing slower.
The method we have to leverage to delegate attribute access is __getattr__(), which is a special method that is called whenever the requested attribute is not found in the object. So basically it is the right place to dispatch all attribute and method access our object cannot handle. The previous example becomes
End of explanation
"""
class ComposedDoor:
def __init__(self, number, status):
self.door = Door(number, status)
def __getattr__(self, attr):
return getattr(self.door, attr)
"""
Explanation: Using __getattr__() blurs the line between inheritance and composition, since inheritance is, after all, a form of automatic delegation of every member access.
End of explanation
"""
|
Hebali/learning_machines | tensorflow_tutorials/Tutorial_01_GraphsAndSessions.ipynb | mit | # Create input constants:
X = 2.0
Y = 3.0
# Perform addition:
Z = X + Y
# Print output:
print Z
"""
Explanation: Learning Machines
Taught by Patrick Hebron at NYU/ITP, Fall 2017
TensorFlow Basics: "Graphs and Sessions"
Let's look at a simple arithmetic procedure in pure Python:
End of explanation
"""
# Import TensorFlow library:
import tensorflow as tf
# Print TensorFlow version, just for good measure:
print( 'TensorFlow Version: ' + tf.VERSION )
# Create input constants:
opX = tf.constant( 2.0 )
opY = tf.constant( 3.0 )
# Create addition operation:
opZ = tf.add( opX, opY )
# Print operation:
print opZ
"""
Explanation: Now let's try to do the same thing using TensorFlow:
End of explanation
"""
# Create input constants:
opX = tf.constant( 2.0 )
opY = tf.constant( 3.0 )
# Create addition operation:
opZ = tf.add( opX, opY )
# Create session:
with tf.Session() as sess:
# Run session:
Z = sess.run( opZ )
# Print output:
print Z
"""
Explanation: Where's the resulting value?
Notice that in the pure Python code, calling:
python
print Z
prints the resulting value:
python
5.0
But in the TensorFlow code, the print call gives us:
python
Tensor("Add:0", shape=(), dtype=float32)
TensorFlow uses a somewhat different programming model from what we're used to in conventional Python code.
Here's a brief overview from the TensorFlow Basic Usage tutorial:
Overview:
TensorFlow is a programming system in which you represent computations as graphs. Nodes in the graph are called ops (short for operations). An op takes zero or more Tensors, performs some computation, and produces zero or more Tensors. A Tensor is a typed multi-dimensional array. For example, you can represent a mini-batch of images as a 4-D array of floating point numbers with dimensions [batch, height, width, channels].
A TensorFlow graph is a description of computations. To compute anything, a graph must be launched in a Session. A Session places the graph ops onto Devices, such as CPUs or GPUs, and provides methods to execute them. These methods return tensors produced by ops as numpy ndarray objects in Python, and as tensorflow::Tensor instances in C and C++.
TensorFlow programs are usually structured into a construction phase, that assembles a graph, and an execution phase that uses a session to execute ops in the graph.
For example, it is common to create a graph to represent and train a neural network in the construction phase, and then repeatedly execute a set of training ops in the graph in the execution phase.
In other words:
The TensorFlow code above only assembles the graph to perform an addition operation on our two input constants.
To actually run the graph and retrieve the output, we need to create a session and run the addition operation through it:
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/b99fcf919e5d2f612fcfee22adcfc330/40_autogenerate_metadata.ipynb | bsd-3-clause | from pathlib import Path
import matplotlib.pyplot as plt
import mne
data_dir = Path(mne.datasets.erp_core.data_path())
infile = data_dir / 'ERP-CORE_Subject-001_Task-Flankers_eeg.fif'
raw = mne.io.read_raw(infile, preload=True)
raw.filter(l_freq=0.1, h_freq=40)
raw.plot(start=60)
# extract events
all_events, all_event_id = mne.events_from_annotations(raw)
"""
Explanation: Auto-generating Epochs metadata
This tutorial shows how to auto-generate metadata for ~mne.Epochs, based on
events via mne.epochs.make_metadata.
We are going to use data from the erp-core-dataset (derived from
:footcite:Kappenman2021). This is EEG data from a single participant
performing an active visual task (Eriksen flanker task).
<div class="alert alert-info"><h4>Note</h4><p>If you wish to skip the introductory parts of this tutorial, you may jump
straight to `tut-autogenerate-metadata-ern` after completing the data
import and event creation in the
`tut-autogenerate-metadata-preparation` section.</p></div>
This tutorial is loosely divided into two parts:
We will first focus on producing ERP time-locked to the visual
stimulation, conditional on response correctness and response time in
order to familiarize ourselves with the ~mne.epochs.make_metadata
function.
After that, we will calculate ERPs time-locked to the responses – again,
conditional on response correctness – to visualize the error-related
negativity (ERN), i.e. the ERP component associated with incorrect
behavioral responses.
Preparation
Let's start by reading, filtering, and producing a simple visualization of the
raw data. The data is pretty clean and contains very few blinks, so there's no
need to apply sophisticated preprocessing and data cleaning procedures.
We will also convert the ~mne.Annotations contained in this dataset to events
by calling mne.events_from_annotations.
End of explanation
"""
# metadata for each epoch shall include events from the range: [0.0, 1.5] s,
# i.e. starting with stimulus onset and expanding beyond the end of the epoch
metadata_tmin, metadata_tmax = 0.0, 1.5
# auto-create metadata
# this also returns a new events array and an event_id dictionary. we'll see
# later why this is important
metadata, events, event_id = mne.epochs.make_metadata(
events=all_events, event_id=all_event_id,
tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'])
# let's look at what we got!
metadata
"""
Explanation: Creating metadata from events
The basics of make_metadata
Now it's time to think about the time windows to use for epoching and
metadata generation. It is important to understand that these time windows
need not be the same! That is, the automatically generated metadata might
include information about events from only a fraction of the epochs duration;
or it might include events that occurred well outside a given epoch.
Let us look at a concrete example. In the Flankers task of the ERP CORE
dataset, participants were required to respond to visual stimuli by pressing
a button. We're interested in looking at the visual evoked responses (ERPs)
of trials with correct responses. Assume that based on literature
studies, we decide that responses later than 1500 ms after stimulus onset are
to be considered invalid, because they don't capture the neuronal processes
of interest here. We can approach this in the following way with the help of
mne.epochs.make_metadata:
End of explanation
"""
row_events = ['stimulus/compatible/target_left',
'stimulus/compatible/target_right',
'stimulus/incompatible/target_left',
'stimulus/incompatible/target_right']
metadata, events, event_id = mne.epochs.make_metadata(
events=all_events, event_id=all_event_id,
tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'],
row_events=row_events)
metadata
"""
Explanation: Specifying time-locked events
We can see that the generated table has 802 rows, each one corresponding to
an individual event in all_events. The first column, event_name,
contains the name of the respective event around which the metadata of that
specific column was generated – we'll call that the "time-locked event",
because we'll assign it time point zero.
The names of the remaining columns correspond to the event names specified in
the all_event_id dictionary. These columns contain floats; the values
represent the latency of that specific event in seconds, relative to
the time-locked event (the one mentioned in the event_name column).
For events that didn't occur within the given time window, you'll see
a value of NaN, simply indicating that no event latency could be
extracted.
Now, there's a problem here. We want investigate the visual ERPs only,
conditional on responses. But the metadata that was just created contains
one row for every event, including responses. While we could create
epochs for all events, allowing us to pass those metadata, and later subset
the created events, there's a more elegant way to handle things:
~mne.epochs.make_metadata has a row_events parameter that
allows us to specify for which events to create metadata rows, while
still creating columns for all events in the event_id dictionary.
Because the metadata, then, only pertains to a subset of our original events,
it's important to keep the returned events and event_id around for
later use when we're actually going to create our epochs, to ensure that
metadata, events, and event descriptions stay in sync.
End of explanation
"""
keep_first = 'response'
metadata, events, event_id = mne.epochs.make_metadata(
events=all_events, event_id=all_event_id,
tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'],
row_events=row_events,
keep_first=keep_first)
# visualize response times regardless of side
metadata['response'].plot.hist(bins=50, title='Response Times')
# the "first_response" column contains only "left" and "right" entries, derived
# from the initial event named "response/left" and "response/right"
print(metadata['first_response'])
"""
Explanation: Keeping only the first events of a group
The metadata now contains 400 rows – one per stimulation – and the same
number of columns as before. Great!
We have two types of responses in our data: response/left and
response/right. We would like to map those to "correct" and "incorrect".
To make this easier, we can ask ~mne.epochs.make_metadata to generate an
entirely new column that refers to the first response observed during the
given time interval. This works by passing a subset of the
:term:hierarchical event descriptors (HEDs, inspired by
:footcite:BigdelyShamloEtAl2013) used to name events via the keep_first
parameter. For example, in the case of the HEDs response/left and
response/right, we could pass keep_first='response' to generate a new
column, response, containing the latency of the respective event. This
value pertains only to the first (or, in this specific example: the only)
response, regardless of side (left or right). To indicate which event
type (here: response side) was matched, a second column is added:
first_response. The values in this column are the event types without the
string used for matching, as it is already encoded as the column name, i.e.
in our example, we expect it to only contain 'left' and 'right'.
End of explanation
"""
metadata.loc[metadata['stimulus/compatible/target_left'].notna() &
metadata['stimulus/compatible/target_right'].notna(),
:]
"""
Explanation: We're facing a similar issue with the stimulus events, and now there are not
only two, but four different types: stimulus/compatible/target_left,
stimulus/compatible/target_right, stimulus/incompatible/target_left,
and stimulus/incompatible/target_right. Even more, because in the present
paradigm stimuli were presented in rapid succession, sometimes multiple
stimulus events occurred within the 1.5 second time window we're using to
generate our metadata. See for example:
End of explanation
"""
keep_first = ['stimulus', 'response']
metadata, events, event_id = mne.epochs.make_metadata(
events=all_events, event_id=all_event_id,
tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'],
row_events=row_events,
keep_first=keep_first)
# all times of the time-locked events should be zero
assert all(metadata['stimulus'] == 0)
# the values in the new "first_stimulus" and "first_response" columns indicate
# which events were selected via "keep_first"
metadata[['first_stimulus', 'first_response']]
"""
Explanation: This can easily lead to confusion during later stages of processing, so let's
create a column for the first stimulus – which will always be the time-locked
stimulus, as our time interval starts at 0 seconds. We can pass a list of
strings to keep_first.
End of explanation
"""
# left-side stimulation
metadata.loc[metadata['first_stimulus'].isin(['compatible/target_left',
'incompatible/target_left']),
'stimulus_side'] = 'left'
# right-side stimulation
metadata.loc[metadata['first_stimulus'].isin(['compatible/target_right',
'incompatible/target_right']),
'stimulus_side'] = 'right'
# first assume all responses were incorrect, then mark those as correct where
# the stimulation side matches the response side
metadata['response_correct'] = False
metadata.loc[metadata['stimulus_side'] == metadata['first_response'],
'response_correct'] = True
correct_response_count = metadata['response_correct'].sum()
print(f'Correct responses: {correct_response_count}\n'
f'Incorrect responses: {len(metadata) - correct_response_count}')
"""
Explanation: Adding new columns to describe stimulation side and response correctness
Perfect! Now it's time to define which responses were correct and incorrect.
We first add a column encoding the side of stimulation, and then simply
check whether the response matches the stimulation side, and add this result
to another column.
End of explanation
"""
epochs_tmin, epochs_tmax = -0.1, 0.4 # epochs range: [-0.1, 0.4] s
reject = {'eeg': 250e-6} # exclude epochs with strong artifacts
epochs = mne.Epochs(raw=raw, tmin=epochs_tmin, tmax=epochs_tmax,
events=events, event_id=event_id, metadata=metadata,
reject=reject, preload=True)
"""
Explanation: Creating Epochs with metadata, and visualizing ERPs
It's finally time to create our epochs! We set the metadata directly on
instantiation via the metadata parameter. Also it is important to
remember to pass events and event_id as returned from
~mne.epochs.make_metadata, as we only created metadata for a subset of
our original events by passing row_events. Otherwise, the length
of the metadata and the number of epochs would not match and MNE-Python
would raise an error.
End of explanation
"""
vis_erp = epochs['response_correct'].average()
vis_erp_slow = epochs['response_correct & '
                      '(response > 0.5)'].average()
fig, ax = plt.subplots(2, figsize=(6, 6))
vis_erp.plot(gfp=True, spatial_colors=True, axes=ax[0])
vis_erp_slow.plot(gfp=True, spatial_colors=True, axes=ax[1])
ax[0].set_title('Visual ERPs – All Correct Responses')
ax[1].set_title('Visual ERPs – Slow Correct Responses')
fig.tight_layout()
fig
"""
Explanation: Lastly, let's visualize the ERPs evoked by the visual stimulation, once for
all trials with correct responses, and once for all trials with correct
responses and a response time greater than 0.5 seconds
(i.e., slow responses).
End of explanation
"""
metadata_tmin, metadata_tmax = -1.5, 0
row_events = ['response/left', 'response/right']
keep_last = ['stimulus', 'response']
metadata, events, event_id = mne.epochs.make_metadata(
events=all_events, event_id=all_event_id,
tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'],
row_events=row_events,
keep_last=keep_last)
"""
Explanation: Aside from the fact that the data for the (much fewer) slow responses looks
noisier – which is entirely to be expected – not much of an ERP difference
can be seen.
Applying the knowledge: visualizing the ERN component
In the following analysis, we will use the same dataset as above, but
we'll time-lock our epochs to the response events, not to the stimulus
onset. Comparing ERPs associated with correct and incorrect behavioral
responses, we should be able to see the error-related negativity (ERN) in
the difference wave.
Since we want to time-lock our analysis to responses, for the automated
metadata generation we'll consider events occurring up to 1500 ms before
the response trigger.
We only wish to consider the last stimulus and response in each time
window: Remember that we're dealing with rapid stimulus presentations in
this paradigm; taking the last response – at time point zero – and the last
stimulus – the one closest to the response – ensures we actually create
the right stimulus-response pairings. We can achieve this by passing the
keep_last parameter, which works exactly like keep_first we got to
know above, only that it keeps the last occurrences of the specified
events and stores them in columns whose names start with last_.
End of explanation
"""
# left-side stimulation
metadata.loc[metadata['last_stimulus'].isin(['compatible/target_left',
'incompatible/target_left']),
'stimulus_side'] = 'left'
# right-side stimulation
metadata.loc[metadata['last_stimulus'].isin(['compatible/target_right',
'incompatible/target_right']),
'stimulus_side'] = 'right'
# first assume all responses were incorrect, then mark those as correct where
# the stimulation side matches the response side
metadata['response_correct'] = False
metadata.loc[metadata['stimulus_side'] == metadata['last_response'],
'response_correct'] = True
metadata
"""
Explanation: Exactly like in the previous example, create new columns stimulus_side
and response_correct.
End of explanation
"""
epochs_tmin, epochs_tmax = -0.6, 0.4
baseline = (-0.4, -0.2)
reject = {'eeg': 250e-6}
epochs = mne.Epochs(raw=raw, tmin=epochs_tmin, tmax=epochs_tmax,
baseline=baseline, reject=reject,
events=events, event_id=event_id, metadata=metadata,
preload=True)
"""
Explanation: Now it's already time to epoch the data! When deciding upon the epochs
duration for this specific analysis, we need to ensure we see quite a bit of
signal from before and after the motor response. We also must be aware of
the fact that motor-/muscle-related signals will most likely be present
before the response button trigger pulse appears in our data, so the time
period close to the response event should not be used for baseline
correction. But at the same time, we don't want to use a baseline
period that extends too far away from the button event. The following values
seem to work quite well.
End of explanation
"""
epochs.metadata.loc[epochs.metadata['last_stimulus'].isna(), :]
"""
Explanation: Let's do a final sanity check: we want to make sure that in every row, we
actually have a stimulus. We use epochs.metadata (and not metadata)
because when creating the epochs, we passed the reject parameter, and
MNE-Python always ensures that epochs.metadata stays in sync with the
available epochs.
End of explanation
"""
epochs = epochs['last_stimulus.notna()']
"""
Explanation: Bummer! It seems the very first two responses were recorded before the
first stimulus appeared: the values in the last_stimulus column are None.
There is a very simple way to select only those epochs that do have a
stimulus (i.e., are not None):
End of explanation
"""
resp_erp_correct = epochs['response_correct'].average()
resp_erp_incorrect = epochs['not response_correct'].average()
mne.viz.plot_compare_evokeds({'Correct Response': resp_erp_correct,
'Incorrect Response': resp_erp_incorrect},
picks='FCz', show_sensors=True,
title='ERPs at FCz, time-locked to response')
# topoplot of average field from time 0.0-0.1 s
resp_erp_incorrect.plot_topomap(times=0.05, average=0.05, size=3,
title='Avg. topography 0–100 ms after '
'incorrect responses')
"""
Explanation: Time to calculate the ERPs for correct and incorrect responses.
For visualization, we'll only look at sensor FCz, which is known to show
the ERN nicely in the given paradigm. We'll also create a topoplot to get an
impression of the average scalp potentials measured in the first 100 ms after
an incorrect response.
End of explanation
"""
# difference wave: incorrect minus correct responses
resp_erp_diff = mne.combine_evoked([resp_erp_incorrect, resp_erp_correct],
weights=[1, -1])
fig, ax = plt.subplots()
resp_erp_diff.plot(picks='FCz', axes=ax, selectable=False, show=False)
# make ERP trace bolder
ax.lines[0].set_linewidth(1.5)
# add lines through origin
ax.axhline(0, ls='dotted', lw=0.75, color='gray')
ax.axvline(0, ls=(0, (10, 10)), lw=0.75, color='gray',
label='response trigger')
# mark trough
trough_time_idx = resp_erp_diff.copy().pick('FCz').data.argmin()
trough_time = resp_erp_diff.times[trough_time_idx]
ax.axvline(trough_time, ls=(0, (10, 10)), lw=0.75, color='red',
label='max. negativity')
# legend, axis labels, title
ax.legend(loc='lower left')
ax.set_xlabel('Time (s)', fontweight='bold')
ax.set_ylabel('Amplitude (µV)', fontweight='bold')
ax.set_title('Channel: FCz')
fig.suptitle('ERN (Difference Wave)', fontweight='bold')
fig
"""
Explanation: We can see a strong negative deflection immediately after incorrect
responses, compared to correct responses. The topoplot, too, leaves no doubt:
what we're looking at is, in fact, the ERN.
Some researchers suggest to construct the difference wave between ERPs for
correct and incorrect responses, as it more clearly reveals signal
differences, while ideally also improving the signal-to-noise ratio (under
the assumption that the noise level in "correct" and "incorrect" trials is
similar). Let's do just that and put it into a publication-ready
visualization.
End of explanation
"""
|
markdewing/qmc_algorithms | Variational/Variational_Hydrogen.ipynb | mit | beta = Symbol('beta')
R_T = exp(-r - beta*r*r)
R_T
"""
Explanation: Energy of the Hydrogen Atom
The variational principle states that a trial wavefunction will have an energy greater than or equal to the ground state energy.
$$\frac{\int \psi H \psi}{ \int \psi^2} \ge E_0$$
First consider the hydrogen atom. Let us use a trial wavefunction that is not the exact ground state.
End of explanation
"""
def del_spherical(e, r):
"""Compute Laplacian for expression e with respect to symbol r.
Currently works only with radial dependence"""
t1 = r*r*diff(e, r)
t2 = diff(t1, r)/(r*r)
return simplify(t2)
del1 = del_spherical(R_T, r)
"""
Explanation: The Hamiltonian for this system is
$$-\frac{1}{2} \nabla^2 - \frac{1}{r}$$
The first term is the kinetic energy of the electron, and the second term is the Coulomb attraction between the electron and proton.
The first step is to compute the Laplacian of the trial wavefunction in spherical coordinates
End of explanation
"""
H = -1/S(2) * R_T * del1 - R_T*R_T/r
simplify(H)
"""
Explanation: Construct $\psi H \psi$
End of explanation
"""
h1 = simplify(r*r*H)
"""
Explanation: The integration occurs in 3D over the electron coordinates. Because the integrand only has a dependence on $r$, it can be converted to spherical coordinates, and reduced to a 1D integral over $r$. (There should be an additional factor of $4 \pi$, but it will cancel since it occurs in the numerator and denominator)
End of explanation
"""
h2 = h1.subs(beta, 0.1)
"""
Explanation: Substitute a concrete value for $\beta$.
End of explanation
"""
num = integrate(h2, (r, 0, oo)).evalf()
num
"""
Explanation: Perform the integral
End of explanation
"""
norm1 = r*r*R_T*R_T
norm2 = norm1.subs(beta, 0.1)
norm3 = simplify(norm2)
denom = integrate(norm3, (r, 0, oo)).evalf()
simplify(denom).evalf()
E = num/denom
E
"""
Explanation: Also construct and integrate the denominator (the normalization).
End of explanation
"""
def compute_energy(R_T, beta_val):
"""Energy given a value for beta"""
# Normalization integrand (denominator)
norm1 = r*r*R_T*R_T
norm2 = norm1.subs(beta, beta_val)
norm3 = simplify(norm2)
# Integrand for the numerator
del1 = del_spherical(R_T, r)
# Construct psi * H * psi
H = -1/S(2) * R_T * del1 - R_T*R_T/r
h1 = simplify(r*r*H)
h2 = h1.subs(beta, beta_val)
lim = 20.0
denom_func = lambdify([r], norm3)
denom_res = quad(denom_func, 0.0, lim)
num_func = lambdify([r], h2)
num_res = quad(num_func, 0.0, lim)
e = num_res[0]/denom_res[0]
return e
"""
Explanation: And, as expected, energy is greater than the exact ground state energy of -0.5 Hartree.
Find the minimum energy
Collect all the steps for computing the energy into a single function. Even though this particular integral could be done symbolically, use numerical integration instead.
End of explanation
"""
energies = []
betas = []
for i in range(10):
beta_val = i*.01
e = compute_energy(R_T, beta_val)
betas.append(beta_val)
energies.append(e)
plt.plot(betas, energies)
"""
Explanation: Now the energy can be computed vs. $\beta$, and we can find the minimum energy. In this case, the minimum occurs at $\beta = 0$, which we know is the exact wavefunction for the hydrogen atom.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.13/_downloads/plot_define_target_events.ipynb | bsd-3-clause | # Authors: Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.event import define_target_events
from mne.datasets import sample
import matplotlib.pyplot as plt
print(__doc__)
data_path = sample.data_path()
"""
Explanation: ============================================================
Define target events based on time lag, plot evoked response
============================================================
This script shows how to define higher order events based on
time lag between reference and target events. For
illustration, we will put the face stimuli presented into two
classes: 1) those followed by an early button press
(within 590 milliseconds), and 2) those followed by a late button
press (later than 590 milliseconds). Finally, we will
visualize the evoked responses to both 'quickly-processed'
and 'slowly-processed' face stimuli.
End of explanation
"""
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Set up pick list: EEG + STI 014 - bad channels (modify to your needs)
include = [] # or stim channels ['STI 014']
raw.info['bads'] += ['EEG 053'] # bads
# pick MEG channels
picks = mne.pick_types(raw.info, meg='mag', eeg=False, stim=False, eog=True,
include=include, exclude='bads')
"""
Explanation: Set parameters
End of explanation
"""
reference_id = 5 # presentation of a smiley face
target_id = 32 # button press
sfreq = raw.info['sfreq'] # sampling rate
tmin = 0.1 # trials leading to very early responses will be rejected
tmax = 0.59 # ignore face stimuli followed by button press later than 590 ms
new_id = 42 # the new event id for a hit. If None, reference_id is used.
fill_na = 99 # the fill value for misses
events_, lag = define_target_events(events, reference_id, target_id,
sfreq, tmin, tmax, new_id, fill_na)
print(events_) # The 99 indicates missing or too late button presses
# besides the events also the lag between target and reference is returned
# this could e.g. be used as parametric regressor in subsequent analyses.
print(lag[lag != fill_na]) # lag in milliseconds
# #############################################################################
# Construct epochs
tmin_ = -0.2
tmax_ = 0.4
event_id = dict(early=new_id, late=fill_na)
epochs = mne.Epochs(raw, events_, event_id, tmin_,
tmax_, picks=picks, baseline=(None, 0),
reject=dict(mag=4e-12))
# average epochs and get an Evoked dataset.
early, late = [epochs[k].average() for k in event_id]
"""
Explanation: Find stimulus event followed by quick button presses
End of explanation
"""
times = 1e3 * epochs.times # time in milliseconds
title = 'Evoked response followed by %s button press'
plt.clf()
ax = plt.subplot(2, 1, 1)
early.plot(axes=ax)
plt.title(title % 'early')
plt.ylabel('Evoked field (fT)')
ax = plt.subplot(2, 1, 2)
late.plot(axes=ax)
plt.title(title % 'late')
plt.ylabel('Evoked field (fT)')
plt.show()
"""
Explanation: View evoked response
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/deepdive/02_generalization/labs/create_datasets.ipynb | apache-2.0 | from google.cloud import bigquery
import seaborn as sns
import pandas as pd
import numpy as np
import shutil
"""
Explanation: <h1> Explore and create ML datasets </h1>
In this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected.
<div id="toc"></div>
Let's start off with the Python imports that we need.
End of explanation
"""
# TODO: write a BigQuery query for the above fields
# Store it into a Pandas dataframe named "trips" that contains about 10,000 records.
"""
Explanation: <h3> Extract sample data from BigQuery </h3>
The dataset that we will use is <a href="https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then switch to the Preview tab to look at a few rows.
Write a SQL query to pick up the following fields
<pre>
pickup_datetime,
pickup_longitude, pickup_latitude,
dropoff_longitude, dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
</pre>
from the dataset and explore a random subsample of the data. Sample size should be about 10,000 records. Make sure to pick a repeatable subset of the data so that if someone reruns this notebook, they will get the same results.
<p>
<b>Hint (highlight to see)</b>
<pre style="color: white">
Set the query string to be:
SELECT above_fields FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
Then, use the BQ library:
trips = bigquery.Client().query(query).execute().result().to_dataframe()
</pre>
End of explanation
"""
ax = sns.regplot(x = "trip_distance", y = "fare_amount", ci = None, truncate = True, data = trips)
"""
Explanation: <h3> Exploring data </h3>
Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering.
End of explanation
"""
tollrides = trips[trips['tolls_amount'] > 0]
tollrides[tollrides['pickup_datetime'] == '2014-05-20 23:09:00']
"""
Explanation: Hmm ... do you see something wrong with the data that needs addressing?
It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50).
What's up with the streaks at \$45 and \$50? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable.
Let's examine whether the toll amount is captured in the total amount.
End of explanation
"""
trips.describe()
"""
Explanation: Looking a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool.
Let's also look at the distribution of values within the columns.
End of explanation
"""
def showrides(df, numlines):
import matplotlib.pyplot as plt
lats = []
lons = []
goodrows = df[df['pickup_longitude'] < -70]
for iter, row in goodrows[:numlines].iterrows():
lons.append(row['pickup_longitude'])
lons.append(row['dropoff_longitude'])
lons.append(None)
lats.append(row['pickup_latitude'])
lats.append(row['dropoff_latitude'])
lats.append(None)
sns.set_style("darkgrid")
plt.plot(lons, lats)
showrides(trips, 10)
showrides(tollrides, 10)
"""
Explanation: Hmm ... The min, max of longitude look strange.
Finally, let's actually look at the start and end of a few of the trips.
End of explanation
"""
def sample_between(a, b):
basequery = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers
FROM
`nyc-tlc.yellow.trips`
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
sampler = "AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N)) = 1"
sampler2 = "AND {0} >= {1}\n AND {0} < {2}".format(
"ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100))",
"(EVERY_N * {})".format(a), "(EVERY_N * {})".format(b)
)
return "{}\n{}\n{}".format(basequery, sampler, sampler2)
def create_query(phase, EVERY_N):
"""Phase: train (70%) valid (15%) or test (15%)"""
query = ""
if phase == 'train':
# Training
query = sample_between(0, 70)
elif phase == 'valid':
# Validation
query = sample_between(70, 85)
else:
# Test
query = sample_between(85, 100)
return query.replace("EVERY_N", str(EVERY_N))
# TODO: try out train, test and valid here
print (create_query('train', 100000))
def to_csv(df, filename):
outdf = df.copy(deep = False)
outdf.loc[:, 'key'] = np.arange(0, len(outdf)) # rownumber as key
# Reorder columns so that target is first column
cols = outdf.columns.tolist()
cols.remove('fare_amount')
cols.insert(0, 'fare_amount')
print (cols) # new order of columns
outdf = outdf[cols]
outdf.to_csv(filename, header = False, index_label = False, index = False)
print ("Wrote {} to {}".format(len(outdf), filename))
for phase in ['train', 'valid', 'test']:
query = create_query(phase, 100000)
df = bigquery.Client().query(query).to_dataframe()
to_csv(df, 'taxi-{}.csv'.format(phase))
"""
Explanation: As you'd expect, rides that involve a toll are longer than the typical ride.
<h3> Quality control and other preprocessing </h3>
We need to do some clean-up of the data:
<ol>
<li>New York city longitudes are around -74 and latitudes are around 41.</li>
<li>We shouldn't have zero passengers.</li>
<li>Clean up the total_amount column to reflect only fare_amount and tolls_amount, and then remove those two columns.</li>
<li>Before the ride starts, we'll know the pickup and dropoff locations, but not the trip distance (that depends on the route taken), so remove it from the ML dataset</li>
<li>Discard the timestamp</li>
</ol>
Let's change the BigQuery query appropriately. In production, we'll have to carry out the same preprocessing on the real-time input data.
End of explanation
"""
!ls -l *.csv
"""
Explanation: <h3> Verify that datasets exist </h3>
End of explanation
"""
!head taxi-train.csv
"""
Explanation: We have 3 .csv files corresponding to train, valid, test. The ratio of file sizes corresponds to our split of the data.
End of explanation
"""
from google.cloud import bigquery
import pandas as pd
import numpy as np
import shutil
def distance_between(lat1, lon1, lat2, lon2):
    # Great-circle distance "as the crow flies", via the spherical law of
    # cosines. Taxis can't fly, of course.
    central_angle = np.arccos(
        np.sin(np.radians(lat1)) * np.sin(np.radians(lat2))
        + np.cos(np.radians(lat1)) * np.cos(np.radians(lat2))
        * np.cos(np.radians(lon2 - lon1)))
    # Convert degrees to km (constants as in the original lab code).
    dist = np.degrees(central_angle) * 60 * 1.515 * 1.609344
    return dist
def estimate_distance(df):
return distance_between(df['pickuplat'], df['pickuplon'], df['dropofflat'], df['dropofflon'])
def compute_rmse(actual, predicted):
return np.sqrt(np.mean((actual - predicted)**2))
def print_rmse(df, rate, name):
print ("{1} RMSE = {0}".format(compute_rmse(df['fare_amount'], rate * estimate_distance(df)), name))
FEATURES = ['pickuplon','pickuplat','dropofflon','dropofflat','passengers']
TARGET = 'fare_amount'
columns = list([TARGET])
columns.extend(FEATURES) # in CSV, target is the first column, after the features
columns.append('key')
df_train = pd.read_csv('taxi-train.csv', header = None, names = columns)
df_valid = pd.read_csv('taxi-valid.csv', header = None, names = columns)
df_test = pd.read_csv('taxi-test.csv', header = None, names = columns)
rate = df_train['fare_amount'].mean() / estimate_distance(df_train).mean()
print ("Rate = ${0}/km".format(rate))
print_rmse(df_train, rate, 'Train')
print_rmse(df_valid, rate, 'Valid')
print_rmse(df_test, rate, 'Test')
"""
Explanation: Looks good! We now have our ML datasets and are ready to train ML models, validate them and evaluate them.
<h3> Benchmark </h3>
Before we start building complex ML models, it is a good idea to come up with a very simple model and use that as a benchmark.
My model is going to be to simply divide the mean fare_amount by the mean trip_distance to come up with a rate and use that to predict. Let's compute the RMSE of such a model.
End of explanation
"""
|
km-Poonacha/python4phd | Session 2/ipython/.ipynb_checkpoints/Lesson 5- Crawl and scrape-Worksheet-checkpoint.ipynb | gpl-3.0 | <h1 id="HEADING" property="name" class="heading_name ">
<div class="heading_height"></div>
"
Le Jardin Napolitain
"
</h1>
"""
Explanation: Lesson 5 - Crawl and Scrape
Making the request
Using 'requests' module
Use the requests module to make a HTTP request to http://www.tripadvisor.com
- Check the status of the request
- Display the response header information
Get the '/robots.txt' file contents
Get the HTML content from the website
Scraping websites
Sometimes, you may want a little bit of information - a movie rating, stock price, or product availability - but the information is available only in HTML pages, surrounded by ads and extraneous content.
To do this we build an automated web fetcher called a crawler or spider. After the HTML contents have been retrived from the remote web servers, a scraper parses it to find the needle in the haystack.
BeautifulSoup Module
The bs4 module can be used for searching a webpage (HTML file) and pulling required data from it. It does three things to make a HTML page searchable-
* First, it converts the HTML page to Unicode, translating HTML entities into Unicode characters
* Second, it parses (analyses) the HTML page using the best available parser. It will use an HTML parser unless you specifically tell it to use an XML parser
* Finally, it transforms a complex HTML document into a tree of Python objects.
This module takes the HTML page and creates four kinds of objects: Tag, NavigableString, BeautifulSoup, and Comment.
* The BeautifulSoup object itself represents the webpage as a whole
* A Tag object corresponds to an XML or HTML tag in the webpage
* The NavigableString class to contains the bit of text within a tag
Read more about BeautifulSoup : https://www.crummy.com/software/BeautifulSoup/bs4/doc/
End of explanation
"""
<div class="entry">
<p class="partial_entry">
Popped in on way to Eiffel Tower for lunch, big mistake.
Pizza was disgusting and service was poor.
It’s a shame Trip Advisor don’t let you score venues zero....
<span class="taLnk ulBlueLinks" onclick="widgetEvCall('handlers.clickExpand',event,this);">More
</span>
</p>
</div>
"""
Explanation: Step 1: Making the soup
First we need to use the BeautifulSoup module to parse the HTML data into Python readable Unicode Text format.
*Let us write the code to parse a html page. We will use the trip advisor URL for an infamous restaurant - https://www.tripadvisor.com/Restaurant_Review-g187147-d1751525-Reviews-Cafe_Le_Dome-Paris_Ile_de_France.html *
Step 2: Inspect the element you want to scrape
In this step we will inspect the HTML data of the website to understand the tags and attributes that matches the element. Let us inspect the HTML data of the URL and understand where (under which tag) the review data is located.
End of explanation
"""
#Enter your code here
"""
Explanation: Step 3: Searching the soup for the data
Beautiful Soup defines a lot of methods for searching the parse tree (soup), the two most popular methods are: find() and find_all().
The simplest filter is a tag. Pass a tag to a search method and Beautiful Soup will perform a match against that exact string.
Let us try and find all the < p > (paragraph) tags in the soup:
Step 4: Enable pagination
Automatically access subsequent pages
Using yesterday's sentiment analysis code and the corpus of sentiment words found in the word_sentiment.csv file, calculate the sentiment of the reviews.
End of explanation
"""
header = {
'cookie': 'TAUnique=%1%enc%3AHvAwOscAcmfzIwJbsS10GnXn4FrCUpCm%2Bnw21XKuzXoV7vSwMEnyTA%3D%3D; fbm_162729813767876=base_domain=.tripadvisor.com; TACds=B.3.11419.1.2019-03-31; TASSK=enc%3AABCGM1r6xBekOjRaaQZ3QVS7dP4cwZ8sombvPTq8xK6xN55i7TN8puwZdwvXvG1i%2FJ2UQXYG1CwsU%2BXLwLs5qIxnmW5qbLt4I48DfK5FhHpwUw3ZgrbskK%2FjDc4ENfcCXw%3D%3D; ServerPool=C; TART=%1%enc%3A8yMCW7EtdBqPX0oluvfOS5mBk6DRMHXwNEAPJlcpaDumiCWsxs%2BxfBbTYsxpa%2F9l%2FJzCllshf9g%3D; VRMCID=%1%V1*id.10568*llp.%2FRestaurant_Review-g187147-d3405673-Reviews-La_Terrasse_Vedettes_de_Paris-Paris_Ile_de_France%5C.html*e.1557691551614; PMC=V2*MS.36*MD.20190505*LD.20190506; PAC=ALNtqHPT2KJjQwExTPJt3gCvzvDYH_x63ZOT4b3LetvkHuHXcEUY4eLx0TqKGzOIpoXF3K_j57rNigUkWJzSv7TtTna4L3DKcfiaeK9zT9ixGEevH6QwZVd-PdMyr9y5aRzjEVAfid42zC4WXeTcQTJkPVwGMCW2mB2k3xxfB78GgJFIR_I9vf6Bzhq89x_UTTUcQgFpCr8GEFV9GpJWG8UNGeriJSbmPtCXA10oXl5ox7U9TQvSILLSH8PdrP8nwUQMRnfUA_fKbXTaRgH4tzBwZQpbd1vlOOg7fKyfIN9V95PzNOXBEQCJIo3z09Nux0tyZZVX0PX_zI_moLpr9Od3eSi1E8Hm5QcLyG9QNfA1C5WckG9GOV5VKEL0bxDY5TG1smCaQDXpRLkvp8w2bD7vyI2e27WFbtuYvJDJ126v2_KyZmVbG3laZlvWrX2kWGL13IyhVS2Ivjr_9uJAwMpBKuNByH0FBU3ziJcRdqkXiz6lnYMSRSQ1Y8Dmkjkrc0DNTABvuHjbZ7Fh0LOINswW_wrkVsP4PjDq1IVh7IY0hLE_W1G1DKlROc5BZEOjcw%3D%3D; BEPIN=%1%16a8c46770b%3Bbak92b.b.tripadvisor.com%3A10023%3B; TATravelInfo=V2*A.2*MG.-1*HP.2*FL.3*DSM.1557131589173*RS.1*RY.2019*RM.5*RD.6*RH.20*RG.2; 
CM=%1%RestAds%2FRPers%2C%2C-1%7CRCPers%2C%2C-1%7Csesstch15%2C%2C-1%7CCYLPUSess%2C%2C-1%7Ctvsess%2C%2C-1%7CPremiumMCSess%2C%2C-1%7CRestPartSess%2C%2C-1%7CUVOwnersSess%2C%2C-1%7CRestPremRSess%2C%2C-1%7CPremRetPers%2C%2C-1%7CViatorMCPers%2C%2C-1%7Csesssticker%2C%2C-1%7C%24%2C%2C-1%7Ct4b-sc%2C%2C-1%7CMC_IB_UPSELL_IB_LOGOS2%2C%2C-1%7CPremMCBtmSess%2C%2C-1%7CLaFourchette+Banners%2C%2C-1%7Csesshours%2C%2C-1%7CTARSWBPers%2C%2C-1%7CTheForkORSess%2C%2C-1%7CTheForkRRSess%2C%2C-1%7CRestAds%2FRSess%2C%2C-1%7CPremiumMobPers%2C%2C-1%7CLaFourchette+MC+Banners%2C%2C-1%7Csesslaf%2C%2C-1%7CRestPartPers%2C%2C-1%7CCYLPUPers%2C%2C-1%7CCCUVOwnSess%2C%2C-1%7Cperslaf%2C%2C-1%7CUVOwnersPers%2C%2C-1%7Csh%2C%2C-1%7CTheForkMCCSess%2C%2C-1%7CCCPers%2C%2C-1%7Cb2bmcsess%2C%2C-1%7CSPMCPers%2C%2C-1%7Cperswifi%2C%2C-1%7CPremRetSess%2C%2C-1%7CViatorMCSess%2C%2C-1%7CPremiumMCPers%2C%2C-1%7CPremiumRRPers%2C%2C-1%7CRestAdsCCPers%2C%2C-1%7CTrayssess%2C%2C-1%7CPremiumORPers%2C%2C-1%7CSPORPers%2C%2C-1%7Cperssticker%2C%2C-1%7Cbooksticks%2C%2C-1%7CSPMCWBSess%2C%2C-1%7Cbookstickp%2C%2C-1%7CPremiumMobSess%2C%2C-1%7Csesswifi%2C%2C-1%7Ct4b-pc%2C%2C-1%7CWShadeSeen%2C%2C-1%7CTheForkMCCPers%2C%2C-1%7CHomeASess%2C9%2C-1%7CPremiumSURPers%2C%2C-1%7CCCUVOwnPers%2C%2C-1%7CTBPers%2C%2C-1%7Cperstch15%2C%2C-1%7CCCSess%2C2%2C-1%7CCYLSess%2C%2C-1%7Cpershours%2C%2C-1%7CPremiumORSess%2C%2C-1%7CRestAdsPers%2C%2C-1%7Cb2bmcpers%2C%2C-1%7CTrayspers%2C%2C-1%7CPremiumSURSess%2C%2C-1%7CMC_IB_UPSELL_IB_LOGOS%2C%2C-1%7Csess_rev%2C%2C-1%7Csessamex%2C%2C-1%7CPremiumRRSess%2C%2C-1%7CTADORSess%2C%2C-1%7CAdsRetPers%2C%2C-1%7CMCPPers%2C%2C-1%7CSPMCSess%2C%2C-1%7Cpers_rev%2C%2C-1%7Cmdpers%2C%2C-1%7Cmds%2C1557131565748%2C1557217965%7CSPMCWBPers%2C%2C-1%7CRBAPers%2C%2C-1%7CHomeAPers%2C%2C-1%7CRCSess%2C%2C-1%7CRestAdsCCSess%2C%2C-1%7CRestPremRPers%2C%2C-1%7Cpssamex%2C%2C-1%7CCYLPers%2C%2C-1%7Ctvpers%2C%2C-1%7CTBSess%2C%2C-1%7CAdsRetSess%2C%2C-1%7CMCPSess%2C%2C-1%7CTADORPers%2C%2C-1%7CTheForkORPers%2C%2C-1%7CPremMCBtmPers%2C%2C-1%7CTheForkRRPers%2
C%2C-1%7CTARSWBSess%2C%2C-1%7CRestAdsSess%2C%2C-1%7CRBASess%2C%2C-1%7Cmdsess%2C%2C-1%7C; fbsr_162729813767876=wtGNSIucBSm5EusyRkPyX_GfZwxNkyHLxTRli46iHoM.eyJjb2RlIjoiQVFBUHV3SlZpOVNXQXVkMDh1bUdaYjZ2R3hBMkdfdFBZdm9Bb2l2cDEzSDNvaG1ESjRkamo1V1A3dnB5WloxWmwzeWxFTmdCT0dCbTB6dzc1S2pwUHFKak5nQVNKMGNqOEtvUVY1YzZXNHhNQ1FlMURNNXJOUUpMeEJldjlBS2xKNnhVVjVXQ1ZaajZjN1k4X1ZWeGdxbzlIclhKT3BvUDZSLTVzNkVUZ3Q5Q0xMNmg0ZnZIY0pMSm1KdXJwN0lGVFBSOUdvX0Z4M0FiM0VWQ1RnVFNGNzc2NFFuU29fdER5VFk3TWY0V0VKSFZXZi11ME1pa2ZWS1ZzUHdHQlBOOE1xZkVQNjZfZHpZMVdnSEVfcWR4d2FHN2xNODNyR1BWaDVwdDdodlFQQmFBbGtzU21IYjZiSktEaGVGajM4WTg3TGxUUF9hNEVGUjVjOVdoOVNhY2RmV04iLCJ1c2VyX2lkIjoiMTY1NjQ2NDcxNSIsImFsZ29yaXRobSI6IkhNQUMtU0hBMjU2IiwiaXNzdWVkX2F0IjoxNTU3MTMzMTgxfQ; TAReturnTo=%1%%2FRestaurants-g304551-New_Delhi_National_Capital_Territory_of_Delhi.html; roybatty=TNI1625!APyGsDM6tcKypRo49myenvbO5Zyk367lJP3JEhTSBrfno%2F4Bbienyfvs6Q2DU%2F2UmkzjN1pKquiSNGeY2cXQm8s8oX1jKwXT8hgK3GL%2B6psZHdp4k7TF4F52uoI2kQ1e9Ni2k9Ub8D5ak%2FXgN%2F9as9m2HZIB0G6SZnZMT%2FPD73Fo%2C1; SRT=%1%enc%3A8yMCW7EtdBqPX0oluvfOS5mBk6DRMHXwNEAPJlcpaDumiCWsxs%2BxfBbTYsxpa%2F9l%2FJzCllshf9g%3D; TASession=V2ID.2C4059CFCBC27797DA97994A5CF94A28*SQ.233*LS.PageMoniker*GR.7*TCPAR.44*TBR.80*EXEX.60*ABTR.87*PHTB.57*FS.2*CPU.54*HS.recommended*ES.popularity*DS.5*SAS.popularity*FPS.oldFirst*LF.en*FA.1*DF.0*IR.4*TRA.false*LD.304551; TAUD=LA-1557055610999-1*RDD-1-2019_05_05*RD-75954750-2019_05_06.9784431*HDD-75978369-2019_05_19.2019_05_20.1*HC-76743574*LG-77588176-2.1.F.*LD-77588177-.....',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'
}
response = requests.post("https://www.tripadvisor.com/RestaurantSearch?Action=PAGE&geo=304551&ajax=1&itags=10591&sortOrder=relevance&o=a30&availSearchEnabled=false", headers=header)
"""
Explanation: Expanding this further
To add additional details we can inspect the tags further and add the reviewer rating and reviwer details.
Using the review data and the ratings available is there any way we can improve the corpus of sentiments "word_sentiment.csv" file?
Dynamic Pages
Some websites make request in the background to fetch the data from the server and load it into the page dynamically (often an AJAX request). In this case, the url will not indicate the location of the data. To find such requests, open the Chrome or Firefox Developer Tools, you can load the page, go to the “Network” tab and then look through the all of the requests that are being sent in the background to find the one that’s returning the data you’re looking for. Start by filtering the requests to only XHR or JS to make this easier.
Once you find the AJAX request that returns the data you’re hoping to scrape, then you can make your scraper send requests to this URL, instead of to the parent page’s URL. If you’re lucky, the response will be encoded with JSON which is even easier to parse than HTML.
Spoofing the User Agent
By default, the requests library sets the User-Agent header on each request to something like “python-requests/3.xx.x”. You can change it to identify your web scraper, perhaps providing a contact email address so that an admin from the target website can reach out if they see you in their logs.
More commonly, this can be used to make it appear that the request is coming from a normal web browser, and not a web scraping program.
End of explanation
"""
|
james-prior/euler | euler-204-generalised-hamming-numbers.ipynb | mit | MAX_PRIME = 100
MAX_SMOOTH = 10**9
def is_prime(x):
return all(x % divisor != 0 for divisor in range(2, x))
primes = tuple(x for x in range(2, MAX_PRIME+1) if is_prime(x))
def n_smooth_numbers(n, x, primes, max_smooth):
if not primes:
return n
while True:
n = n_smooth_numbers(n, x, primes[1:], max_smooth)
x *= primes[0]
if x > max_smooth:
return n
n += 1
n_smooth_numbers(1, 1, (2, 3, 5), 30)
def foo():
return n_smooth_numbers(1, 1, primes, MAX_SMOOTH)
%timeit foo()
foo()
"""
Explanation: Project Euler
Generalised Hamming Numbers
Problem 204
A Hamming number is a positive number which has no prime factor larger than 5.
So the first few Hamming numbers are 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15.
There are 1105 Hamming numbers not exceeding 10<sup>8</sup>.
We will call a positive number a generalised Hamming number of type n, if it has no prime factor larger than n.
Hence the Hamming numbers are the generalised Hamming numbers of type 5.
How many generalised Hamming numbers of type 100 are there which don't exceed 10<sup>9</sup>?
The recent email thread starting with
this one
led to someone mentioning Project Euler Problem #204,
so I am playing with Project Euler once again.
End of explanation
"""
def n_smooth_numbers(primes, max_smooth, n=1, x=1):
if not primes:
return n
while True:
n = n_smooth_numbers(primes[1:], max_smooth, n, x)
x *= primes[0]
if x > max_smooth:
return n
n += 1
n_smooth_numbers((2, 3, 5), 30)
def foo():
return n_smooth_numbers(primes, MAX_SMOOTH)
%timeit foo()
foo()
"""
Explanation: Using default values for n and x makes the first call easier
and the execution speed slower.
End of explanation
"""
def n_smooth_numbers(primes, max_smooth, n=1, x=1):
if not primes:
return n
first_prime, *other_primes = primes
while True:
n = n_smooth_numbers(other_primes, max_smooth, n, x)
x *= first_prime
if x > max_smooth:
return n
n += 1
n_smooth_numbers((2, 3, 5), 30)
def foo():
return n_smooth_numbers(primes, MAX_SMOOTH)
%timeit foo()
foo()
"""
Explanation: Using tuple unpacking below makes the code easier to read and slower.
End of explanation
"""
|
geoneill12/phys202-2015-work | assignments/assignment10/ODEsEx01.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
"""
Explanation: Ordinary Differential Equations Exercise 1
Imports
End of explanation
"""
def solve_euler(derivs, y0, x):
"""Solve a 1d ODE using Euler's method.
Parameters
----------
derivs : function
The derivative of the diff-eq with the signature deriv(y,x) where
y and x are floats.
y0 : float
The initial condition y[0] = y(x[0]).
x : np.ndarray, list, tuple
The array of times at which of solve the diff-eq.
Returns
-------
y : np.ndarray
Array of solutions y[i] = y(x[i])
"""
# YOUR CODE HERE
raise NotImplementedError()
assert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
"""
Explanation: Euler's method
Euler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation
$$ \frac{dy}{dx} = f(y(x), x) $$
with the initial condition:
$$ y(x_0)=y_0 $$
Euler's method performs updates using the equations:
$$ y_{n+1} = y_n + h f(y_n,x_n) $$
$$ h = x_{n+1} - x_n $$
Write a function solve_euler that implements the Euler method for a 1d ODE and follows the specification described in the docstring:
End of explanation
"""
def solve_midpoint(derivs, y0, x):
"""Solve a 1d ODE using the Midpoint method.
Parameters
----------
derivs : function
The derivative of the diff-eq with the signature deriv(y,x) where y
and x are floats.
y0 : float
The initial condition y[0] = y(x[0]).
x : np.ndarray, list, tuple
The array of times at which of solve the diff-eq.
Returns
-------
y : np.ndarray
Array of solutions y[i] = y(x[i])
"""
# YOUR CODE HERE
raise NotImplementedError()
assert np.allclose(solve_midpoint(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
"""
Explanation: The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation:
$$ y_{n+1} = y_n + h f\left(y_n+\frac{h}{2}f(y_n,x_n),x_n+\frac{h}{2}\right) $$
Write a function solve_midpoint that implements the midpoint method for a 1d ODE and follows the specification described in the docstring:
End of explanation
"""
def solve_exact(x):
"""compute the exact solution to dy/dx = x + 2y.
Parameters
----------
x : np.ndarray
Array of x values to compute the solution at.
Returns
-------
y : np.ndarray
Array of solutions at y[i] = y(x[i]).
"""
# YOUR CODE HERE
raise NotImplementedError()
assert np.allclose(solve_exact(np.array([0,1,2])),np.array([0., 1.09726402, 12.39953751]))
"""
Explanation: You are now going to solve the following differential equation:
$$
\frac{dy}{dx} = x + 2y
$$
which has the analytical solution:
$$
y(x) = 0.25 e^{2x} - 0.5 x - 0.25
$$
First, write a solve_exact function that compute the exact solution and follows the specification described in the docstring:
End of explanation
"""
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this for grading the plots
"""
Explanation: In the following cell you are going to solve the above ODE using four different algorithms:
Euler's method
Midpoint method
odeint
Exact
Here are the details:
Generate an array of x values with $N=11$ points over the interval $[0,1]$ ($h=0.1$).
Define the derivs function for the above differential equation.
Using the solve_euler, solve_midpoint, odeint and solve_exact functions to compute
the solutions using the 4 approaches.
Visualize the solutions on a sigle figure with two subplots:
Plot the $y(x)$ versus $x$ for each of the 4 approaches.
Plot $\left|y(x)-y_{exact}(x)\right|$ versus $x$ for each of the 3 numerical approaches.
Your visualization should have legends, labeled axes, titles and be customized for beauty and effectiveness.
While your final plot will use $N=10$ points, first try making $N$ larger and smaller to see how that affects the errors of the different approaches.
End of explanation
"""
|
google-research/google-research | gfsa/notebooks/demo_learning_static_analyses.ipynb | apache-2.0 | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
# Download the GFSA codebase
!git clone https://github.com/google-research/google-research.git --depth=1
import os
os.chdir("google-research")
# Install Python packages
!pip install flax labmaze optax tensor2tensor
"""
Explanation: Interactive demo: Learning static analyses with the GFSA layer
This notebook is designed to walk you through training the GFSA layer to perform simple static analyses on Python code, similar to the static analysis experiments in the paper.
Setting up the environment
These instructions are designed for running this demo using Google Colab; if you are using a different environment, the setup instructions may differ!
If you want to follow along, the first step is to connect the Colab runtime to a TPU. You can use the "Runtime > Change runtime type" option in the toolbar above.
Next, install necessary dependencies:
End of explanation
"""
# See https://github.com/google/jax/blob/master/cloud_tpu_colabs/JAX_demo.ipynb
import requests
import os
if 'COLAB_TPU_ADDR' not in os.environ:
raise RuntimeError("Please connect to a TPU runtime first!")
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver_nightly'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# Use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
import jax
jax.devices()
"""
Explanation: Make sure JAX can see the TPU:
End of explanation
"""
import functools
import itertools
import sys
import datetime
from absl import logging
from absl import flags
import astunparse
import dataclasses
import flax
import gast
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
import jax.numpy as jnp
np.set_printoptions(linewidth=150)
logging.use_python_logging()
logging.set_verbosity("info")
logging.set_stderrthreshold("info")
from gfsa import automaton_builder
from gfsa import generic_ast_graphs
from gfsa import graph_types
from gfsa import jax_util
from gfsa import py_ast_graphs
from gfsa import schema_util
from gfsa.datasets import data_loading
from gfsa.datasets import graph_bundle
from gfsa.datasets import graph_edge_util
from gfsa.datasets.random_python import tasks
from gfsa.datasets.random_python import python_numbers_control_flow
from gfsa.model import automaton_layer
from gfsa.model import edge_supervision_models
from gfsa.model import model_util
from gfsa.training import train_edge_supervision_lib
from gfsa.training import train_util
from gfsa.training import simple_runner
from gfsa.training import learning_rate_schedules
from gfsa import visualization
import gfsa.visualization.ndarrays
from gfsa.visualization.pprint import pprint
from gfsa.visualization.pytrees import summarize_tree
"""
Explanation: You should see 8 TPU devices connected above!
Imports and configuration
Let's import and set up some things we will need later:
End of explanation
"""
the_ast = tasks.make_ast(25, np.random.RandomState(6))
print(astunparse.unparse(gast.gast_to_ast(the_ast)))
pprint(the_ast)
"""
Explanation: Generating random Python functions
In this notebook, we will be focusing on graph representations of random Python programs, similar to those used in the static analysis experiments in the paper. We can start by generating some random Python programs!
The GFSA codebase includes functions for generating random Python functions of configurable size. Python functions are represented as gast.AST objects, which can be converted back to source code using astunparse.unparse:
End of explanation
"""
the_ast_larger = tasks.make_ast(150, np.random.RandomState(1234))
print(astunparse.unparse(gast.gast_to_ast(the_ast_larger)))
"""
Explanation: The first argument to make_ast is the number of AST nodes to generate, and the second is the (optional) random seed to use. For instance, here's a larger function:
End of explanation
"""
generic_ast = py_ast_graphs.py_ast_to_generic(the_ast)
pprint(generic_ast)
"""
Explanation: Interpreting Python functions as MDPs
In order to run a GFSA layer on our functions, we need to convert them into graphs, and then impose an MDP structure on those graphs.
This process is performed over two steps:
First, convert the Python AST into a generic representation of abstract syntax trees.
Next, use an "AST specification" to transform the generic AST into an MDP. The specification describes what fields each AST node type has, and how many children each field may contain. This information is used to determine what the states, actions, and transitions in the MDP should be. For instance, since a return node may or may not have a value, the MDP has an action for going to the value of a return node, and an observation that indicates that the value was missing. Similarly, since a function call may have many arguments, the MDP has states that track which argument the agent is visiting, actions that move to the previous or next argument, and an observation for reaching the end of the list.
<br><br>
💬 Note: If you want to apply the GFSA layer to a different type of AST, you need to:
write a function to convert from that AST to the generic AST representation shown below
define an AST specification for it (either by hand, or by using ast_spec_inference.py)
If you want to apply the GFSA layer to a graph that isn't an AST, you need to describe the MDP in more detail. See the notebook "Using the GFSA layer with new tasks", which describes how to do this!
After the first step, we have the same tree structure as before, but instead of various instances of gast.AST classes, we now have GenericASTNode objects tagged with numeric IDs and string types (which makes it easier to manipulate in a uniform way):
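As a toy illustration of what such a uniform tree representation looks like (the class and field names below are ours, not the real `GenericASTNode`):

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical stand-in for the generic AST representation: every node
# carries a numeric ID, a string type, and named fields holding children.
@dataclass
class ToyGenericNode:
    node_id: int
    node_type: str
    fields: Dict[str, List["ToyGenericNode"]]

# `return x` as a generic tree: a Return node whose "value" field
# holds a single Name node.
tree = ToyGenericNode(0, "Return", {"value": [ToyGenericNode(1, "Name", {})]})
assert tree.fields["value"][0].node_type == "Name"
```

Because every node exposes the same `(id, type, fields)` interface, later stages can walk the tree without knowing anything about `gast`'s class hierarchy.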
End of explanation
"""
mdp_graph, id_conversion_map = generic_ast_graphs.ast_to_graph(generic_ast, ast_spec=py_ast_graphs.PY_AST_SPECS)
pprint(mdp_graph)
"""
Explanation: The second step produces a "flat" representation of the tree as a graph-based MDP. Each item in the returned dictionary has an autogenerated key (based on the path through the AST from the root node to each AST node), and a GraphNode value that specifies the type of the node and the set of possible MDP transitions.
<br>
Note on terminology: In the GFSA codebase, we represent MDPs as a special type of directed graph, and use some terms interchangeably. In particular:
Every node in the graph corresponds to a possible location of the agent in the environment.
An out-edge connects each graph node to the other graph nodes that the agent could move TO when at this location. This determines the action space for the agent; the agent chooses an out-edge type, and then moves to (one of) the destination nodes for that out-edge type.
An in-edge connects each graph node to the other graph nodes that the agent could move FROM when arriving at this location. This determines the observation space for the agent; when arriving at a node, it will observe the in-edge type that was used to enter the node.
Each edge in the graph is associated with both an out-edge type and an in-edge type, which specifies how the agent chooses to cross this edge and the observation it receives when it does so. These will often be different. As an example, for a graph representing an AST where B is the left child of A,
the edge from A to B might have outgoing type "left child" and incoming type
"parent"; then an agent can choose to move to the left child, and upon doing so observes that it has arrived at that node from its parent.
An input-tagged node is a pair of (node id, in-edge type). This specifies both the location of an agent and the observation that agent receives at that location. When solving for the absorbing distribution, the set of input-tagged nodes determines the size of the transition matrix.
See graph_types.py for more information about the notation used.
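As a toy illustration of this terminology (the class and function names below are our own sketch, not the actual `graph_types.py` API): each node maps an out-edge type (an action) to destination nodes paired with an in-edge type (the observation received on arrival).

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class ToyNode:
    node_type: str
    # out-edge type (action) -> list of (destination node, in-edge type) pairs
    out_edges: Dict[str, List[Tuple[str, str]]]

# A is the parent of B; B is A's left child.
graph = {
    "A": ToyNode("Root", {"left_child": [("B", "parent")]}),
    "B": ToyNode("Leaf", {"parent": [("A", "left_child")]}),
}

def step(graph, node_id, action):
    """Take an action (an out-edge type); return (new node, observation)."""
    dest, in_edge = graph[node_id].out_edges[action][0]
    return dest, in_edge

# Moving from A via "left_child", the agent arrives at B and observes
# "parent" -- the input-tagged node is the pair ("B", "parent").
print(step(graph, "A", "left_child"))
```

The pair returned by `step` is exactly an "input-tagged node" in the sense above: it fixes both where the agent is and what it just observed.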
End of explanation
"""
py_ast_graphs.SCHEMA
"""
Explanation: This graph is associated with a GraphSchema object that declares the set of all actions and observations for graphs in this MDP family. (As discussed above, the in_edges field specifies the set of observations, and the out_edges field specifies the set of possible actions.)
End of explanation
"""
schema_util.assert_conforms_to_schema(mdp_graph, py_ast_graphs.SCHEMA)
print("assertion passed!")
"""
Explanation: And we can confirm that our MDP graph conforms to this schema:
End of explanation
"""
# Use a distribution with more frequent return/break/continue statements
the_ast = tasks.make_ast(64, np.random.RandomState(3), python_numbers_control_flow.CFG_DISTRIBUTION)
print(astunparse.unparse(gast.gast_to_ast(the_ast)))
"""
Explanation: This means that we can parameterize a GFSA layer based on py_ast_graphs.SCHEMA, and then pass mdp_graph as an input to it.
Computing ground-truth static analyses
Suppose we wish to train a GFSA layer to perform static analyses of Python code. The next step is to build the ground-truth data for those analyses.
The implementation of the static analyses used in the paper is not yet open source, so for now let's consider a simpler form of static analysis: finding the target of return, break, and continue statements. More specifically:
For each return statement, we add an edge from the Return AST node to the FunctionDef AST node defining the function. If there is a return value, we also add an edge from the return value to the FunctionDef AST node.
For each break or continue statement, we add an edge from the Break/Continue AST node to the innermost For/While node that it is contained inside. This is the loop whose execution will be broken out of or continued with, respectively.
(This can be seen as a simplified type of control flow analysis, where we only consider return, break, and continue statements.)
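The break/continue half of this analysis can be sketched with the standard-library `ast` module (a simplified stand-in for the gfsa implementation, which works on `gast` trees and node IDs instead; the function name is ours):

```python
import ast

def jump_targets(source):
    """Map each Break/Continue node to its innermost enclosing loop."""
    tree = ast.parse(source)
    edges = []

    def visit(node, loop_stack):
        for child in ast.iter_child_nodes(node):
            if isinstance(child, (ast.Break, ast.Continue)):
                # The innermost loop is the top of the stack.
                edges.append((type(child).__name__,
                              type(loop_stack[-1]).__name__,
                              loop_stack[-1].lineno))
            if isinstance(child, (ast.For, ast.While)):
                visit(child, loop_stack + [child])
            else:
                visit(child, loop_stack)

    visit(tree, [])
    return edges

src = """
for x in range(3):
    while True:
        break
    continue
"""
print(jump_targets(src))
```

Note how the `break` resolves to the inner `while` while the `continue` resolves to the outer `for`, matching the semantics described above (return statements would need an analogous stack of enclosing `FunctionDef` nodes).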
First, let's sample a program:
End of explanation
"""
generic_ast = py_ast_graphs.py_ast_to_generic(the_ast)
mdp_graph, id_conversion_map = generic_ast_graphs.ast_to_graph(generic_ast, ast_spec=py_ast_graphs.PY_AST_SPECS)
"""
Explanation: Now we convert it to an MDP graph:
End of explanation
"""
target_edges = graph_edge_util.compute_jumps_out_edges(the_ast, id_conversion_map)
target_edges
"""
Explanation: And finally, we compute the target edges for this simple task. Each edge is represented as a tuple (source_id, dest_id, type).
Two edges connect each of the two return statements (denoted root_body_0_item_body_2_item_body_1_item_orelse_0_item_body_0_item__Return and root_body_0_item_body_2_item_body_2_item__Return based on their locations in the AST) to the function definition node (root_body_0_item__FunctionDef).
One edge connects the break statement (root_body_0_item_body_3_item_body_0_item_body_0_item__Break) to the for loop containing it (root_body_0_item_body_3_item_body_0_item__For).
All of these edges have type "EXTRA_JUMPS_OUT_OF", which is just a string that we use to distinguish these edges from other types of edge.
End of explanation
"""
other_edges = []
# Add edges based on the transitions in the MDP (which are in turn based on
# the parent-child relationships in the AST).
# These edges will have types like `SCHEMA_body_out_last` (which means that it
# is derived from the MDP schema, and connects nodes that are reachable using
# the `body_out_last` action. In this case, this is the same as connecting each
# control flow AST node to the last statement in the contained block.)
other_edges.extend(graph_edge_util.compute_schema_edges(mdp_graph))
# Add edges between uses of the same identifier. These edges will have type
# "EXTRA_SAME_IDENTIFIER".
other_edges.extend(
graph_edge_util.compute_same_identifier_edges(the_ast, id_conversion_map))
# For the experiments in the paper, we would also include edges based on more
# sophisticated static analyses, but the code for computing those is not yet
# open-source.
other_edges
all_edges = target_edges + other_edges
"""
Explanation: These edges will be used as the targets for our layer.
Other edges
We can also compute other edges from our AST, which are useful when running a baseline graph neural network on the same targets, or to provide additional observations to the GFSA layer. We won't technically need these for this particular toy problem, so feel free to skip this section, but here they are for completeness:
End of explanation
"""
EDGE_TYPES = sorted({
graph_edge_util.JUMPS_OUT_OF_EDGE_TYPE,
graph_edge_util.SAME_IDENTIFIER_EDGE_TYPE,
*graph_edge_util.schema_edge_types(py_ast_graphs.SCHEMA),
})
# Encode our edges into a coordinate matrix.
encoded_edges = graph_edge_util.encode_edges(all_edges, EDGE_TYPES)
# Encode the MDP as a sparse transition matrix.
builder = automaton_builder.AutomatonBuilder(py_ast_graphs.SCHEMA)
converted_example = graph_bundle.convert_graph_with_edges(mdp_graph, encoded_edges, builder)
# Use `summarize_tree` to replace large arrays with string summaries.
pprint(summarize_tree(converted_example))
"""
Explanation: Converting our examples to NDArrays
The next step is to encode the examples in terms of JAX arrays that we can run our model on.
Each example is encoded as a GraphBundle, which contains:
automaton_graph: a sparse representation of the environment dynamics of the corresponding graph MDP. Conceptually, it contains a list of transitions, each of which is associated with a source node, a source observation, an action, a destination node, a destination observation, and a probability of moving to that destination with that observation when taking that action. It is stored in a particular arrangement to make computation efficient; see the docstring for EncodedGraph in automaton_builder.py for more details.
graph_metadata: information about the size of the graph.
node_types: a list of integer node type indices.
edges: a sparse representation of the labeled adjacency matrix for the graph.
The automaton_graph is used to run the GFSA layer, and the node_types and edges are used to compute the loss and also to run normal graph-neural-network layers.
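As a rough sketch of the idea behind the sparse `edges` field (a toy coordinate-format encoding of our own, not the actual GraphBundle layout):

```python
import numpy as np

# Each labeled edge is a (source index, destination index, edge-type index)
# triple, stored as three parallel coordinate arrays.
edge_types = ["PARENT", "CHILD", "EXTRA_JUMPS_OUT_OF"]
edges = [(0, 1, "CHILD"), (1, 0, "PARENT"), (3, 0, "EXTRA_JUMPS_OUT_OF")]

src = np.array([e[0] for e in edges])
dst = np.array([e[1] for e in edges])
typ = np.array([edge_types.index(e[2]) for e in edges])

def to_dense(src, dst, typ, num_nodes, num_types):
    """Scatter the coordinate triples into a dense [type, src, dst] indicator."""
    dense = np.zeros((num_types, num_nodes, num_nodes))
    dense[typ, src, dst] = 1.0
    return dense

adj = to_dense(src, dst, typ, num_nodes=4, num_types=len(edge_types))
print(adj.sum())  # one nonzero entry per edge
```

The sparse form stays small regardless of graph size, while the dense form is convenient for computing losses against target adjacency matrices.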
End of explanation
"""
padded_example = graph_bundle.pad_example(
converted_example,
config=graph_bundle.PaddingConfig(
static_max_metadata=automaton_builder.EncodedGraphMetadata(
num_nodes=256, num_input_tagged_nodes=512),
max_initial_transitions=1024,
max_in_tagged_transitions=2048,
max_edges=2048))
pprint(summarize_tree(padded_example))
"""
Explanation: This representation of the MDP graph can be fed into our JAX implementation of the GFSA layer.
If desired, it can also be padded to a larger size (for instance, to allow batching graphs with different numbers of nodes). The graph_metadata field keeps track of the original size, so this doesn't lose any information.
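The padding idea can be sketched in a few lines (a hedged illustration of the concept, not the real `pad_example` API):

```python
import numpy as np

def pad_to(arr, max_len):
    """Grow an array to a static maximum size along axis 0.

    Returns the padded array plus the true length, so downstream code
    can mask out the padding (playing the role of graph_metadata here).
    """
    padded = np.zeros((max_len,) + arr.shape[1:], dtype=arr.dtype)
    padded[: arr.shape[0]] = arr
    return padded, arr.shape[0]

node_types = np.array([3, 1, 4, 1, 5])
padded, true_len = pad_to(node_types, max_len=8)
assert padded.shape == (8,)
assert list(padded[:true_len]) == [3, 1, 4, 1, 5]
assert padded[true_len:].sum() == 0  # padding contributes nothing
```

Because every padded example has the same static shape, JAX can compile the model once and batch graphs of different sizes together.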
End of explanation
"""
USE_VARIANTS = False
padding_config = graph_bundle.PaddingConfig(
static_max_metadata=automaton_builder.EncodedGraphMetadata(
num_nodes=256, num_input_tagged_nodes=512),
max_initial_transitions=1024,
max_in_tagged_transitions=2048,
max_edges=2048)
@flax.deprecated.nn.module
def simple_model(example: graph_bundle.GraphBundle):
if USE_VARIANTS:
# Compute the probability of receiving each variant observation. In this case,
# these are one-hot vectors that specify whether the start node and the
# current node are both identifiers with the same name.
variant_weights = edge_supervision_models.variants_from_edges(
example,
graph_metadata=padding_config.static_max_metadata,
variant_edge_type_indices=[EDGE_TYPES.index("EXTRA_SAME_IDENTIFIER")],
num_edge_types=len(EDGE_TYPES))
# Weights are indexed by (start node, current node, variant index).
# `variants_from_edges` sets variants based on whether or not an edge exists
# between the two nodes (and what type that edge has).
assert variant_weights.shape == (
padding_config.static_max_metadata.num_nodes,
padding_config.static_max_metadata.num_nodes,
2)
else:
# Don't use variants. In this case, the agent only observes the current
# node type and the in-edge type used to enter the node.
variant_weights = None
# Pass all of the information into our GFSA layer.
# The flax.deprecated.nn API encapsulates the parameters inside the
# automaton_layer.FiniteStateGraphAutomaton call, so they aren't passed as
# arguments. Instead, they will be initialized when the model is instantiated
# below.
absorbing_probs = automaton_layer.FiniteStateGraphAutomaton(
encoded_graph=example.automaton_graph,
variant_weights=variant_weights,
static_metadata=padding_config.static_max_metadata,
dynamic_metadata=example.graph_metadata,
# The builder, defined above, determines how the MDP is interpreted.
builder=builder,
# Increasing this produces multiple edges by running multiple layers in
# parallel.
num_out_edges=1,
# Whether to share parameters across parallel layers, if num_out_edges > 1
share_states_across_edges=True,
# Controls how many states the automaton has, not counting the initial state.
num_intermediate_states=1,
# Configures how parameters are initialized.
legacy_initialize=False,
# Turns off the adjustment parameters described in Section 3.3
logit_scaling="none",
# Controls \epsilon_{bt-stop} described in Appendix C.2
backtrack_fails_prob=0.001,
).squeeze(axis=0)
# Convert from probability space to logit space to make it easier to define
# the loss function; gracefully handles numerical stability issues by clipping
# gradients.
logits = model_util.safe_logit(absorbing_probs)
return logits
"""
Explanation: Building a model in Colab
Now that we have data that can be passed to a model, we can work on the model itself.
The core solver for the automaton Markov chain's absorbing distribution is implemented in automaton_builder.py. To make it easier to use, it has been wrapped in the flax.deprecated.nn API (note: it is not yet available for the new Linen API, but should be straightforward to adapt if needed).
Let's build a simple model that just runs a single GFSA layer with a few specific hyperparameters.
Note on variants: Variants are an optional input to the GFSA layer, which allows dynamically changing the observations received by the agent depending on its start node and its current node. The variant_weights input chooses between a fixed number of extra observations that the agent can receive, and the automaton can then act in a different way conditioned on this information. (In appendix C.2 of the paper, $\Gamma$ denotes the set of all variants, and each $\gamma \in \Gamma$ is a specific variant index.)
For this simple task, we do not really need to use any variants. However, the experiments in the paper use variants to tell the agent whether two identifier nodes have the same name (for the static analysis experiments) or to pass information from a graph neural network to the GFSA layer (for the variable misuse experiments).
If you would like to see how enabling variants affects the policy parameters, you can set USE_VARIANTS = True below.
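As a toy illustration of what such a variant-weight tensor looks like (our own numpy sketch, not the real `variants_from_edges`):

```python
import numpy as np

# A one-hot [start node, current node, variant] tensor, where variant 1
# fires iff a same-identifier edge links the two nodes.
num_nodes = 4
same_identifier = np.zeros((num_nodes, num_nodes))
same_identifier[1, 3] = same_identifier[3, 1] = 1.0  # nodes 1 and 3 share a name

variant_weights = np.zeros((num_nodes, num_nodes, 2))
variant_weights[..., 1] = same_identifier
variant_weights[..., 0] = 1.0 - same_identifier

# The weights for each (start, current) pair form a distribution over
# variant observations, so they must sum to one.
assert np.allclose(variant_weights.sum(axis=-1), 1.0)
print(variant_weights[1, 3])  # variant 1 selected for the matching pair
```

The automaton can then condition its policy on this extra observation, effectively giving it a different set of transition probabilities for matching identifier pairs.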
End of explanation
"""
_, initial_params = simple_model.init(jax.random.PRNGKey(4321), padded_example)
# Visualize the parameters of the model; Flax implicitly passes these into the
# layer object when the model is run.
pprint(summarize_tree(initial_params))
"""
Explanation: Note that we could also have configured the FiniteStateGraphAutomaton layer with gin, which is how the standalone training scripts work.
Now we can initialize the model parameters:
End of explanation
"""
initial_log_routing_params = initial_params["FiniteStateGraphAutomaton_0"]["log_routing_params_shared"]
visualization.ndarrays.show_routing_params(
builder,
builder.routing_softmax(initial_log_routing_params),
rparams_are_logits=False,
row_split_ct=4,
figsize=(20, 10),
vmax=.5, # to enhance detail
colorbar=False)
"""
Explanation: In particular, the tabular policy for the FSA-based agent is stored in a dataclass of type RoutingParams. We can visualize the entries of it below.
Each row represents a different POMDP observation that the agent might receive, based on its current location and the last action it took. Within each row, the small dark 3x2 rectangles are the probabilities of taking special actions (add-edge-and-stop, backtrack, stop) in each of the two FSM states, and the large variable-sized rectangles are the probabilities of moving to some adjacent node with each possible next state. The "checkerboard" patterns reflect the initialization bias, which encourages the automaton to stay
in the same state more often.
(If you enabled variants, the two columns within each box represent the two "variant" observations. Since we set the variants using EXTRA_SAME_IDENTIFIER, the right column is used when the start node and current node are both variables with the same identifier, and the left column is used in every other case.)
End of explanation
"""
rng = np.random.RandomState(42)
examples = []
while len(examples) < 500:
the_ast = tasks.make_ast(150, rng, python_numbers_control_flow.CFG_DISTRIBUTION)
generic_ast = py_ast_graphs.py_ast_to_generic(the_ast)
mdp_graph, id_conversion_map = generic_ast_graphs.ast_to_graph(generic_ast, ast_spec=py_ast_graphs.PY_AST_SPECS)
all_edges = (
graph_edge_util.compute_jumps_out_edges(the_ast, id_conversion_map)
+ graph_edge_util.compute_same_identifier_edges(the_ast, id_conversion_map)
)
encoded_edges = graph_edge_util.encode_edges(all_edges, EDGE_TYPES)
converted_example = graph_bundle.convert_graph_with_edges(mdp_graph, encoded_edges, builder)
try:
padded_example = graph_bundle.pad_example(
converted_example,
config=padding_config)
examples.append((the_ast, padded_example))
except ValueError:
pass
if len(examples) % 100 == 0:
print(len(examples))
sys.stdout.flush()
"""
Explanation: Training the model
To show how the model can be trained, this section includes a simplified training loop that runs in Colab. (See the "Advanced usage" section for a more efficient way to train a model for practical use.)
We can start by generating a dataset of programs:
End of explanation
"""
def loss_fn(model, example, unused_metadata):
"""Compute cross-entropy loss between model output logits and target edges."""
del unused_metadata
unused_rng = jax.random.PRNGKey(0) # For compatibility with stochastic baselines
loss, metrics_dict = train_edge_supervision_lib.loss_fn(
*train_edge_supervision_lib.extract_outputs_and_targets(
model=model,
padded_example_and_rng=(example, unused_rng),
target_edge_index=EDGE_TYPES.index("EXTRA_JUMPS_OUT_OF"),
num_edge_types=len(EDGE_TYPES)))
return loss, metrics_dict
def data_iterator():
rng = np.random.RandomState(1234)
for epoch in itertools.count():
shuffled_examples = list(enumerate(examples))
rng.shuffle(shuffled_examples)
for example_id, (_, padded_example) in shuffled_examples:
yield train_util.ExampleWithMetadata(
example=padded_example, epoch=epoch, example_id=example_id)
def batch_iterator():
it = data_iterator()
ldc = jax.local_device_count()
it = data_loading.batch(it, (ldc, 32 // ldc))
return it
pprint(summarize_tree(next(batch_iterator())))
"""
Explanation: Now we set up our loss function and data iterator:
End of explanation
"""
initial_optimizer = flax.optim.Adam().create(flax.deprecated.nn.Model(simple_model, initial_params))
"""
Explanation: Now we can construct an optimizer:
End of explanation
"""
run_name = datetime.datetime.now().strftime('%y_%m_%d__%H_%M_%S')
# Takes about 10 to 15 minutes on a TPU.
# If you prefer, you can skip this step and load precomputed weights in the next cell.
trained_optimizer = simple_runner.training_loop(
optimizer=initial_optimizer,
train_iterator=batch_iterator(),
loss_fn=loss_fn,
validation_fn=None,
max_iterations=3_000,
learning_rate_schedule=learning_rate_schedules.ConstantLearningRateSchedule(0.01),
artifacts_dir=f"/tmp/artifacts_{run_name}",
steps_per_save=200,
)
# Set this to:
# - "last" to load final trained weights
# - "precomputed" to load precomputed weights (if you skipped the previous cell)
# - a multiple of 200 to load a checkpoint from partway through training.
load_what = "last"
if load_what == "last":
trained_model = flax.jax_utils.unreplicate(trained_optimizer).target
elif load_what == "precomputed":
checkpoint_optimizer, _ = simple_runner.load_from_checkpoint(initial_optimizer, "./gfsa/notebooks/demo_checkpoint.msgpack")
trained_model = checkpoint_optimizer.target
else:
checkpoint_optimizer, _ = simple_runner.load_from_checkpoint(initial_optimizer, f"/tmp/artifacts_{run_name}/checkpoints/current_at_{load_what}.msgpack")
trained_model = checkpoint_optimizer.target
"""
Explanation: And do a bit of training:
End of explanation
"""
trained_log_routing_params = trained_model.params["FiniteStateGraphAutomaton_0"]["log_routing_params_shared"]
visualization.ndarrays.show_routing_params(
builder,
builder.routing_softmax(trained_log_routing_params),
rparams_are_logits=False,
row_split_ct=4,
figsize=(20, 10),
colorbar=False)
"""
Explanation: Investigating the resulting policy
We can visualize the model weights, and see that the model has learned a policy for solving this simplified static analysis task.
Note that the leftmost pixel is only turned on for a small subset of rows, in particular FunctionDef: body_in, For: parent_in/body_in, and While: body_in. This is because the leftmost pixel corresponds to the "add edge and stop" action, and the only possible destinations for edges in this task are function definitions, for loops, and while loops.
(If you enabled variants: Recall that the EXTRA_SAME_IDENTIFIER edge only connects identifiers, so the parameters in the second column are only used when the agent is at a Name node. As a consequence, only the rows for Name nodes have changed after training.)
End of explanation
"""
the_ast = gast.parse("""
def test_function(foo):
for x in foo:
if x:
return True
else:
continue
while False:
while True:
break
continue
return
""")
generic_ast = py_ast_graphs.py_ast_to_generic(the_ast)
mdp_graph, id_conversion_map = generic_ast_graphs.ast_to_graph(generic_ast, ast_spec=py_ast_graphs.PY_AST_SPECS)
all_edges = (
graph_edge_util.compute_jumps_out_edges(the_ast, id_conversion_map)
+ graph_edge_util.compute_same_identifier_edges(the_ast, id_conversion_map)
)
encoded_edges = graph_edge_util.encode_edges(all_edges, EDGE_TYPES)
converted_example = graph_bundle.convert_graph_with_edges(mdp_graph, encoded_edges, builder)
test_example = graph_bundle.pad_example(
converted_example,
config=padding_config)
def get_short_name(long_name):
"""Helper function to get a short name for an AST's MDP graph node."""
path, nodetype = long_name.split("__")
path_parts = path.split("_")
levels = []
for part in path_parts:
if all(c in '0123456789' for c in part):
levels[-1] = levels[-1] + "[" + part + "]"
else:
levels.append(part)
last = levels[-1]
num_rest = len(levels) - 1
return "|"*num_rest + last + ": " + nodetype
# Generate short summary names for each node, so we can visualize it easily
short_names = [f"{i:2d} " + get_short_name(k) for i, k in enumerate(mdp_graph.keys())]
maxlen = max(len(n) for n in short_names)
padded_names = [n + " " * (maxlen - len(n)) for n in short_names]
"""
Explanation: Let's write a Python function of our own to see how the trained model handles it:
End of explanation
"""
model_output = jax.nn.sigmoid(trained_model(test_example))
target_output = edge_supervision_models.ground_truth_adjacency(
example=test_example,
graph_metadata=padding_config.static_max_metadata,
target_edge_type="EXTRA_JUMPS_OUT_OF",
all_edge_types=EDGE_TYPES)
"""
Explanation: We can run the model on this example:
End of explanation
"""
num_nodes = test_example.graph_metadata.num_nodes
visualization.ndarrays.ndshow(
jnp.stack([target_output, model_output])[:, :num_nodes, :num_nodes],
"crc",
figsize=(20,10),
names={1:padded_names, 2:padded_names},
ticks=True)
"""
Explanation: Visualizing the model on our example, we can see that the model outputs match the ground-truth targets.
In this figure, the targets are on the left, and the model output is on the right. The Y axis indicates the location of the source node, and the X axis corresponds to the location of the target node.
End of explanation
"""
untrained_output = jax.nn.sigmoid(initial_optimizer.target(test_example))
visualization.ndarrays.ndshow(
jnp.stack([target_output, untrained_output])[:, :num_nodes, :num_nodes],
"crc",
figsize=(20,10),
names={1:padded_names, 2:padded_names},
ticks=True)
"""
Explanation: We can compare this to the output of an untrained model:
End of explanation
"""
def unroll_automaton(routing_param_logits: automaton_builder.RoutingParams, example: graph_bundle.GraphBundle, initial_node_index:int, rollout_steps: int):
# Variants are the same as in model definition
variant_weights = edge_supervision_models.variants_from_edges(
example,
graph_metadata=padding_config.static_max_metadata,
variant_edge_type_indices=[EDGE_TYPES.index("EXTRA_SAME_IDENTIFIER")],
num_edge_types=len(EDGE_TYPES))
# Build the transition matrix for the combination of MDP and policy
transition_matrix = builder.build_transition_matrix(
builder.routing_softmax(routing_param_logits),
example.automaton_graph,
padding_config.static_max_metadata)
initial_state = 0
# Unroll
unrolled = automaton_builder.unroll_chain_steps(
builder=builder,
transition_matrix=transition_matrix,
variant_weights=variant_weights[initial_node_index],
start_machine_state=(jnp.arange(2) == initial_state),
node_index=initial_node_index,
steps=rollout_steps)
# Reformat for easier analysis
aggregated = automaton_builder.aggregate_unrolled_per_node(
unrolled,
node_index=initial_node_index,
initial_state=initial_state,
transition_matrix=transition_matrix,
graph_metadata=padding_config.static_max_metadata)
return aggregated
state_colors_dict = {
"State A": mpl.colors.to_rgb("#4e79a7"),
"State B": mpl.colors.to_rgb("#edc948"),
"✓ Add edge and stop": mpl.colors.to_rgb("#59a14f"),
"← Backtrack": mpl.colors.to_rgb("#b07aa1"),
"✗ Stop": mpl.colors.to_rgb("#e15759"),
}
state_symbols = "AB✓←✗"
state_colors = list(state_colors_dict.values())
legend_items = [
mpl.patches.Patch(color=c, label=l)
for l,c in state_colors_dict.items()
]
def show_automaton_steps(steps, ax):
steps = np.array(steps)
ax.imshow(np.einsum("sni,ic->nsc", steps[:, :num_nodes, :], state_colors))
ax.set_yticks(np.arange(num_nodes))
ax.set_yticklabels(labels=padded_names, family="monospace")
ax.set_xlabel("Step number")
for step in range(steps.shape[0]):
for node in range(num_nodes):
for i, sym in enumerate(state_symbols):
d = np.sum(steps[step, node]) + 1e-3
v = steps[step, node, i] / d
if v > 0.05 and d > 0.05:
ax.text(step, node, sym, c="black", alpha=v, verticalalignment="center", horizontalalignment="center")
"""
Explanation: To get a better sense of how the learned policy works, we can simulate each step of the automaton individually, and visualize a heatmap of the location of the automaton at each timestep.
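The unrolling idea itself is just repeated application of the transition matrix to a distribution over states (a minimal numpy sketch, not the gfsa `unroll_chain_steps` API):

```python
import numpy as np

# Start from a one-hot distribution and repeatedly apply the transition
# matrix, recording the distribution at every step; stacking the history
# gives exactly the kind of per-step heatmap plotted below.
T = np.array([[0.1, 0.9],
              [0.0, 1.0]])  # state 1 is absorbing

dist = np.array([1.0, 0.0])
history = [dist]
for _ in range(3):
    dist = dist @ T
    history.append(dist)

history = np.stack(history)
print(history)  # probability mass drains into the absorbing state
```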
End of explanation
"""
starting_positions = [8, 14, 15, 17, 20, 25, 27, 29]
_, ax = plt.subplots(2, 4, figsize=(40, 12))
for i in range(8):
r, c = np.unravel_index(i, (2, 4))
starting_position = starting_positions[i]
steps = unroll_automaton(initial_log_routing_params, test_example, initial_node_index=starting_position, rollout_steps=16)
ax[r, c].set_title(f"Starting from node {starting_position}")
show_automaton_steps(steps, ax[r, c])
if i == 0:
ax[r, c].legend(handles=legend_items, loc="lower right")
"""
Explanation: At initialization time, the untrained policy behaves like a random walk and does not take any particular action with high probability (leading to "blurry" output colors):
End of explanation
"""
starting_positions = [8, 14, 15, 17, 20, 25, 27, 29]
_, ax = plt.subplots(2, 4, figsize=(40, 12))
for i in range(8):
r, c = np.unravel_index(i, (2, 4))
starting_position = starting_positions[i]
steps = unroll_automaton(trained_log_routing_params, test_example, initial_node_index=starting_position, rollout_steps=16)
ax[r, c].set_title(f"Starting from node {starting_position}")
show_automaton_steps(steps, ax[r, c])
if i == 0:
ax[r, c].legend(handles=legend_items, loc="lower right")
"""
Explanation: After training, the automaton policy has learned to take a single sequence of actions with high probability, solving the task:
End of explanation
"""
# Tiny AST to make visualization easier
the_ast = gast.parse("""
def test_function(foo):
if foo:
return
pass
""")
generic_ast = py_ast_graphs.py_ast_to_generic(the_ast)
mdp_graph, id_conversion_map = generic_ast_graphs.ast_to_graph(generic_ast, ast_spec=py_ast_graphs.PY_AST_SPECS)
"""
Explanation: Some things to notice:
When starting from node 8 or node 20 (not relevant to the task), the automaton moves to the parent node, sees that it is not a return statement, and halts without adding an edge.
When starting from nodes 14, 15, or 29 (return statements or return values), the automaton switches to state B (yellow), walks up the tree until reaching the function definition, and adds an edge there.
When starting from nodes 17, 25, or 27, it stays in state A (blue) and stops at the first while loop or for loop it finds.
(Note that different random seeds or hyperparameters may result in slightly different state-changing behaviors for this example.)
Manually building the transition matrix and solving for the absorbing states
The sections above have taken care of running the automaton on the input graph automatically. However, if you want more control, you can run each of the steps individually.
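Before diving into the gfsa-specific steps, it may help to recall the textbook computation the solver is performing: given transient-to-transient transitions Q and transient-to-absorbing transitions R, the absorption probabilities are B = (I - Q)^{-1} R. A tiny self-contained numpy sketch of the math (not the gfsa solver itself, which works on the sparse blocks shown below):

```python
import numpy as np

# Toy absorbing Markov chain with 2 transient and 2 absorbing states.
# Q: transient -> transient, R: transient -> absorbing; rows of [Q | R]
# sum to one.
Q = np.array([[0.0, 0.5],
              [0.3, 0.0]])
R = np.array([[0.5, 0.0],
              [0.0, 0.7]])

# Absorption probabilities via a linear solve: B = (I - Q)^{-1} R.
B = np.linalg.solve(np.eye(2) - Q, R)
assert np.allclose(B.sum(axis=1), 1.0)  # every start is eventually absorbed
print(B)
```

In the GFSA layer, the "absorbing states" are the special actions (add-edge-and-stop, backtrack, stop) at each node, and B gives the probability of each automaton run ending by adding each possible edge.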
End of explanation
"""
encoded_mdp, mdp_metadata = builder.encode_graph(mdp_graph, as_jax=False)
pprint(summarize_tree(encoded_mdp))
mdp_metadata
"""
Explanation: Instead of using graph_bundle.convert_graph_with_edges, which builds both the MDP environment and the extra edges at the same time, we can directly build the environment dynamics using the automaton builder object:
End of explanation
"""
learned_policy = builder.routing_softmax(trained_log_routing_params)
transition_matrix = builder.build_transition_matrix(
rparams=learned_policy,
graph=encoded_mdp,
metadata=mdp_metadata)
pprint(summarize_tree(transition_matrix))
"""
Explanation: Each of the fields in the EncodedGraph object describes a different part of the environment dynamics. However, they are stored in a sparse form that is difficult to visualize. In particular, they are encoded as sparse operators that transform a policy into a transition matrix, by indicating which policy actions result in which state changes (see the docstring for EncodedGraph in automaton_builder.py for more details).
This EncodedGraph could be fed into the high-level Flax wrapper. But it's also possible to run each of the steps directly in JAX.
To obtain a transition matrix, we need to combine our encoded graph with some parameters. Let's use the ones from our trained model:
End of explanation
"""
visualization.ndarrays.ndshow(
# Visualize only the first variant, which is used for almost all of the nodes
transition_matrix.initial_to_in_tagged[0],
"ccrr", figsize=(4, 8),
colskips=lambda _: 0, rowskips=lambda _: 0)
"""
Explanation: The result is still broken up into blocks, which will be used to efficiently solve the linear system. First, we have the transitions from each possible (initial_node, initial_state) to each possible (node, observation, hidden_state) tuple (in other words, the transition matrix for the starting state). The columns here represent the different possible start nodes and start states, and the rows represent the possible (node, observation, hidden_state) tuples.
End of explanation
"""
visualization.ndarrays.ndshow(
transition_matrix.initial_to_special[0],
"ccr", figsize=(10, 1),
colskips=lambda _: 0, rowskips=lambda _: 0)
"""
Explanation: Next, we have immediate transitions from the initial state to a halting state. Here, the columns represent the initial nodes, and each row corresponds to a special action (add edge and stop, backtrack, stop).
End of explanation
"""
visualization.ndarrays.ndshow(
transition_matrix.in_tagged_to_in_tagged[0],
"ccrr", figsize=(15, 14),
colskips=lambda _: 0, rowskips=lambda _: 0)
"""
Explanation: Next we have the transitions from (node, observation, hidden_state) to (node, observation, hidden_state), which make up the majority of the transitions in any given trajectory:
End of explanation
"""
visualization.ndarrays.ndshow(
transition_matrix.in_tagged_to_special[0],
"ccr", figsize=(20, 1),
colskips=lambda _: 0, rowskips=lambda _: 0)
"""
Explanation: Finally, we have transitions from intermediate states to halting states:
End of explanation
"""
num_nodes = mdp_metadata.num_nodes
absorbing_distribution = automaton_builder.all_nodes_absorbing_solve(
builder=builder,
transition_matrix=transition_matrix,
variant_weights=jax.nn.one_hot(jnp.zeros([num_nodes, num_nodes]), 2 if USE_VARIANTS else 1), # should sum to 1 across last axis if using multiple variants
start_machine_states=jax.nn.one_hot(jnp.zeros([num_nodes]), 2), # should sum to 1 across last axis if using multiple states
steps=100, # How long to run the solver for; determines the maximum trajectory length
)
pprint(summarize_tree(absorbing_distribution))
visualization.ndarrays.ndshow(absorbing_distribution, figsize=(8, 6))
"""
Explanation: (Try adding a for or while loop to the test function and seeing how the transition matrix changes!)
We can solve for the absorbing distribution for this transition matrix using all_nodes_absorbing_solve, which does the heavy lifting of constructing and solving a batch of linear systems, and also sets up the implicit differentiation machinery so that gradients can be computed efficiently.
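For intuition about what the solver is doing, here is the underlying linear algebra on a toy absorbing Markov chain. This is a hand-rolled NumPy sketch of a single absorbing-state solve, not the batched, differentiable GFSA implementation:

```python
import numpy as np

# Toy absorbing Markov chain with 2 transient and 2 absorbing states.
# Q: transient -> transient probabilities, R: transient -> absorbing.
Q = np.array([[0.5, 0.2],
              [0.1, 0.6]])
R = np.array([[0.3, 0.0],
              [0.0, 0.3]])

# Fundamental matrix N = (I - Q)^-1 gives expected visit counts;
# B = N @ R is the distribution over absorbing states per start state.
N = np.linalg.inv(np.eye(2) - Q)
B = N @ R

# Every trajectory is eventually absorbed, so each row of B sums to 1.
print(B.sum(axis=1))
```

all_nodes_absorbing_solve performs the batched analogue of this computation (iteratively, over a fixed number of steps) and additionally wires up implicit differentiation through the solve.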
End of explanation
"""
visualization.ndarrays.ndshow(absorbing_distribution, figsize=(8, 6), vmax=0.002)
"""
Explanation: This matrix of absorbing probabilities is then optionally postprocessed with adjustment parameters, and then returned as the output of the GFSA layer. In this case, the layer adds one edge with high confidence.
This is a weighted adjacency matrix, so there are also weak connections between other nodes. We can visualize this by changing the colormap:
End of explanation
"""
|
3upperm2n/notes-deeplearning | tensorboard/tensorboard/Anna KaRNNa Name Scoped.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
"""
Explanation: First we'll load the text file and convert it into integers for our network to use.
End of explanation
"""
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Number of sequences in each batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
"""
Explanation: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
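To make the input/target shift concrete, here it is on a tiny integer array standing in for the encoded characters (an illustrative miniature of the slicing done in split_data):

```python
import numpy as np

chars = np.arange(13)                  # stand-in for the encoded character array
batch_size, num_steps = 2, 3
slice_size = batch_size * num_steps
n_batches = len(chars) // slice_size   # 2 full batches fit

x = chars[: n_batches * slice_size]
y = chars[1: n_batches * slice_size + 1]
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))

print(x)  # [[ 0  1  2  3  4  5] [ 6  7  8  9 10 11]]
print(y)  # every target is the character that follows its input
```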
End of explanation
"""
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
with tf.name_scope('inputs'):
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
with tf.name_scope('targets'):
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
with tf.name_scope("RNN_layers"):
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
with tf.name_scope("RNN_init_state"):
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
with tf.name_scope("RNN_forward"):
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
with tf.name_scope('sequence_reshape'):
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
with tf.name_scope('logits'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
with tf.name_scope('predictions'):
preds = tf.nn.softmax(logits, name='predictions')
with tf.name_scope('cost'):
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
with tf.name_scope('train'):
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
"""
Explanation: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size x num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window over by num_steps characters. In this way we can feed batches to the network, and the cell states will carry over from one batch to the next.
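Concretely, the windowing behaves like this on a toy array (a small self-contained copy of the get_batch logic, for illustration):

```python
import numpy as np

def get_batch(arrs, num_steps):
    batch_size, slice_size = arrs[0].shape
    n_batches = slice_size // num_steps
    for b in range(n_batches):
        yield [a[:, b * num_steps:(b + 1) * num_steps] for a in arrs]

x = np.arange(12).reshape(2, 6)
windows = [w for (w,) in get_batch([x], num_steps=3)]

# The window slides left to right across each row of the 2D matrix:
print(windows[0])  # [[0 1 2] [6 7 8]]
print(windows[1])  # [[ 3  4  5] [ 9 10 11]]
```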
End of explanation
"""
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
End of explanation
"""
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/3', sess.graph)
"""
Explanation: Write out the graph for TensorBoard
End of explanation
"""
!mkdir -p checkpoints/anna
epochs = 10
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
"""
Explanation: Training
Time for training, which is pretty straightforward. Here I pass in some data and get an LSTM state back. Then I pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
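To see the effect of the top-N cutoff, here is the same idea on a hand-made distribution (a small copy of pick_top_n, for illustration):

```python
import numpy as np

def pick_top_n(preds, vocab_size, top_n=5):
    p = np.squeeze(preds).copy()
    p[np.argsort(p)[:-top_n]] = 0  # zero out all but the top_n probabilities
    p = p / np.sum(p)              # renormalize what is left
    return np.random.choice(vocab_size, p=p)

preds = np.array([0.05, 0.4, 0.3, 0.05, 0.1, 0.1])
c = pick_top_n(preds, len(preds), top_n=2)
# With top_n=2 only indices 1 and 2 (the two largest) can ever be drawn.
print(c)
```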
End of explanation
"""
|
ContinualAI/avalanche | notebooks/how-tos/dataloading_buffers_replay.ipynb | mit | !pip install avalanche-lib
"""
Explanation: description: How to implement replay and data loading
Dataloading, Memory Buffers, and Replay
Avalanche provides several components that help you to balance data loading and implement rehearsal strategies.
Dataloaders are used to provide balancing between groups (e.g. tasks/classes/experiences). This is especially useful when you have unbalanced data.
Buffers are used to store data from the previous experiences. They are dynamic datasets with a fixed maximum size, and they can be updated with new data continuously.
Finally, Replay strategies implement rehearsal by using Avalanche's plugin system. Most rehearsal strategies use a custom dataloader to balance the buffer with the current experience and a buffer that is updated for each experience.
First, let's install Avalanche. You can skip this step if you have installed it already.
End of explanation
"""
from avalanche.benchmarks import SplitMNIST
from avalanche.benchmarks.utils.data_loader import GroupBalancedDataLoader
benchmark = SplitMNIST(5, return_task_id=True)
dl = GroupBalancedDataLoader([exp.dataset for exp in benchmark.train_stream], batch_size=4)
for x, y, t in dl:
print(t.tolist())
break
"""
Explanation: Dataloaders
Avalanche dataloaders are simple iterators, located under avalanche.benchmarks.utils.data_loader. Their interface is equivalent to PyTorch's dataloaders. For example, GroupBalancedDataLoader takes a sequence of datasets and iterates over them by providing balanced mini-batches, where the number of samples is split equally among groups. Internally, it instantiates a DataLoader for each separate group. More specialized dataloaders exist, such as TaskBalancedDataLoader.
All the dataloaders accept keyword arguments (**kwargs) that are passed directly to the dataloaders for each group.
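The balancing idea itself is simple. Here is a minimal, Avalanche-independent sketch of group-balanced batching (illustrative only, not the library implementation; names are made up):

```python
from itertools import cycle, islice

def group_balanced_batches(groups, per_group=2):
    # Cycle over each group so smaller groups are oversampled,
    # then draw per_group samples from every group for each batch.
    iterators = [cycle(g) for g in groups]
    n_batches = max(len(g) for g in groups) // per_group
    for _ in range(n_batches):
        batch = []
        for it in iterators:
            batch.extend(islice(it, per_group))
        yield batch

groups = [['a1', 'a2', 'a3', 'a4'], ['b1', 'b2']]
batches = list(group_balanced_batches(groups))
for batch in batches:
    print(batch)  # every batch holds 2 samples from each group
```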
End of explanation
"""
from avalanche.training.storage_policy import ReservoirSamplingBuffer
from types import SimpleNamespace
benchmark = SplitMNIST(5, return_task_id=False)
storage_p = ReservoirSamplingBuffer(max_size=30)
print(f"Max buffer size: {storage_p.max_size}, current size: {len(storage_p.buffer)}")
"""
Explanation: Memory Buffers
Memory buffers store data up to a maximum capacity, and they implement policies to select which data to store and which to remove when the buffer is full. They are available in the module avalanche.training.storage_policy. The base class is ExemplarsBuffer, which implements two methods:
- update(strategy) - given the strategy's state it updates the buffer (using the data in strategy.experience.dataset).
- resize(strategy, new_size) - updates the maximum size and updates the buffer accordingly.
The data can be accessed using the buffer attribute.
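For intuition, the classic reservoir sampling policy behind ReservoirSamplingBuffer fits in a few lines. This is an illustrative sketch of the algorithm, not the Avalanche code:

```python
import random

def reservoir_update(buffer, stream, max_size, seen_so_far=0):
    # Each item ends up in the buffer with probability max_size / items_seen.
    for item in stream:
        seen_so_far += 1
        if len(buffer) < max_size:
            buffer.append(item)
        else:
            j = random.randrange(seen_so_far)
            if j < max_size:
                buffer[j] = item  # evict a random resident item
    return seen_so_far

random.seed(0)
buf = []
seen = reservoir_update(buf, range(100), max_size=10)
print(len(buf), seen)  # the buffer never exceeds max_size
```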
End of explanation
"""
for i in range(5):
strategy_state = SimpleNamespace(experience=benchmark.train_stream[i])
storage_p.update(strategy_state)
print(f"Max buffer size: {storage_p.max_size}, current size: {len(storage_p.buffer)}")
print(f"class targets: {storage_p.buffer.targets}\n")
"""
Explanation: At first, the buffer is empty. We can update it with data from a new experience.
Notice that we use a SimpleNamespace because we want to use the buffer standalone, without instantiating an Avalanche strategy. Reservoir sampling requires only the experience from the strategy's state.
End of explanation
"""
from avalanche.training.storage_policy import ParametricBuffer, RandomExemplarsSelectionStrategy
storage_p = ParametricBuffer(
max_size=30,
groupby='class',
selection_strategy=RandomExemplarsSelectionStrategy()
)
print(f"Max buffer size: {storage_p.max_size}, current size: {len(storage_p.buffer)}")
for i in range(5):
strategy_state = SimpleNamespace(experience=benchmark.train_stream[i])
storage_p.update(strategy_state)
print(f"Max buffer size: {storage_p.max_size}, current size: {len(storage_p.buffer)}")
print(f"class targets: {storage_p.buffer.targets}\n")
"""
Explanation: Notice that after each update some samples are replaced with new data. Reservoir sampling selects these samples uniformly at random.
Avalanche offers many more storage policies. For example, ParametricBuffer is a buffer split into several groups according to the groupby parameter (None, 'class', 'task', 'experience'), and according to an optional ExemplarsSelectionStrategy (random selection is the default choice).
End of explanation
"""
for k, v in storage_p.buffer_groups.items():
print(f"(group {k}) -> size {len(v.buffer)}")
datas = [v.buffer for v in storage_p.buffer_groups.values()]
dl = GroupBalancedDataLoader(datas)
for x, y, t in dl:
print(y.tolist())
break
"""
Explanation: The advantage of using grouping buffers is that you get a balanced rehearsal buffer. You can even access the groups separately with the buffer_groups attribute. Combined with balanced dataloaders, you can ensure that the mini-batches stay balanced during training.
End of explanation
"""
from avalanche.benchmarks.utils.data_loader import ReplayDataLoader
from avalanche.training.plugins import StrategyPlugin
class CustomReplay(StrategyPlugin):
def __init__(self, storage_policy):
super().__init__()
self.storage_policy = storage_policy
def before_training_exp(self, strategy,
num_workers: int = 0, shuffle: bool = True,
**kwargs):
""" Here we set the dataloader. """
if len(self.storage_policy.buffer) == 0:
# first experience. We don't use the buffer, no need to change
# the dataloader.
return
# replay dataloader samples mini-batches from the memory and current
# data separately and combines them together.
print("Override the dataloader.")
strategy.dataloader = ReplayDataLoader(
strategy.adapted_dataset,
self.storage_policy.buffer,
oversample_small_tasks=True,
num_workers=num_workers,
batch_size=strategy.train_mb_size,
shuffle=shuffle)
def after_training_exp(self, strategy: "BaseStrategy", **kwargs):
""" We update the buffer after the experience.
You can use a different callback to update the buffer in a different place
"""
print("Buffer update.")
self.storage_policy.update(strategy, **kwargs)
"""
Explanation: Replay Plugins
Avalanche's strategy plugins can be used to update the rehearsal buffer and set the dataloader. This makes it easy to implement replay strategies:
End of explanation
"""
from torch.nn import CrossEntropyLoss
from avalanche.training import Naive
from avalanche.evaluation.metrics import accuracy_metrics
from avalanche.training.plugins import EvaluationPlugin
from avalanche.logging import InteractiveLogger
from avalanche.models import SimpleMLP
import torch
scenario = SplitMNIST(5)
model = SimpleMLP(num_classes=scenario.n_classes)
storage_p = ParametricBuffer(
max_size=500,
groupby='class',
selection_strategy=RandomExemplarsSelectionStrategy()
)
# choose some metrics and evaluation method
interactive_logger = InteractiveLogger()
eval_plugin = EvaluationPlugin(
accuracy_metrics(experience=True, stream=True),
loggers=[interactive_logger])
# CREATE THE STRATEGY INSTANCE (NAIVE)
cl_strategy = Naive(model, torch.optim.Adam(model.parameters(), lr=0.001),
CrossEntropyLoss(),
train_mb_size=100, train_epochs=1, eval_mb_size=100,
plugins=[CustomReplay(storage_p)],
evaluator=eval_plugin
)
# TRAINING LOOP
print('Starting experiment...')
results = []
for experience in scenario.train_stream:
print("Start of experience ", experience.current_experience)
cl_strategy.train(experience)
print('Training completed')
print('Computing accuracy on the whole test set')
results.append(cl_strategy.eval(scenario.test_stream))
"""
Explanation: And of course, we can use the plugin to train our continual model
End of explanation
"""
|
timothydmorton/usrp-sciprog | day2/exercises/Solutions/Python_answers.ipynb | mit | #I don't think this is the code golf winner. Try to beat me.
for i in range(100):
print('FizzBuzz'*(not (i+1)%3)*(not (i+1)%5) or 'Fizz'*(not (i+1)%3) or 'Buzz'*(not (i+1)%5) or str(i+1))
"""
Explanation: Exercises
Rules:
Every variable/function/class name should be meaningful
Variable/function names should be lowercase, class names uppercase
Write a documentation string (even if minimal) for every function.
1) (From jakevdp): Create a program (a .py file) which repeatedly asks the user for a word. The program should append all the words together. When the user types a "!", "?", or a ".", the program should print the resulting sentence and exit.
For example, a session might look like this::
$ ./make_sentence.py
Enter a word (. ! or ? to end): My
Enter a word (. ! or ? to end): name
Enter a word (. ! or ? to end): is
Enter a word (. ! or ? to end): Walter
Enter a word (. ! or ? to end): White
Enter a word (. ! or ? to end): !
My name is Walter White!
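One possible solution, sketched here as a function over an iterable of words so it can be exercised without a terminal (the actual script would call input() in a loop instead):

```python
def make_sentence(words):
    """Join words until one of the end marks . ! ? is entered."""
    sentence = []
    for word in words:
        if word in ('.', '!', '?'):
            return ' '.join(sentence) + word
        sentence.append(word)
    return ' '.join(sentence)

print(make_sentence(['My', 'name', 'is', 'Walter', 'White', '!']))
# My name is Walter White!
```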
2) (From jakevdp): Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”. If you finish quickly... see how few characters you can write this program in (this is known as "code golf": going for the fewest key strokes).
End of explanation
"""
def sum_digits(number):
'''
Function that takes a number as an input and sums its digits
Parameters
number: int
an integer number with several digits
Returns
total: int
the sum of the digits in number
'''
string_num = str(number)
total = 0
for letter in string_num:
total += int(letter)
return total
def list_multiple(number, limit = None):
'''
Function that lists the sum_digits of every multiple of the input
until the limit or the square of number is reached
Parameters
number: int
an integer
Optional
limit: int
limit on the multiple of number for which to print its sum_digits()
Returns
None
'''
if limit is None:
limit = number**2
mul = 1
result = number
while result <= limit:
print(sum_digits(result))
mul += 1
result = mul * number
return
print(sum_digits(128))
list_multiple(4, 21)
"""
Explanation: 3) Write a function called sum_digits that returns the sum of the digits of an integer argument; that is, sum_digits(123) should return 6. Use this function in another function that prints out the sum of the digits of every integer multiple of the first argument, up to either a second optional argument (if included) or the first argument's square. That is::
list_multiple(4) #with one argument
4
8
3
7
And I'll let you figure out what it looks like with a second optional argument
End of explanation
"""
|
DJCordhose/big-data-visualization | code/notebooks/analysis-dask.ipynb | mit | # http://dask.pydata.org/en/latest/dataframe-overview.html
%time lazy_df = dd.read_csv('../../data/raw/2001.csv', encoding='iso-8859-1')
%time len(lazy_df)
# http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.sample
s = 10000 # desired sample size
n = 5967780
fraction = s / n
df = lazy_df.sample(fraction)
%time len(df)
df.head()
"""
Explanation: Loading data using Dask (loads lazily)
End of explanation
"""
# first turn our 10000 samples into a normal pandas df for convenience
%time df = df.compute()
# http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html
# turn those text labels into numerical
text_cols = ['UniqueCarrier', 'Origin', 'Dest']
le = preprocessing.LabelEncoder()
for c in text_cols:
# print (c,set(df[c].values))
flist = list(set(df[c].values))
# print(flist)
le.fit(flist)
leo = le.transform(flist)
# print (c,flist,leo)
df[c+'_'] = df[c]
df[c+'_'].replace(flist,value=leo,inplace=True)
df.head()
"""
Explanation: Create numeral versions of categoricals for later analysis
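The mapping built in the loop above assigns each distinct label an integer in sorted order. A dependency-free sketch of the same idea (encode_labels is a hypothetical helper for illustration, mirroring what LabelEncoder.fit_transform produces):

```python
def encode_labels(values):
    # Mirror LabelEncoder.fit_transform: codes follow sorted label order.
    classes = sorted(set(values))
    lookup = {c: i for i, c in enumerate(classes)}
    return [lookup[v] for v in values], classes

codes, classes = encode_labels(['UA', 'AA', 'UA', 'DL'])
print(codes)    # [2, 0, 2, 1]
print(classes)  # ['AA', 'DL', 'UA']
```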
End of explanation
"""
df.fillna(-1, inplace=True)
df.head()
cols_for_correlation = [
'DayOfWeek',
'DepTime',
'ArrTime',
'ArrDelay',
'Distance',
'UniqueCarrier_',
'Origin_',
'Dest_'
]
corrmat = df[cols_for_correlation].corr()
sns.heatmap(corrmat, annot=True)
figure = plt.gcf()
figure.set_size_inches(10, 10)
# plt.show()
plt.savefig(IMG_DIR+'/corr.png', dpi = DPI)
def plot(col1, col2):
# https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.jointplot.html#seaborn.jointplot
sns.jointplot(df[col1],df[col2],dropna=True, kind="hex")
figure = plt.gcf()
figure.set_size_inches(10, 10)
# for notebook
# plt.show()
plt.savefig('%s/%s_%s.png'%(IMG_DIR, col1, col2), dpi = DPI)
plot('ArrTime', 'DepTime')
plot('Distance', 'UniqueCarrier_')
plot('Origin_', 'UniqueCarrier_')
"""
Explanation: Replace NaN with -1 (we have plenty of them)
End of explanation
"""
# 2400 is not a valid time
df['CRSDepTime'] = df.apply(lambda row: 2359 if row['CRSDepTime'] == 2400 else row['CRSDepTime'],axis='columns')
df['@timestamp'] = df.apply(lambda row: pd.Timestamp('%s-%s-%s;%04d'%(row['Year'], row['Month'], row['DayofMonth'], row['CRSDepTime'])),axis='columns')
df.head()
timestamps = df['@timestamp']
plt.hist(timestamps.tolist(), bins=365, histtype = 'step', color='black')
plt.show()
10000 / 365
plt.hist(timestamps.tolist(), bins=12, histtype = 'bar')
plt.show()
"""
Explanation: Correct some timestamps and add a composed timestamp for easy reference
End of explanation
"""
df['Cancelled'] = df.apply(lambda row: False if row['Cancelled'] == 0 else True, axis='columns')
df['Diverted'] = df.apply(lambda row: False if row['Diverted'] == 0 else True, axis='columns')
df.head()
"""
Explanation: Convert fields 'cancelled' and 'diverted' to boolean
End of explanation
"""
|
AllenDowney/ModSim | soln/chap12.ipynb | gpl-2.0 | # install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
"""
Explanation: Chapter 12
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
"""
from modsim import State, System
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
from numpy import arange
from modsim import TimeFrame
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.loc[system.t0] = system.init
for t in arange(system.t0, system.t_end):
frame.loc[t+1] = update_func(frame.loc[t], t, system)
return frame
"""
Explanation: Code
Here's the code from the previous notebook that we'll need.
End of explanation
"""
def add_immunization(system, fraction):
system.init.S -= fraction
system.init.R += fraction
"""
Explanation: In the previous chapter I presented the SIR model of infectious disease and used it to model the Freshman Plague at Olin. In this chapter we'll consider metrics intended to quantify the effects of the disease and interventions intended to reduce those effects.
Immunization
Models like this are useful for testing "what if?" scenarios. As an
example, we'll consider the effect of immunization.
Suppose there is a vaccine that causes a student to become immune to the Freshman Plague without being infected. How might you modify the model to capture this effect?
One option is to treat immunization as a shortcut from susceptible to
recovered without going through infectious. We can implement this
feature like this:
End of explanation
"""
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
"""
Explanation: add_immunization moves the given fraction of the population from S
to R.
End of explanation
"""
system2 = make_system(beta, gamma)
add_immunization(system2, 0.1)
results2 = run_simulation(system2, update_func)
"""
Explanation: If we assume that 10% of students are vaccinated at the
beginning of the semester, and the vaccine is 100% effective, we can
simulate the effect like this:
End of explanation
"""
results.S.plot(label='No immunization')
results2.S.plot(label='10% immunization')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
"""
Explanation: The following figure shows S as a function of time, with and
without immunization.
End of explanation
"""
def calc_total_infected(results, system):
s_0 = results.S[system.t0]
s_end = results.S[system.t_end]
return s_0 - s_end
calc_total_infected(results, system)
calc_total_infected(results2, system2)
"""
Explanation: Metrics
When we plot a time series, we get a view of everything that happened
when the model ran, but often we want to boil it down to a few numbers
that summarize the outcome. These summary statistics are called
metrics, as we saw in Section xxx.
In the SIR model, we might want to know the time until the peak of the
outbreak, the number of people who are sick at the peak, the number of
students who will still be sick at the end of the semester, or the total number of students who get sick at any point.
As an example, I will focus on the last one --- the total number of sick students --- and we will consider interventions intended to minimize it.
When a person gets infected, they move from S to I, so we can get
the total number of infections by computing the difference in S at the beginning and the end:
End of explanation
"""
def sweep_immunity(immunize_array):
sweep = SweepSeries()
for fraction in immunize_array:
sir = make_system(beta, gamma)
add_immunization(sir, fraction)
results = run_simulation(sir, update_func)
sweep[fraction] = calc_total_infected(results, sir)
return sweep
"""
Explanation: Without immunization, almost 47% of the population gets infected at some point. With 10% immunization, only 31% get infected. That's pretty good.
Sweeping Immunization
Now let's see what happens if we administer more vaccines. The
following function sweeps a range of immunization rates:
End of explanation
"""
from numpy import linspace

immunize_array = linspace(0, 1, 21)
infected_sweep = sweep_immunity(immunize_array)
infected_sweep.plot()
decorate(xlabel='Fraction immunized',
ylabel='Total fraction infected',
title='Fraction infected vs. immunization rate')
"""
Explanation: The parameter of sweep_immunity is an array of immunization rates. The
result is a SweepSeries object that maps from each immunization rate
to the resulting fraction of students ever infected.
The following figure shows a plot of the SweepSeries. Notice that
the x-axis is the immunization rate, not time.
End of explanation
"""
from numpy import exp
def logistic(x, A=0, B=1, C=1, M=0, K=1, Q=1, nu=1):
"""Computes the generalize logistic function.
A: controls the lower bound
B: controls the steepness of the transition
C: not all that useful, AFAIK
M: controls the location of the transition
K: controls the upper bound
Q: shift the transition left or right
nu: affects the symmetry of the transition
returns: float or array
"""
exponent = -B * (x - M)
denom = C + Q * exp(exponent)
return A + (K-A) / denom ** (1/nu)
"""
Explanation: As the immunization rate increases, the number of infections drops
steeply. If 40% of the students are immunized, fewer than 4% get sick.
That's because immunization has two effects: it protects the people who get immunized (of course) but it also protects the rest of the
population.
Reducing the number of "susceptibles" and increasing the number of
"resistants" makes it harder for the disease to spread, because some
fraction of contacts are wasted on people who cannot be infected. This
phenomenon is called herd immunity, and it is an important element
of public health (see http://modsimpy.com/herd).
The steepness of the curve is a blessing and a curse. It's a blessing
because it means we don't have to immunize everyone, and vaccines can
protect the "herd" even if they are not 100% effective.
But it's a curse because a small decrease in immunization can cause a
big increase in infections. In this example, if we drop from 80%
immunization to 60%, that might not be too bad. But if we drop from 40% to 20%, that would trigger a major outbreak, affecting more than 15% of the population. For a serious disease like measles, just to name one, that would be a public health catastrophe.
One use of models like this is to demonstrate phenomena like herd
immunity and to predict the effect of interventions like vaccination.
Another use is to evaluate alternatives and guide decision making. We'll see an example in the next section.
Hand washing
Suppose you are the Dean of Student Life, and you have a budget of just \$1200 to combat the Freshman Plague. You have two options for spending this money:
You can pay for vaccinations, at a rate of \$100 per dose.
You can spend money on a campaign to remind students to wash hands
frequently.
We have already seen how we can model the effect of vaccination. Now
let's think about the hand-washing campaign. We'll have to answer two
questions:
How should we incorporate the effect of hand washing in the model?
How should we quantify the effect of the money we spend on a
hand-washing campaign?
For the sake of simplicity, let's assume that we have data from a
similar campaign at another school showing that a well-funded campaign
can change student behavior enough to reduce the infection rate by 20%.
In terms of the model, hand washing has the effect of reducing beta.
That's not the only way we could incorporate the effect, but it seems
reasonable and it's easy to implement.
Now we have to model the relationship between the money we spend and the
effectiveness of the campaign. Again, let's suppose we have data from
another school that suggests:
If we spend \$500 on posters, materials, and staff time, we can
change student behavior in a way that decreases the effective value of beta by 10%.
If we spend \$1000, the total decrease in beta is almost 20%.
Above \$1000, additional spending has little additional benefit.
Logistic function
To model the effect of a hand-washing campaign, I'll use a generalized logistic function (GLF), which is a convenient function for modeling curves that have a generally sigmoid shape. The parameters of the GLF correspond to various features of the curve in a way that makes it easy to find a function that has the shape you want, based on data or background information about the scenario.
End of explanation
"""
spending = linspace(0, 1200, 21)
"""
Explanation: The following array represents the range of possible spending.
End of explanation
"""
def compute_factor(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=500, K=0.2, B=0.01)
"""
Explanation: compute_factor computes the reduction in beta for a given level of campaign spending.
M is chosen so the transition happens around \$500.
K is the maximum reduction in beta, 20%.
B is chosen by trial and error to yield a curve that seems feasible.
End of explanation
"""
percent_reduction = compute_factor(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate')
"""
Explanation: Here's what it looks like.
End of explanation
"""
def compute_factor(spending):
return logistic(spending, M=500, K=0.2, B=0.01)
"""
Explanation: The result is the following function, which
takes spending as a parameter and returns factor, which is the factor
by which beta is reduced:
End of explanation
"""
def add_hand_washing(system, spending):
factor = compute_factor(spending)
system.beta *= (1 - factor)
"""
Explanation: I use compute_factor to write add_hand_washing, which takes a
System object and a budget, and modifies system.beta to model the
effect of hand washing:
End of explanation
"""
def sweep_hand_washing(spending_array):
sweep = SweepSeries()
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[spending] = calc_total_infected(results, system)
return sweep
"""
Explanation: Now we can sweep a range of values for spending and use the simulation
to compute the effect:
End of explanation
"""
from numpy import linspace
spending_array = linspace(0, 1200, 20)
infected_sweep2 = sweep_hand_washing(spending_array)
"""
Explanation: Here's how we run it:
End of explanation
"""
infected_sweep2.plot()
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Total fraction infected',
title='Effect of hand washing on total infections')
"""
Explanation: The following figure shows the result.
End of explanation
"""
num_students = 90
budget = 1200
price_per_dose = 100
max_doses = int(budget / price_per_dose)
"""
Explanation: Below \$200, the campaign has little effect.
At \$800 it has a substantial effect, reducing total infections from more than 45% to about 20%.
Above \$800, the additional benefit is small.
Optimization
Let's put it all together. With a fixed budget of \$1200, we have to
decide how many doses of vaccine to buy and how much to spend on the
hand-washing campaign.
Here are the parameters:
End of explanation
"""
dose_array = arange(max_doses+1)
"""
Explanation: The fraction budget/price_per_dose might not be an integer. int is a
built-in function that converts numbers to integers, rounding down.
We'll sweep the range of possible doses:
End of explanation
"""
def sweep_doses(dose_array):
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[doses] = calc_total_infected(results, system)
return sweep
"""
Explanation: In this example we call arange with a single argument, max_doses+1;
it returns a NumPy array with the integers from 0 to max_doses, so both
endpoints of the dose range are included.
Then we run the simulation for each element of dose_array:
End of explanation
"""
infected_sweep3 = sweep_doses(dose_array)
infected_sweep3.plot()
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses')
"""
Explanation: For each number of doses, we compute the fraction of students we can
immunize, fraction and the remaining budget we can spend on the
campaign, spending. Then we run the simulation with those quantities
and store the number of infections.
The following figure shows the result.
End of explanation
"""
# Solution
"""There is no unique best answer to this question,
but one simple option is to model quarantine as an
effective reduction in gamma, on the assumption that
quarantine reduces the number of infectious contacts
per infected student.
Another option would be to add a fourth compartment
to the model to track the fraction of the population
in quarantine at each point in time. This approach
would be more complex, and it is not obvious that it
is substantially better.
The following function could be used, like
add_immunization and add_hand_washing, to adjust the
parameters in order to model various interventions.
In this example, `high` is the highest duration of
the infection period, with no quarantine. `low` is
the lowest duration, on the assumption that it takes
some time to identify infectious students.
`fraction` is the fraction of infected students who
are quarantined as soon as they are identified.
"""
def add_quarantine(system, fraction):
"""Model the effect of quarantine by adjusting gamma.
system: System object
fraction: fraction of students quarantined
"""
# `low` represents the number of days a student
# is infectious if quarantined.
# `high` is the number of days they are infectious
# if not quarantined
low = 1
high = 4
tr = high - fraction * (high-low)
system.gamma = 1 / tr
"""
Explanation: If we buy no doses of vaccine and spend the entire budget on the campaign, the fraction infected is around 19%. At 4 doses, we have \$800 left for the campaign, and this is the optimal point that minimizes the number of students who get sick.
As we increase the number of doses, we have to cut campaign spending,
which turns out to make things worse. But interestingly, when we get
above 10 doses, the effect of herd immunity starts to kick in, and the
number of sick students goes down again.
Summary
Exercises
Exercise: Suppose the price of the vaccine drops to $50 per dose. How does that affect the optimal allocation of the spending?
Exercise: Suppose we have the option to quarantine infected students. For example, a student who feels ill might be moved to an infirmary, or a private dorm room, until they are no longer infectious.
How might you incorporate the effect of quarantine in the SIR model?
End of explanation
"""
|
readywater/caltrain-predict | .ipynb_checkpoints/01sepEvents-checkpoint.ipynb | mit | # Import necessary libraries
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sys
import re
import random
import operator
from func import *
# inline plot
%matplotlib inline
#%%javascript
#IPython.OutputArea.auto_scroll_threshold = 9999;
#%load 'data/raw-twt2016-01-26-14/21/09.csv'
df = pd.read_csv("data/raw-twt2016-01-26-14-21-09.csv",sep='\t',error_bad_lines=False)
# df.head(5)
print len(df.index)
list(df.columns.values)
"""
Explanation: Dictionary
train_direction = 0 south, 1 north
train_type = 0 Local, 1 Limited, 2 Bullet
train_
End of explanation
"""
# Fill in blank hashtags
df = df.where((pd.notnull(df)), np.nan)
df["hashtags"].fillna('')
# Add some date/time things
df["created_at"] = pd.to_datetime(df["created_at"], errors='coerce')
df["day_of_week"] = df["created_at"].apply(lambda x: x.weekday())
df["day_of_month"] = df["created_at"].apply(lambda x: x.day)
df["month"] = df["created_at"].apply(lambda x: x.month)
df["isRushHour"] = df["created_at"].apply(lambda x: get_time_of_day(x))
# del tod_Dummy['shutdown']
# df['in_reply_to_screen_name'].fillna(-1)
# df['in_reply_to_status_id'].fillna(-1)
# df['in_reply_to_user_id'].fillna(-1)
# df['retweeted_status'].fillna(-1)
# df['retweeted'].fillna(-1)
df['retweet_count'].fillna(np.nan)
df['favorite_count'].fillna(np.nan)
df["hashtags"].fillna(np.nan)
df["hashtags"] = df["hashtags"].apply(lambda x: str(x)[1:-1])
df.loc[df["hashtags"]=='a',"hashtags"] = ''
#list(df.columns.values)
#Potentially remove, just cleaning for analysis sake
del df['Unnamed: 0']
# del df['truncated']
del df['user_mentions']
del df['urls']
del df['source']
del df['lang']
del df['place']
del df['favorited']
del df['media']
del df['user']
# More likely to remove
del df['in_reply_to_status_id']
del df['in_reply_to_user_id']
del df['retweeted']
del df['retweeted_status']
len(df)
"""
Explanation: Cleanin' the data
End of explanation
"""
# df['favorite_count'] = df['favorite_count'].astype(np.int64)
# df['retweet_count'] = df['retweet_count'].astype(np.int64)
# df['text'] = df['text'].astype(str)
# df['id'] = df['id'].astype(np.int64)
# df['day_of_week'] = df['day_of_week'].astype(np.int64)
# df['day_of_month'] = df['day_of_month'].astype(np.int64)
# df['month'] = df['month'].astype(np.int64)
# df['time_of_day'] = df['time_of_day'].astype(np.int64)
df.loc[df["hashtags"]=='on',"hashtags"] = np.nan
df.convert_objects(convert_numeric=True)
df.dtypes
len(df)
# Pull out potential trains from both hashtags and text
df["topic_train"] = df["text"].apply(lambda x: check_train_id(x))
df["topic_train"] = df["topic_train"].apply(lambda x: str(x)[1:-1])
df["topic_train"].fillna(np.nan)
df.head(2)
"""
Explanation: Let's start getting some more detailed data from the trips as well
End of explanation
"""
ret = []
def parse_train(t):
# Revised this function to work with categorical variables
# x should be a list with train codes eg 123
# {"id": "123", "type:" "bullet", direction: "south"}
try:
s = t['topic_train'].split(',')
except:
return t['topic_train']
if s[0] == '':
return np.nan
for x in s:
q = {}
x = str(x)
x = re.sub('[^0-9]','', x)
if len(x)<3: continue
# 1 = north, 0 = south
q["t_northbound"] = 1 if int(x[2]) in [1,3,5,7,9] else 0
q['t_limited'] = 0
q['t_bullet'] = 0
if x[0] == '1':
q['t_limited'] = 0
elif x[0] == '2':
q["t_limited"] = 1 # limited
elif x[0] == '3':
q["t_bullet"] = 1 # bullet
else:
q['t_limited'] = 0
ret.append({'tweet_id': t['id'],
'timestamp': t['created_at'],
'train_id': int(x),
't_northbound':q["t_northbound"],
't_limited': q["t_limited"],
't_bullet': q['t_bullet']})
return s
# Let's then filter those train topics into details
# Btw this is jank as fuck.
# red = df[['id','created_at','topic_train']]
red = df.apply(lambda x:parse_train(x),axis=1)
print "red return:",len(red)
print "ret return,",len(ret)
#red
tf = pd.DataFrame(ret)
tf.head(5)
#events = pd.DataFrame([pd.Series(x) for x in red.apply(parse_train)])
#events
#del new.iloc[0]
#new.fillna('')
#df.combine_first(new)
print df.loc[df['topic_train'] != '',['topic_train','text']]
len(tf)
# Merge on tweet ID,
df = df.merge(tf, left_on='id',right_on='tweet_id',how='right')
# Okay, let's try and get a yae or nae to delay and see our hit rate.
# Only events that have train mentioned
trains = df[df['train_id'] > 0]
# d
filename = "./data/formated_twts.csv"
df.to_csv(filename, sep=',', encoding='utf-8')
"""
Explanation: First, a word about the below code.
In the accompanying func.py there is a function called parse_train that returns a pandas.Series object. For some reason, when it's returned from a map or apply, it seems to get cast as a string. When applied to a list or a dataframe, this string gets turned into a single field in the row, OR divided into several rows, throwing the count off.
To get around this, I return the results of the parse_train function and then CAST it back to a series. This adds a weird 0 index, which I delete. I then fill in the plethora of NaNs and recombine it with the primary dataframe.
For context, previous iterations included
df['topic_train'].apply(lambda x:parse_train(x))
which would return a pd.Series object with str versions of the returned pd.Series from parse_train
End of explanation
"""
|
theideasmith/theideasmith.github.io | _notebooks/.ipynb_checkpoints/Asymptotic Convergence of Gradient Descent for Linear Regression Least Squares Optimization-checkpoint.ipynb | mit | from pylab import *
from numpy import random as random
random.seed(1)
N=1000.
w = array([14., 30.]);
x = zeros((2, int(N))).astype(float32)
x[0,:] = arange(N).astype(float32)
x[1,:] = 1
y = w.dot(x) + random.normal(size=int(N), scale=100.)
"""
Explanation: Supplementary Materials
This code accompanies the paper Asymptotic Convergence of Gradient Descent for Linear Regression Least Squares Optimization (Lipshitz, 2017)
Initialization
End of explanation
"""
yh = lambda xs, ws: \
ws.dot(xs)
grad = lambda ys, yhs, xs: \
(1./xs.shape[1])*sum((yhs-ys)*xs).astype(float32)
delta = lambda gs, a: \
a*gs
def regress(y, x, alpha, T=1000, wh=None, **kwargs):
    if wh is None:
        wh = random.normal(2, size=2)
whs = zeros((T, 2))
whs[0,:] = wh
for i in xrange(1,T):
wh+=delta(grad(y,yh(x,wh), x), alpha)
whs[i,:] = wh.copy()
return wh, whs
def regrSample(y, x, alpha, T=1000, N=10, **kwargs):
out = map(
lambda a: \
regress(y,x, alpha, T=T), xrange(N)
)
trains = array([o[1] for o in out])
wDist = array([o[0] for o in out])
return wDist, trains
def statsRegr(*args, **kwargs):
wDist, trains = regrSample(*args, **kwargs)
return np.mean(trains, axis=0), np.std(trains, axis=0)
"""
Explanation: Defining Regression
End of explanation
"""
def plotDynamicsForAlpha(alpha, axTitle, T=1000, N=10):
t = np.arange(T)
mu, sig = statsRegr(y, x, alpha, T=T, N=N)
plot(mu[:,0], 'r:', label='$w_1$')
plot(mu[:,1], 'b:', label='$w_2$')
fill_between(t, \
mu[:,0]+sig[:,0], \
mu[:,0]-sig[:,0], \
facecolor='red', alpha=0.5)
fill_between(t,\
mu[:,1]+sig[:,1], \
mu[:,1]-sig[:,1], \
facecolor='blue', alpha=0.5)
xlabel("t [Iterations]", fontdict={'fontsize':fs*.8})
yl = ylabel("$w_{i,t}$",fontdict={'fontsize':fs*.8})
yl.set_rotation('horizontal')
title(axTitle, fontdict={'fontsize':fs})
tight_layout()
return mu, sig
alphaData = [
("$a=2$", 2),
("$a=0$",0.),
("$a=-0.5N/x^2$",-0.5*N/linalg.norm(x[0,:])**2),
("$a=-N/x^2$", -N/linalg.norm(x[0,:])**2),
("$a=-1.3N/x^2$", -1.3*N/linalg.norm(x[0,:])**2),
("$a=-1.6N/x^2$", -1.6*N/linalg.norm(x[0,:])**2),
("$a=-1.99N/x^2$", -1.99*N/linalg.norm(x[0,:])**2),
("$a=-2N/x^2$", -2*N/linalg.norm(x[0,:])**2)
]
%matplotlib inline
from scipy.stats import norm
import seaborn as sns
fs = 15
figure(figsize=(10,3*len(alphaData)))
outs = []
for i, d in enumerate(alphaData):
k, v = d
# subplot(len(alphaData),1, i+1)
figure(figsize=(10,3))
outs.append(plotDynamicsForAlpha(v, k, T=150 ))
tight_layout()
# suptitle("Dynamical Learning Trajectories for Significant Alpha Values", y=1.08, fontdict={'fontsize':20});
for i, axtitle in enumerate(alphaData):
axtitle, axnum = axtitle
mu, sig = outs[i]
figure(figsize=(10,3))
if np.sum(np.isnan(mu)) > 0:
k=2
idx0=argwhere(~np.isnan(mu[:,0]))[-1]-1
idx1=argwhere(~np.isnan(sig[:,0]))[-1]-1
idx = min(idx0, idx1)
xmin = max(mu[idx,0]-k*sig[idx,0], mu[idx,0]-k*sig[idx,0])
xmax = min(mu[idx,0]+k*sig[idx,0], mu[idx,0]+k*sig[idx,0])
x_axis = np.linspace(xmin,xmax, num=300);
else:
xmin = max(mu[-1,0]-3*sig[-1,0], mu[-1,0]-3*sig[-1,0])
xmax = min(mu[-1,0]+3*sig[-1,0], mu[-1,0]+3*sig[-1,0])
x_axis = np.linspace(xmin,xmax, num=300);
plt.plot(x_axis, norm.pdf(x_axis,mu[-1,0],sig[-1,0]),'r:');
plt.plot(x_axis, norm.pdf(x_axis,mu[-1,1],sig[-1,1]), 'b:');
xlim(xmin = xmin, xmax=xmax)
p, v = yticks()
plt.yticks(p,map(lambda w: round(w, 2),linspace(0, 1, num=len(p))))
title(axtitle)
tight_layout()
x.shape
figure(figsize=(10,10))
subplot(2,1,1)
title("Closed From Expression", fontdict={'fontsize':10})
T = 30
w0 = random.normal(2, size=2)
t = np.arange(T)
a = -2.1*N/linalg.norm(x[0,:])**2
beta2 = (1/N)*a*x[0,:].dot(x[0,:])
beta1 = -(1/N)*a*x[0,:].dot(y)
ws = w0[0]*(beta2+1)**t - beta1*(1-(beta2+1)**t)/beta2
# ws = w0[0]*(-1)**t + ((-1)**t -1)*x[0,:].dot(y)/linalg.norm(x[0,:])**2
plot(ws)
subplot(2,1,2)
title("Simulation", fontdict={'fontsize':10})
wh = w0
whs = zeros((T, 2))
whs[0,:] = wh
for i in xrange(1,T):
wh+=delta(grad(y,yh(x,wh), x), a)
whs[i,:] = wh.copy()
plot(whs[:,0])
suptitle(("Asymptotic Behavior "
"of Closed form and Simulated Learning: $a = -2.1N/x^2$"), fontdict={"fontsize":20})
"""
Explanation: Running Regression above and Below the Upper Bound on $\alpha$
The theoretically derived bounds on $\alpha$ are $$\alpha \in \left( -2\frac{N}{|\mathbf{x}|^2}, 0 \right]$$
Other $\alpha$ values diverge
End of explanation
"""
t = arange(0,10)
ws = (0**t)*(w0[0]+x[0,:].dot(y)/linalg.norm(x[0,:])**2) + x[0,:].dot(y)/linalg.norm(x[0,:])**2
figure()
ax = subplot(111)
ax.set_title("alpha = sup A")
ax.plot(ws)
t = arange(0,10)
ws = ((-1)**t)*w0[0] - (x[0,:].dot(y)/linalg.norm(x[0,:])**2) + (-2)**t*x[0,:].dot(y)/linalg.norm(x[0,:])**2
figure()
ax = subplot(111)
ax.set_title("alpha = sup A")
ax.plot(ws)
"""
Explanation: $\alpha = \sup A$
End of explanation
"""
|
TuKo/brainiak | examples/reprsimil/group_brsa_example.ipynb | apache-2.0 | %matplotlib inline
import scipy.stats
import scipy.spatial.distance as spdist
import numpy as np
from brainiak.reprsimil.brsa import GBRSA
import brainiak.utils.utils as utils
import matplotlib.pyplot as plt
import matplotlib as mpl
import logging
np.random.seed(10)
import copy
"""
Explanation: This demo shows how to use the Group Bayesian Representational Similarity Analysis (GBRSA) method in brainiak with a simulated dataset.
Note that although the name has "group", it is also suitable for analyzing data of a single participant
When you apply this tool to real fMRI data, it is required that the data of each participant to be motion corrected. If multiple runs are acquired for each participant, they should be spatially aligned. You might want to do slice-timing correction.
You will need to have the mask of the Region of Interest (ROI) ready (defined anatomically or by independent tasks, which is up to you). nilearn provides tools to extract signal from mask. You can refer to http://nilearn.github.io/manipulating_images/manipulating_images.html
When analyzing an ROI of hundreds to thousands of voxels, GBRSA is expected to be faster than the non-group version BRSA (refer to the other example). The reason is that GBRSA marginalizes the SNR and AR(1) coefficient parameters of each voxel by numerical integration, thus eliminating hundreds to thousands of free parameters and reducing computation. However, if you are doing searchlight analysis with tens of voxels in each searchlight, it is possible that BRSA is faster.
GBRSA and BRSA might not return exactly the same result. Which one is more accurate might depend on the parameter choice, as well as the property of data.
Load some packages which we will use in this demo.
If you see an error related to loading any package, you can install that package. For example, if you use Anaconda, you can use "conda install matplotlib" to install matplotlib.
End of explanation
"""
logging.basicConfig(
level=logging.DEBUG,
filename='gbrsa_example.log',
format='%(relativeCreated)6d %(threadName)s %(message)s')
"""
Explanation: You might want to keep a log of the output.
End of explanation
"""
n_subj = 5
n_run = np.random.random_integers(2, 4, n_subj)
ROI_edge = np.random.random_integers(20, 40, n_subj)
# We simulate "ROI" of a square shape
design = [None] * n_subj
for subj in range(n_subj):
design[subj] = utils.ReadDesign(fname="example_design.1D")
design[subj].n_TR = design[subj].n_TR * n_run[subj]
design[subj].design_task = np.tile(design[subj].design_task[:,:-1],
[n_run[subj], 1])
# The last "condition" in design matrix
# codes for trials subjects made an error.
# We ignore it here.
n_C = np.size(design[0].design_task, axis=1)
# The total number of conditions.
n_V = [int(roi_e**2) for roi_e in ROI_edge]
# The total number of simulated voxels
n_T = [d.n_TR for d in design]
# The total number of time points,
# after concatenating all fMRI runs
fig = plt.figure(num=None, figsize=(12, 3),
dpi=150, facecolor='w', edgecolor='k')
plt.plot(design[0].design_task)
plt.ylim([-0.2, 0.4])
plt.title('hypothetic fMRI response time courses '
'of all conditions for one subject\n'
'(design matrix)')
plt.xlabel('time')
plt.show()
"""
Explanation: We want to simulate some data in which each voxel responds to different task conditions differently, but following a common covariance structure
Load an example design matrix.
The user should prepare their design matrix with their favorite software, such as 3dDeconvolve in AFNI, or SPM or FSL.
The design matrix reflects your belief of how fMRI signal should respond to a task (if a voxel does respond).
The common assumption is that a neural event that you are interested in will elicit a slow hemodynamic response in some voxels. The response peaks around 4-6 seconds after the event onset and dies down more than 12 seconds after the event. Therefore, typically you convolve a time series A, composed of delta (stem) functions reflecting the time of each neural event belonging to the same category (e.g. all trials in which a participant sees a face), with a hemodynamic response function B, to form the hypothetic response of any voxel to such type of neural event.
For each type of event, such a convoluted time course can be generated. These time courses, put together, are called design matrix, reflecting what we believe a temporal signal would look like, if it exists in any voxel.
Our goal is to figure out how the (spatial) response patterns of a population of voxels (in a Region of Interest, ROI) are similar or dissimilar across different types of tasks (e.g., watching face vs. house, watching different categories of animals, different conditions of a cognitive task). So we need the design matrix in order to estimate the similarity matrix we are interested in.
We can use the utility called ReadDesign from brainiak.utils to read a design matrix generated by AFNI. For a design matrix saved as a Matlab data file by SPM or another toolbox, you can use scipy.io.loadmat('YOURFILENAME') and extract the design matrix from the dictionary returned. Basically, the Bayesian RSA in this toolkit just needs a numpy array of size {time points} * {conditions}.
You can also generate a design matrix using the function gen_design, which is in brainiak.utils. It takes in (names of) event timing files in AFNI or FSL format (denoting onsets, duration, and weight for each event belonging to the same condition) and outputs the design matrix as a numpy array.
In typical fMRI analysis, some nuisance regressors such as head motion, baseline time series and slow drift are also entered into the regression. In using our method, you should not include such regressors in the design matrix, because the spatial spread of such nuisance regressors might be quite different from the spatial spread of task-related signal. Including such nuisance regressors in the design matrix might influence the pseudo-SNR map, which in turn influences the estimation of the shared covariance matrix. But you may include the motion time course in the nuisance parameter.
We concatenate the design matrix 2 to 4 times, mimicking 2 to 4 runs with identical timing.
Note that different subjects do not have to have the same number of voxels or time points. The timing of their task conditions can also differ. The simulation below reflects this.
End of explanation
"""
noise_bot = 0.5
noise_top = 1.5
noise_level = [None] * n_subj
noise = [None] * n_subj
rho1 = [None] * n_subj
for subj in range(n_subj):
noise_level[subj] = np.random.rand(n_V[subj]) * \
(noise_top - noise_bot) + noise_bot
# The standard deviation of the noise is in the range of [noise_bot, noise_top]
# In fact, we simulate autocorrelated noise with AR(1) model. So the noise_level reflects
# the independent additive noise at each time point (the "fresh" noise)
# AR(1) coefficient
rho1_top = 0.8
rho1_bot = -0.2
for subj in range(n_subj):
rho1[subj] = np.random.rand(n_V[subj]) \
* (rho1_top - rho1_bot) + rho1_bot
noise_smooth_width = 10.0
dist2 = [None] * n_subj
for subj in range(n_subj):
coords = np.mgrid[0:ROI_edge[subj], 0:ROI_edge[subj], 0:1]
coords_flat = np.reshape(coords,[3, n_V[subj]]).T
dist2[subj] = spdist.squareform(spdist.pdist(coords_flat, 'sqeuclidean'))
# generating noise
K_noise = noise_level[subj][:, np.newaxis] \
* (np.exp(-dist2[subj] / noise_smooth_width**2 / 2.0) \
+ np.eye(n_V[subj]) * 0.1) * noise_level[subj]
# We make spatially correlated noise by generating
# noise at each time point from a Gaussian Process
# defined over the coordinates.
L_noise = np.linalg.cholesky(K_noise)
noise[subj] = np.zeros([n_T[subj], n_V[subj]])
noise[subj][0, :] = np.dot(L_noise, np.random.randn(n_V[subj]))\
/ np.sqrt(1 - rho1[subj]**2)
for i_t in range(1, n_T[subj]):
noise[subj][i_t, :] = noise[subj][i_t - 1, :] * rho1[subj] \
+ np.dot(L_noise,np.random.randn(n_V[subj]))
# For each voxel, the noise follows AR(1) process:
# fresh noise plus a dampened version of noise at
# the previous time point.
# In this simulation, we also introduced spatial smoothness resembling a Gaussian Process.
# Notice that we simulated in this way only to introduce spatial noise correlation.
# This does not represent the assumption of the form of spatial noise correlation in the model.
# Instead, the model is designed to capture structured noise correlation manifested
# as a few spatial maps each modulated by a time course, which appears as spatial noise correlation.
plt.pcolor(K_noise)
plt.colorbar()
plt.xlim([0, ROI_edge[-1] * ROI_edge[-1]])
plt.ylim([0, ROI_edge[-1] * ROI_edge[-1]])
plt.title('Spatial covariance matrix of noise\n of the last participant')
plt.show()
fig = plt.figure(num=None, figsize=(12, 2), dpi=150,
facecolor='w', edgecolor='k')
plt.plot(noise[-1][:, 0])
plt.title('noise in an example voxel')
plt.show()
"""
Explanation: simulate data: noise + signal
First, we start with noise, which is Gaussian Process in space and AR(1) in time
End of explanation
"""
# ideal covariance matrix
ideal_cov = np.zeros([n_C, n_C])
ideal_cov = np.eye(n_C) * 0.6
ideal_cov[8:12, 8:12] = 0.6
for cond in range(8, 12):
ideal_cov[cond,cond] = 1
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(ideal_cov)
plt.colorbar()
plt.xlim([0, 16])
plt.ylim([0, 16])
ax = plt.gca()
ax.set_aspect(1)
plt.title('ideal covariance matrix')
plt.show()
std_diag = np.diag(ideal_cov)**0.5
ideal_corr = ideal_cov / std_diag / std_diag[:, None]
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(ideal_corr)
plt.colorbar()
plt.xlim([0, 16])
plt.ylim([0, 16])
ax = plt.gca()
ax.set_aspect(1)
plt.title('ideal correlation matrix')
plt.show()
"""
Explanation: Then, we simulate signals, assuming the magnitude of response to each condition follows a common covariance matrix.
Note that Group Bayesian Representational Similarity Analysis (GBRSA) does not impose a Gaussian Process prior on log(SNR) as BRSA does, for two reasons: (1) computational speed, and (2) SNR is numerically marginalized for each voxel in GBRSA
Keep in mind the pattern of the ideal covariance / correlation matrices below, and see how well BRSA can recover them.
End of explanation
"""
L_full = np.linalg.cholesky(ideal_cov)
# generating signal
snr_level = np.random.rand(n_subj) * 0.6 + 0.4
# Notice that, strictly speaking, this is not SNR.
# The magnitude of signal depends not only on beta but also on x.
# (noise_level*snr_level)**2 is the factor multiplied
# with ideal_cov to form the covariance matrix from which
# the response amplitudes (beta) of a voxel are drawn from.
tau = np.random.rand(n_subj) * 0.8 + 0.2
# magnitude of Gaussian Process from which the log(SNR) is drawn
smooth_width = np.random.rand(n_subj) * 5.0 + 3.0
# spatial length scale of the Gaussian Process, unit: voxel
inten_kernel = np.random.rand(n_subj) * 4.0 + 2.0
# intensity length scale of the Gaussian Process
# Slightly counter-intuitively, if this parameter is very large,
# say, much larger than the range of intensities of the voxels,
# then the smoothness depends only weakly on the intensity.
Y = [None] * n_subj
snr = [None] * n_subj
signal = [None] * n_subj
betas_simulated = [None] * n_subj
inten = [None] * n_subj
for subj in range(n_subj):
inten[subj] = np.random.rand(n_V[subj]) * 20.0
    # For simplicity, we just assume that the intensities
    # of all voxels are uniformly distributed between 0 and 20
    # Parameters of the Gaussian process used to generate pseudo-SNR.
    # For the curious user: you can also try the commands below
    # to see what an example SNR map might look like if the intensity
    # grows linearly in one spatial direction
inten_tile = np.tile(inten[subj], [n_V[subj], 1])
inten_diff2 = (inten_tile - inten_tile.T)**2
K = np.exp(-dist2[subj] / smooth_width[subj]**2 / 2.0
- inten_diff2 / inten_kernel[subj]**2 / 2.0) * tau[subj]**2 \
+ np.eye(n_V[subj]) * tau[subj]**2 * 0.001
# A tiny amount is added to the diagonal of
# the GP covariance matrix to make sure it can be inverted
L = np.linalg.cholesky(K)
snr[subj] = np.exp(np.dot(L, np.random.randn(n_V[subj]))) * snr_level[subj]
sqrt_v = noise_level[subj] * snr[subj]
betas_simulated[subj] = np.dot(L_full, np.random.randn(n_C, n_V[subj])) * sqrt_v
signal[subj] = np.dot(design[subj].design_task, betas_simulated[subj])
Y[subj] = signal[subj] + noise[subj] + inten[subj]
# The data to be fed to the program.
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(np.reshape(snr[0], [ROI_edge[0], ROI_edge[0]]))
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('pseudo-SNR in a square "ROI" \nof participant 0')
plt.show()
snr_all = np.concatenate(snr)
idx = np.argmin(np.abs(snr_all - np.median(snr_all)))
median_subj = np.min(np.where(idx - np.cumsum(n_V) < 0))
idx = idx - np.cumsum(np.concatenate([[0], n_V]))[median_subj]
# choose a voxel of medium level SNR.
fig = plt.figure(num=None, figsize=(12, 4), dpi=150,
facecolor='w', edgecolor='k')
noise_plot, = plt.plot(noise[median_subj][:,idx],'g')
signal_plot, = plt.plot(signal[median_subj][:,idx],'b')
plt.legend([noise_plot, signal_plot], ['noise', 'signal'])
plt.title('simulated data in an example voxel'
' with pseudo-SNR of {} in participant {}'.format(snr[median_subj][idx], median_subj))
plt.xlabel('time')
plt.show()
fig = plt.figure(num=None, figsize=(12, 4), dpi=150,
facecolor='w', edgecolor='k')
data_plot, = plt.plot(Y[median_subj][:,idx],'r')
plt.legend([data_plot], ['observed data of the voxel'])
plt.xlabel('time')
plt.show()
idx = np.argmin(np.abs(snr_all - np.max(snr_all)))
highest_subj = np.min(np.where(idx - np.cumsum(n_V) < 0))
idx = idx - np.cumsum(np.concatenate([[0], n_V]))[highest_subj]
# display the voxel of the highest level SNR.
fig = plt.figure(num=None, figsize=(12, 4), dpi=150,
facecolor='w', edgecolor='k')
noise_plot, = plt.plot(noise[highest_subj][:,idx],'g')
signal_plot, = plt.plot(signal[highest_subj][:,idx],'b')
plt.legend([noise_plot, signal_plot], ['noise', 'signal'])
plt.title('simulated data in the voxel with the highest'
' pseudo-SNR of {} in subject {}'.format(snr[highest_subj][idx], highest_subj))
plt.xlabel('time')
plt.show()
fig = plt.figure(num=None, figsize=(12, 4), dpi=150,
facecolor='w', edgecolor='k')
data_plot, = plt.plot(Y[highest_subj][:,idx],'r')
plt.legend([data_plot], ['observed data of the voxel'])
plt.xlabel('time')
plt.show()
"""
Explanation: In the following, pseudo-SNR is generated from a Gaussian Process defined on a "square" ROI, just for simplicity of code
Notice that GBRSA does not make assumption of smoothness of SNR, so it won't utilize this fact.
End of explanation
"""
scan_onsets = [np.int32(np.linspace(0, design[i].n_TR,num=n_run[i] + 1)[: -1]) for i in range(n_subj)]
print('scan onsets: {}'.format(scan_onsets))
"""
Explanation: The reason that the pseudo-SNRs in the example voxels are not too small while the signal looks much weaker is that we happen to have low amplitudes in our design matrix. The true SNR depends on both the amplitudes in the design matrix and the pseudo-SNR. Therefore, be aware that pseudo-SNR does not directly reflect how much signal the data have; rather, it is a map indicating the relative strength of signal in different voxels.
When you have multiple runs, the noise won't be correlated between runs. Therefore, you should tell BRSA when is the onset of each scan.
Note that the data (variable Y above) you feed to BRSA is the concatenation of data from all runs along the time dimension, as a 2-D matrix of time x space
End of explanation
"""
gbrsa = GBRSA()
# Initiate an instance
gbrsa.fit(X=Y, design=[d.design_task for d in design],scan_onsets=scan_onsets)
# The data to fit should be given to the argument X.
# Design matrix goes to design. And so on.
"""
Explanation: Fit Group Bayesian RSA to our simulated data
The nuisance regressors in typical fMRI analysis (such as head motion signal) are replaced by principal components estimated from residuals after subtracting task-related response. n_nureg tells the model how many principal components to keep from the residual as nuisance regressors, in order to account for spatial correlation in noise. When it is set to None and auto_nuisance=True, this number will be estimated automatically by an algorithm of Gavish & Dohono 2014. If you prefer not using this approach based on principal components of residuals, you can set auto_nuisance=False, and optionally provide your own nuisance regressors as a list (one numpy array per subject) as nuisance argument to GBRSA.fit(). In practice, we find that the result is much better with auto_nuisance=True.
The idea of modeling the spatial noise correlation with the principal component decomposition of the residual noise is similar to that in GLMdenoise (http://kendrickkay.net/GLMdenoise/).
One can imagine that the choice of the number of principal components used as nuisance regressors can influence the result. If you choose just 1 or 2, perhaps only the global drift would be captured, while including too many nuisance regressors would slow down fitting and risk overfitting. Among all the algorithms we have tested on simulated data, the Gavish & Donoho algorithm appears the most robust, and its estimate is closest to the true simulated number. It does, however, tend to under-estimate the number of components, which is one limitation of the (G)BRSA module.
End of explanation
"""
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(gbrsa.C_, vmin=-0.1, vmax=1)
plt.xlim([0, 16])
plt.ylim([0, 16])
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('Estimated correlation structure\n shared between voxels\n'
'This constitutes the output of Bayesian RSA\n')
plt.show()
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(gbrsa.U_)
plt.xlim([0, 16])
plt.ylim([0, 16])
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('Estimated covariance structure\n shared between voxels\n')
plt.show()
"""
Explanation: We can have a look at the estimated similarity in matrix gbrsa.C_.
We can also compare the ideal covariance above with the one recovered, gbrsa.U_
End of explanation
"""
sum_point_corr = np.zeros((n_C, n_C))
sum_point_cov = np.zeros((n_C, n_C))
betas_point = [None] * n_subj
for subj in range(n_subj):
regressor = np.insert(design[subj].design_task,
0, 1, axis=1)
betas_point[subj] = np.linalg.lstsq(regressor, Y[subj])[0]
point_corr = np.corrcoef(betas_point[subj][1:, :])
point_cov = np.cov(betas_point[subj][1:, :])
sum_point_corr += point_corr
sum_point_cov += point_cov
if subj == 0:
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(point_corr, vmin=-0.1, vmax=1)
plt.xlim([0, 16])
plt.ylim([0, 16])
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('Correlation structure estimated\n'
'based on point estimates of betas\n'
'for subject {}'.format(subj))
plt.show()
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(point_cov)
plt.xlim([0, 16])
plt.ylim([0, 16])
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('Covariance structure of\n'
'point estimates of betas\n'
'for subject {}'.format(subj))
plt.show()
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(sum_point_corr / n_subj, vmin=-0.1, vmax=1)
plt.xlim([0, 16])
plt.ylim([0, 16])
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('Correlation structure estimated\n'
'based on point estimates of betas\n'
'averaged over subjects')
plt.show()
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(sum_point_cov / n_subj)
plt.xlim([0, 16])
plt.ylim([0, 16])
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('Covariance structure of\n'
'point estimates of betas\n'
'averaged over subjects')
plt.show()
"""
Explanation: In contrast, we can have a look at the similarity matrix based on Pearson correlation between point estimates of the betas of different conditions.
This is what vanilla RSA might give
End of explanation
"""
subj = highest_subj
fig, axes = plt.subplots(nrows=1, ncols=n_subj, figsize=(25, 5))
vmax = np.max([np.max(gbrsa.nSNR_[s]) for s in range(n_subj)])
for s in range(n_subj):
im = axes[s].pcolor(np.reshape(gbrsa.nSNR_[s], [ROI_edge[s], ROI_edge[s]]),
vmin=0,vmax=vmax)
axes[s].set_aspect(1)
fig.colorbar(im, ax=axes.ravel().tolist(), shrink=0.75)
plt.suptitle('estimated pseudo-SNR',fontsize="xx-large" )
plt.show()
fig, axes = plt.subplots(nrows=1, ncols=n_subj, figsize=(25, 5))
vmax = np.max([np.max(snr[s]) for s in range(n_subj)])
for s in range(n_subj):
im = axes[s].pcolor(np.reshape(snr[s], [ROI_edge[s], ROI_edge[s]]),
vmin=0,vmax=vmax)
axes[s].set_aspect(1)
fig.colorbar(im, ax=axes.ravel().tolist(), shrink=0.75)
plt.suptitle('simulated pseudo-SNR',fontsize="xx-large" )
plt.show()
RMS_GBRSA = np.mean((gbrsa.C_ - ideal_corr)**2)**0.5
RMS_RSA = np.mean((point_corr - ideal_corr)**2)**0.5
print('RMS error of group Bayesian RSA: {}'.format(RMS_GBRSA))
print('RMS error of standard RSA: {}'.format(RMS_RSA))
"""
Explanation: We can make a comparison between the estimated SNR map and the true SNR map
End of explanation
"""
fig, axes = plt.subplots(nrows=1, ncols=n_subj, figsize=(25, 5))
for s in range(n_subj):
im = axes[s].scatter(np.log(snr[s]) - np.mean(np.log(snr[s])),
np.log(gbrsa.nSNR_[s]))
if s == 0:
axes[s].set_ylabel('recovered log pseudo-SNR',fontsize='xx-large')
if s == int(n_subj/2):
axes[s].set_xlabel('true normalized log SNR',fontsize='xx-large')
axes[s].set_aspect(1)
plt.suptitle('estimated vs. simulated normalized log SNR',fontsize="xx-large" )
plt.show()
fig, axes = plt.subplots(nrows=1, ncols=n_subj, figsize=(25, 5))
for s in range(n_subj):
im = axes[s].scatter(snr[s], gbrsa.nSNR_[s])
if s == 0:
axes[s].set_ylabel('recovered pseudo-SNR',fontsize='xx-large')
if s == int(n_subj/2):
axes[s].set_xlabel('true normalized SNR',fontsize='xx-large')
axes[s].set_aspect(1)
plt.suptitle('estimated vs. simulated SNR',fontsize="xx-large" )
plt.show()
"""
Explanation: We can also look at how SNRs are recovered.
End of explanation
"""
fig, axes = plt.subplots(nrows=1, ncols=n_subj, figsize=(25, 5))
for s in range(n_subj):
im = axes[s].scatter(betas_simulated[s] , gbrsa.beta_[s])
if s == 0:
axes[s].set_ylabel('recovered betas by GBRSA',fontsize='xx-large')
if s == int(n_subj/2):
axes[s].set_xlabel('true betas',fontsize='xx-large')
axes[s].set_aspect(1)
plt.suptitle('estimated vs. simulated betas, \nby GBRSA',fontsize="xx-large" )
plt.show()
fig, axes = plt.subplots(nrows=1, ncols=n_subj, figsize=(25, 5))
for s in range(n_subj):
im = axes[s].scatter(betas_simulated[s] , betas_point[s][1:, :])
if s == 0:
axes[s].set_ylabel('recovered betas by simple regression',fontsize='xx-large')
if s == int(n_subj/2):
axes[s].set_xlabel('true betas',fontsize='xx-large')
axes[s].set_aspect(1)
plt.suptitle('estimated vs. simulated betas, \nby simple regression',fontsize="xx-large" )
plt.show()
"""
Explanation: We can also examine the relation between recovered betas and true betas
End of explanation
"""
noise_new = [None] * n_subj
Y_new = [None] * n_subj
for subj in range(n_subj):
# generating noise
K_noise = noise_level[subj][:, np.newaxis] \
* (np.exp(-dist2[subj] / noise_smooth_width**2 / 2.0) \
+ np.eye(n_V[subj]) * 0.1) * noise_level[subj]
# We make spatially correlated noise by generating
# noise at each time point from a Gaussian Process
# defined over the coordinates.
L_noise = np.linalg.cholesky(K_noise)
noise_new[subj] = np.zeros([n_T[subj], n_V[subj]])
noise_new[subj][0, :] = np.dot(L_noise, np.random.randn(n_V[subj]))\
/ np.sqrt(1 - rho1[subj]**2)
for i_t in range(1, n_T[subj]):
noise_new[subj][i_t, :] = noise_new[subj][i_t - 1, :] * rho1[subj] \
+ np.dot(L_noise,np.random.randn(n_V[subj]))
Y_new[subj] = signal[subj] + noise_new[subj] + inten[subj]
ts, ts0 = gbrsa.transform(Y_new,scan_onsets=scan_onsets)
# ts is the estimated task-related time course, with each column corresponding to the task condition of the same
# column in design matrix.
# ts0 is the estimated time courses that have the same spatial spread as those in the training data (X0).
# It is possible some task related signal is still in X0 or ts0, but not captured by the design matrix.
fig, axes = plt.subplots(nrows=1, ncols=n_subj, figsize=(25, 5))
for s in range(n_subj):
recovered_plot, = axes[s].plot(ts[s][:200, 8], 'b')
design_plot, = axes[s].plot(design[s].design_task[:200, 8], 'g')
if s == int(n_subj/2):
axes[s].set_xlabel('time',fontsize='xx-large')
fig.legend([design_plot, recovered_plot],
['design matrix for one condition', 'recovered time course for the condition'],
fontsize='xx-large')
plt.show()
# We did not plot the whole time series for the purpose of seeing closely how much the two
# time series overlap
fig, axes = plt.subplots(nrows=1, ncols=n_subj, figsize=(25, 5))
for s in range(n_subj):
c = np.corrcoef(design[s].design_task.T, ts[s].T)
im = axes[s].pcolor(c[0:16, 16:],vmin=-0.5,vmax=1)
axes[s].set_aspect(1)
if s == int(n_subj/2):
axes[s].set_xlabel('recovered time course',fontsize='xx-large')
if s == 0:
axes[s].set_ylabel('true design matrix',fontsize='xx-large')
fig.suptitle('correlation between true design matrix \nand the recovered task-related activity')
fig.colorbar(im, ax=axes.ravel().tolist(), shrink=0.75)
plt.show()
print('average SNR level:', snr_level)
print('Apparently how much the recovered time course resembles the true design matrix depends on SNR')
"""
Explanation: "Decoding" from new data
Now we generate a new data set, assuming signal is the same but noise is regenerated. We want to use the transform() function of gbrsa to estimate the "design matrix" in this new dataset.
We keep the signal the same as in training data, but generate new noise.
Note that we did this purely for simplicity of simulation. It is totally fine and encouraged for the event timing to be different in your training and testing data. You just need to capture them in your design matrix
End of explanation
"""
width = 0.35
[score, score_null] = gbrsa.score(X=Y_new, design=[d.design_task for d in design], scan_onsets=scan_onsets)
plt.bar(np.arange(n_subj),np.asarray(score)-np.asarray(score_null), width=width)
plt.ylim(np.min([np.asarray(score)-np.asarray(score_null)])-100, np.max([np.asarray(score)-np.asarray(score_null)])+100)
plt.ylabel('cross-validated log likelihood')
plt.xlabel('participants')
plt.title('Difference between cross-validated log likelihoods\n of full model and null model\non new data containing signal')
plt.show()
Y_nosignal = [noise_new[s] + inten[s] for s in range(n_subj)]
[score_noise, score_null_noise] = gbrsa.score(X=Y_nosignal, design=[d.design_task for d in design], scan_onsets=scan_onsets)
plt.bar(np.arange(n_subj),np.asarray(score_noise)-np.asarray(score_null_noise), width=width)
plt.ylim(np.min([np.asarray(score_noise)-np.asarray(score_null_noise)])-100,
np.max([np.asarray(score_noise)-np.asarray(score_null_noise)])+100)
plt.ylabel('cross-validated log likelihood')
plt.xlabel('participants')
plt.title('Difference between cross-validated log likelihoods\n of full model and null model\non pure noise')
plt.show()
"""
Explanation: Model selection by cross-validataion:
Similar to BRSA, you can compare different models by cross-validating the parameters of one model learnt from some training data
on some testing data. GBRSA provides a score() function, which returns you a pair of cross-validated log likelihood
for testing data. The first returned item is a numpy array of the cross-validated log likelihood of the model you have specified, for the testing data of all the subjects.
The second is a numpy array of the same quantity for a null model, which assumes everything else is the same except that there is no task-related activity.
Notice that comparing the score of your model of interest against its corresponding null model is not the only way to compare models. You might also want to compare against a model using the same design matrix but a different rank (especially rank 1, which means all task conditions share the same response pattern, differing only in their magnitude).
In general, in the context of GBRSA, a model means the timing of each event and the way these events are grouped, together with other trivial parameters such as the rank of the covariance matrix and the number of nuisance regressors. All these parameters can influence model performance.
In the future, we will provide an interface to evaluate the predictive power of different predefined similarity or covariance matrices for the data.
End of explanation
"""
gbrsa_noise = GBRSA()
gbrsa_noise.fit(X=[noise[s] + inten[s] for s in range(n_subj)],
design=[d.design_task for d in design],scan_onsets=scan_onsets)
Y_nosignal = [noise_new[s] + inten[s] for s in range(n_subj)]
[score_noise, score_null_noise] = gbrsa_noise.score(X=Y_nosignal,
design=[d.design_task for d in design], scan_onsets=scan_onsets)
plt.bar(np.arange(n_subj),np.asarray(score_noise)-np.asarray(score_null_noise), width=width)
plt.ylim(np.min([np.asarray(score_noise)-np.asarray(score_null_noise)])-100,
np.max([np.asarray(score_noise)-np.asarray(score_null_noise)])+100)
plt.ylabel('cross-validated log likelihood')
plt.xlabel('participants')
plt.title('Difference between cross-validated log likelihoods\n of full model and null model\ntrained on pure noise')
plt.show()
"""
Explanation: Full model performs better on testing data that has the same property of signal and noise with training data.
Below, we fit the model to data containing only noise, and test how it performs on new data that also contains only noise.
End of explanation
"""
plt.pcolor(gbrsa_noise.U_)
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('covariance matrix of task conditions estimated from pure noise')
"""
Explanation: We can see that the difference is smaller, but the full model generally performs slightly worse because of overfitting. This is expected.
So, after fitting a model to your data, you should also check cross-validated log likelihood on separate runs from the same group of participants, and make sure your model is at least better than a null model before you trust your similarity matrix.
Another diagnostic of a poor model fit is very small diagonal values in the shared covariance structure U_
Shown below:
End of explanation
"""
|
AAbercrombie0492/gdelt_distributed_architecture | notebooks/GDELT_Architecture.ipynb | mit | from IPython.display import YouTubeVideo, HTML
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/GpCarC_I3Ao?list=PLlRVXVT7h9_gCGCOl_bNYHA7FXbSOIVbs" frameborder="0" allowfullscreen></iframe>')
"""
Explanation: Data Engineering Final Project: GDELT
Global Data on Events, Location, and Tone
"The GDELT Project is a realtime network diagram and database of global human society for open research. GDELT monitors the world's news media from nearly every corner of every country
in print, broadcast, and web formats, in over 100 languages,
every moment of every day."
Events:
The GDELT Event Database records over 300 categories of physical activities around the world, from riots and protests to peace appeals and diplomatic exchanges, georeferenced to the city or mountaintop, across the entire planet dating back to January 1, 1979 and updated every 15 minutes.
Essentially it takes a sentence like "The United States criticized Russia yesterday for deploying its troops in Crimea, in which a recent clash with its soldiers left 10 civilians injured" and transforms this blurb of unstructured text into three structured database entries, recording US CRITICIZES RUSSIA, RUSSIA TROOP-DEPLOY UKRAINE (CRIMEA), and RUSSIA MATERIAL-CONFLICT CIVILIANS (CRIMEA).
Nearly 60 attributes are captured for each event, including the approximate location of the action and those involved. This translates the textual descriptions of world events captured in the news media into codified entries in a grand "global spreadsheet."
Global Knowledge Graph:
Much of the true insight captured in the world's news media lies not in what it says, but the context of how it says it. The GDELT Global Knowledge Graph (GKG) compiles a list of every person, organization, company, location and several million themes and thousands of emotions from every news report, using some of the most sophisticated named entity and geocoding algorithms in existance, designed specifically for the noisy and ungrammatical world that is the world's news media.
The resulting network diagram constructs a graph over the entire world, encoding not only what's happening, but what its context is, who's involved, and how the world is feeling about it, updated every single day.
End of explanation
"""
!ls ../src/data
# Load the "autoreload" extension
%load_ext autoreload
# always reload modules marked with "%aimport"
%autoreload 1
# Import Dependencies
import os
import requests
import sys
import pandas as pd
# add the 'src' directory as one where we can import modules
PROJ_ROOT = os.pardir
src_dir = os.path.join(PROJ_ROOT, 'src')
sys.path.append(src_dir)
# import my ingestion method from the source code
%aimport data.ingest_data
import data.ingest_data as ingest
# Get a a list of all the csv files available for download
url_list = ingest.get_list_of_urls()
# File sample
print('Number of zipped CSV files: ',len(url_list))
print('\n10 most recent files:\n\n',url_list[-10:])
# This is the file I will download every 15 minutes
gdelt_last_15 = requests.get('http://data.gdeltproject.org/gdeltv2/lastupdate.txt')
last_15_lines = gdelt_last_15.text.split('\n')
last_15_lines = [i.split() for i in last_15_lines]
last_15_lines
"""
Explanation: Proposed Architecture
Why Cassandra and Neo4j? I'll be doing a lot of column queries and will be working with network data.
How does my system have this property?
How does my system fall short and how could it be improved?
Robustness and Fault Tolerance
My system hinges on two EC2 instances and an EMR cluster. I may look into using Elastic Beanstalk to deploy fresh EC2 instances in the event that one fails.
Low latency reads and updates
My system does not consider how to reduce read latency between my Flask app and my distributed data stores. However, I will serialize each CSV as an Apache Parquet file, which will make data processing in Spark faster. I honestly have no idea how I can improve web app latency.
Scalability
Again, I may need to look into Elastic Beanstalk to enable automated scaling.
Generalization
I hope to use Airflow to coordinate my Spark DAGs. This would make it easier to do future projects that entail prediction and machine learning. It would be nice to enable queries in the web app that trigger a Spark job that returns an answer. Elasticsearch may be worth investigating.
Extensibility
Saving my raw files onto S3 and using Airflow to construct Spark DAGs makes this system reasonably extensible.
Ad Hoc Queries
Elasticsearch combined with Spark would be a great way to perform efficient ad hoc queries.
Minimal Maintenance
Using two NoSQL databases is a liability, but I want to try them out for this project. I am not sure how to minimize maintenance within these data stores.
Debuggability
Airflow will help with debuggability. I will want to store error logs on S3 for each process.
Data Demo
End of explanation
"""
gkg_columns = ['GKGRECORDID', 'DATE', 'SourceCollectionIdentifier',
'SourceCommonName', 'DocumentIdentifier', 'Counts',
'V2Counts', 'Themes', 'V2Themes', 'Locations',
'V2Locations', 'Persons', 'V2Persons', 'Organizations',
'V2Organizations', 'V2Tone', 'Dates', 'GCAM',
'SharingImage', 'RelatedImages', 'SocialImageEmbeds',
'SocialVideoEmbeds', 'Quotations', 'AllNames', 'Amounts',
'TranslationInfo', 'Extras']
events_columns = ['GLOBALEVENTID', 'SQLDATE', 'MonthYear', 'Year', 'FractionDate',
'Actor1Code', 'Actor1Name', 'Actor1CountryCode',
'Actor1KnownGroupCode', 'Actor1EthnicCode', 'Actor1Religion1Code',
'Actor1Religion2Code', 'Actor1Type1Code', 'Actor1Type2Code',
'Actor1Type3Code', 'Actor2Code', 'Actor2Name', 'Actor2CountryCode',
'Actor2KnownGroupCode', 'Actor2EthnicCode', 'Actor2Religion1Code',
'Actor2Religion2Code', 'Actor2Type1Code', 'Actor2Type2Code',
'Actor2Type3Code', 'IsRootEvent', 'EventCode', 'EventBaseCode',
'EventRootCode', 'QuadClass', 'GoldsteinScale', 'NumMentions',
'NumSources', 'NumArticles', 'AvgTone', 'Actor1Geo_Type',
'Actor1Geo_FullName', 'Actor1Geo_CountryCode', 'Actor1Geo_ADM1Code',
'Actor1Geo_ADM2Code',
'Actor1Geo_Lat', 'Actor1Geo_Long', 'Actor1Geo_FeatureID',
'Actor2Geo_Type', 'Actor2Geo_FullName', 'Actor2Geo_CountryCode',
'Actor2Geo_ADM1Code',
'Actor2Geo_ADM2Code',
'Actor2Geo_Lat', 'Actor2Geo_Long',
'Actor2Geo_FeatureID', 'ActionGeo_Type', 'ActionGeo_FullName',
'ActionGeo_CountryCode', 'ActionGeo_ADM1Code',
'ActionGeo_ADM2Code',
'ActionGeo_Lat',
'ActionGeo_Long', 'ActionGeo_FeatureID', 'DATEADDED', 'SOURCEURL']
mentions_columns = ['GLOBALEVENTID', 'EventTimeDate', 'MentionTimeDate',
'MentionType', 'MentionSourceName', 'MentionIdentifier',
'SentenceID', 'Actor1CharOffset', 'Actor2CharOffset',
'ActionCharOffset', 'InRawText', 'Confidence',
'MentionDocLen', 'MentionDocTone',
'MentionDocTranslationInfo', 'Extras']
"""
Explanation: Features found in the three datasets
End of explanation
"""
gkg = pd.read_csv('{}/data/raw/20150220183000.gkg.csv'.format(PROJ_ROOT), sep='\t')
gkg.columns = gkg_columns
events = pd.read_csv('{}/data/raw/20150220184500.export.CSV'.format(PROJ_ROOT), sep='\t')
events.columns = events_columns
mentions = pd.read_csv('{}/data/raw/20150220184500.mentions.CSV'.format(PROJ_ROOT), sep='\t')
mentions.columns = mentions_columns
"""
Explanation: Load sample DataFrames
End of explanation
"""
events.T.iloc[:,:2]
"""
Explanation: Events
End of explanation
"""
gkg.T.iloc[:,:2]
"""
Explanation: Mentions (Supplements Events Table)
End of explanation
"""
mentions.T.iloc[:,:2]
"""
Explanation: Global Knowledge Graph
End of explanation
"""
gdelt_last_15 = requests.get('http://data.gdeltproject.org/gdeltv2/lastupdate.txt')
lines = gdelt_last_15.text.split('\n')
lines = [i.split() for i in lines]
lines
"""
Explanation: Last Update file provides URLs to most recent datasets
End of explanation
"""
|
celiasmith/syde556 | SYDE 556 Lecture 9 Action Selection.ipynb | gpl-2.0 | %pylab inline
import nengo
model = nengo.Network('Selection')
with model:
stim = nengo.Node(lambda t: [np.sin(t), np.cos(t)])
s = nengo.Ensemble(200, dimensions=2)
Q_A = nengo.Ensemble(50, dimensions=1)
Q_B = nengo.Ensemble(50, dimensions=1)
Q_C = nengo.Ensemble(50, dimensions=1)
Q_D = nengo.Ensemble(50, dimensions=1)
nengo.Connection(s, Q_A, transform=[[1,0]])
nengo.Connection(s, Q_B, transform=[[-1,0]])
nengo.Connection(s, Q_C, transform=[[0,1]])
nengo.Connection(s, Q_D, transform=[[0,-1]])
nengo.Connection(stim, s)
model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)
qa_p = nengo.Probe(Q_A)
qb_p = nengo.Probe(Q_B)
qc_p = nengo.Probe(Q_C)
qd_p = nengo.Probe(Q_D)
s_p = nengo.Probe(s)
sim = nengo.Simulator(model)
sim.run(3.)
t = sim.trange()
plot(t, sim.data[s_p], label="state")
legend()
figure(figsize=(8,8))
plot(t, sim.data[qa_p], label='Q_A')
plot(t, sim.data[qb_p], label='Q_B')
plot(t, sim.data[qc_p], label='Q_C')
plot(t, sim.data[qd_p], label='Q_D')
legend(loc='best');
"""
Explanation: SYDE 556/750: Simulating Neurobiological Systems
Readings: Stewart et al.
Biological Cognition -- Control
Lots of contemporary neural models are quite simple
Working memory, vision, audition, perceptual decision making, oscillations, etc.
What happens when our models get more complex?
I.e., what happens when the models:
Switch modalities?
Have a complex environment?
Have limited resources?
Can't do everything at once?
The brain needs a way to determine how to best use the finite resources it has.
Think about what happens when:
You have two targets to reach to at once (or 3 or 4)
You want to get to a goal that requires a series of actions
You don't know what the target is, but you know what modality it will be in
You don't know what the target will be, but you know where it will be
In all these cases, your brain needs to control the flow of information through it to solve the task.
Chapter 5 of How to build a brain is focussed on relevant neural models.
That chapter distinguishes two aspects of control:
determining what an appropriate control signal is
applying that signal to change the system
The first is a kind of decision making called 'action selection'
The second is more of an implementational question about how to gate information effectively (we've seen several possibilities for this already; e.g. inhibition, multiplication)
This lecture focusses on the first aspect of control
Action Selection and the Basal Ganglia
Actions can be many different things
physical movements
moving attention
changing contents of working memory
recalling items from long-term memory
Action Selection
How can we do this?
Suppose we're a critter that's trying to survive in a harsh environment
We have a bunch of different possible actions
go home
move randomly
go towards food
go away from predators
Which one do we pick?
Ideas?
Reinforcement Learning
Reinforcement learning is a biologically inspired computational approach to machine learning. It is based on the idea that creatures maximize reward, which seems to be the case (see, e.g., the Rescorla-Wagner model of Pavlov's experiments).
There have been a lot of interesting connections found between signals in these models and signals in the brain.
So, let's steal a simple idea from reinforcement learning:
Each action has a utility $Q$ that depends on the current state $s$
$Q(s, a)$ (the action value)
The best action will then be the action that has the largest $Q$
Note
Lots of different variations on this
$V(s)$ (the state value - expected reward given a state & policy)
Softmax: $p(a_i) = e^{Q(s, a_i)/T} / \sum_j e^{Q(s, a_j)/T}$ (instead of max)
In RL research, people come up with learning algorithms for adjusting $Q$ based on rewards
We won't worry about that for now (see the lecture on learning) and just use the basic idea
There's some sort of state $s$
For each action $a_i$, compute $Q(s, a_i)$ which is a function that we can define
Take the biggest $Q$ and perform that action
Implementation
One group of neurons to represent the state $s$
One group of neurons for each action's utility $Q(s, a_i)$
Or one large group of neurons for all the $Q$ values
What should the output be?
We could have $index$, which is the index $i$ of the action with the largest $Q$ value
Or we could have something like $[0,0,1,0]$, indicating which action is selected
Advantages and disadvantages?
The second option seems easier if we consider that we have to do action execution next...
A Simple Example
State $s$ is 2-dimensional (x,y plane)
Four actions (A, B, C, D)
Do action A if $s$ is near [1,0], B if near [-1,0], C if near [0,1], D if near [0,-1]
$Q(s, a_A)=s \cdot [1,0]$
$Q(s, a_B)=s \cdot [-1,0]$
$Q(s, a_C)=s \cdot [0,1]$
$Q(s, a_D)=s \cdot [0,-1]$
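Those four dot products are just one matrix multiply, which is why a single connection transform can implement all of them at once (a quick numpy check):

```python
import numpy as np

# each row is one action's preferred direction, so Q = T @ s
T = np.array([[ 1,  0],
              [-1,  0],
              [ 0,  1],
              [ 0, -1]], dtype=float)

s = np.array([0.5, 0.4])
Q = T @ s    # -> [ 0.5, -0.5,  0.4, -0.4]
```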
REMINDER: COURSE EVALUATION STUFF!
End of explanation
"""
import nengo
model = nengo.Network('Selection')
with model:
stim = nengo.Node(lambda t: [np.sin(t), np.cos(t)])
s = nengo.Ensemble(200, dimensions=2)
Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)
nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])
nengo.Connection(stim, s)
model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)
qs_p = nengo.Probe(Qs.output)
s_p = nengo.Probe(s)
sim = nengo.Simulator(model)
sim.run(3.)
t = sim.trange()
plot(t, sim.data[s_p], label="state")
legend()
figure(figsize=(8,8))
plot(t, sim.data[qs_p], label='Qs')
legend(loc='best');
"""
Explanation: That behavior makes a lot of sense
Each Q value is highest when the state $s$ lines up with that action's preferred direction (the corresponding row of the transform)
It's annoying to have all those separate $Q$ neurons
Perfect opportunity to use the EnsembleArray again (see last lecture)
Doesn't change the model at all
It just groups things together for you
End of explanation
"""
import nengo
def maximum(x):
result = [0,0,0,0]
result[np.argmax(x)] = 1
return result
model = nengo.Network('Selection')
with model:
stim = nengo.Node(lambda t: [np.sin(t), np.cos(t)])
s = nengo.Ensemble(200, dimensions=2)
Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)
Qall = nengo.Ensemble(400, dimensions=4)
Action = nengo.Ensemble(200, dimensions=4)
nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])
nengo.Connection(Qs.output, Qall)
nengo.Connection(Qall, Action, function=maximum)
nengo.Connection(stim, s)
model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)
qs_p = nengo.Probe(Qs.output)
action_p = nengo.Probe(Action)
s_p = nengo.Probe(s)
sim = nengo.Simulator(model)
sim.run(3.)
t = sim.trange()
plot(t, sim.data[s_p], label="state")
legend()
figure()
plot(t, sim.data[qs_p], label='Qs')
legend(loc='best')
figure()
plot(t, sim.data[action_p], label='Action')
legend(loc='best');
"""
Explanation: Yay, Network Arrays make shorter code!
Back to the model: How do we implement the $max$ function?
Well, it's just a function, so let's implement it
Need to combine all the $Q$ values into one 4-dimensional ensemble
Why?
End of explanation
"""
import nengo
model = nengo.Network('Selection')
with model:
stim = nengo.Node(lambda t: [.5,.4] if t <1. else [0,0] )
s = nengo.Ensemble(200, dimensions=2)
Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)
nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])
e = 0.1
i = -1
recur = [[e, i, i, i], [i, e, i, i], [i, i, e, i], [i, i, i, e]]
nengo.Connection(Qs.output, Qs.input, transform=recur)
nengo.Connection(stim, s)
model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)
qs_p = nengo.Probe(Qs.output)
s_p = nengo.Probe(s)
sim = nengo.Simulator(model)
sim.run(1.)
t = sim.trange()
plot(t, sim.data[s_p], label="state")
legend()
figure()
plot(t, sim.data[qs_p], label='Qs')
legend(loc='best');
"""
Explanation: Not so great (it looks pretty much the same as the linear case)
Very nonlinear function, so neurons are not able to approximate it well
Other options?
The Standard Neural Network Approach (modified)
If you give this problem to a standard neural networks person, what would they do?
They'll say this is exactly what neural networks are great at
Implement this with mutual inhibition and self-excitation
Neural competition
4 "neurons"
have excitation from each neuron back to themselves
have inhibition from each neuron to all the others
Now just put in the input and wait for a while and it will stabilize to one option
Can we do that?
Sure! Just replace each "neuron" with a group of neurons, and compute the desired function on those connections
note that this is a very general method of converting any non-realistic neural model into a biologically realistic spiking neuron model (though often you can do a one-for-one neuron conversion as well)
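For reference, the self-excitation/mutual-inhibition weight matrix used below can be built for any number of actions (a convenience sketch; the model cell just writes it out by hand):

```python
import numpy as np

def competition_matrix(n, e=0.1, i=-1.0):
    """e on the diagonal (self-excitation), i everywhere else (mutual inhibition)."""
    return (e - i) * np.eye(n) + i * np.ones((n, n))

competition_matrix(4)
# -> [[ 0.1 -1.  -1.  -1. ]
#     [-1.   0.1 -1.  -1. ]
#     [-1.  -1.   0.1 -1. ]
#     [-1.  -1.  -1.   0.1]]
```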
End of explanation
"""
import nengo
model = nengo.Network('Selection')
with model:
stim = nengo.Node(lambda t: [.5,.4] if t <1. else [0,0] )
s = nengo.Ensemble(200, dimensions=2)
Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)
Action = nengo.networks.EnsembleArray(50, n_ensembles=4)
nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])
nengo.Connection(Qs.output, Action.input)
e = 0.1
i = -1
recur = [[e, i, i, i], [i, e, i, i], [i, i, e, i], [i, i, i, e]]
# Let's force the feedback connection to only consider positive values
def positive(x):
if x[0]<0: return [0]
else: return x
pos = Action.add_output('positive', positive)
nengo.Connection(pos, Action.input, transform=recur)
nengo.Connection(stim, s)
model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)
qs_p = nengo.Probe(Qs.output)
action_p = nengo.Probe(Action.output)
s_p = nengo.Probe(s)
sim = nengo.Simulator(model)
sim.run(1.)
t = sim.trange()
plot(t, sim.data[s_p], label="state")
legend(loc='best')
figure()
plot(t, sim.data[qs_p], label='Qs')
legend(loc='best')
figure()
plot(t, sim.data[action_p], label='Action')
legend(loc='best');
"""
Explanation: Oops, that's not quite right
Why is it selecting more than one action?
End of explanation
"""
%pylab inline
import nengo
def stimulus(t):
if t<.3:
return [.5,.4]
elif .3<t<.5:
return [.4,.5]
else:
return [0,0]
model = nengo.Network('Selection')
with model:
stim = nengo.Node(stimulus)
s = nengo.Ensemble(200, dimensions=2)
Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)
Action = nengo.networks.EnsembleArray(50, n_ensembles=4)
nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])
nengo.Connection(Qs.output, Action.input)
e = .5
i = -1
recur = [[e, i, i, i], [i, e, i, i], [i, i, e, i], [i, i, i, e]]
# Let's force the feedback connection to only consider positive values
def positive(x):
if x[0]<0: return [0]
else: return x
pos = Action.add_output('positive', positive)
nengo.Connection(pos, Action.input, transform=recur)
nengo.Connection(stim, s)
model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)
qs_p = nengo.Probe(Qs.output)
action_p = nengo.Probe(Action.output)
s_p = nengo.Probe(s)
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/action_selection.py.cfg")
sim = nengo.Simulator(model)
sim.run(1.)
t = sim.trange()
plot(t, sim.data[s_p], label="state")
legend(loc='best')
figure()
plot(t, sim.data[qs_p], label='Qs')
legend(loc='best')
figure()
plot(t, sim.data[action_p], label='Action')
legend(loc='best');
"""
Explanation: Now we only influence other Actions when we have a positive value
Note: Is there a more neurally efficient way to do this?
Much better
Selects one action reliably
But still gives values smaller than 1.0 for the output a lot
Can we fix that?
What if we adjust e?
End of explanation
"""
%pylab inline
import nengo
def stimulus(t):
if t<.3:
return [.5,.4]
elif .3<t<.5:
return [.3,.5]
else:
return [0,0]
model = nengo.Network('Selection')
with model:
stim = nengo.Node(stimulus)
s = nengo.Ensemble(200, dimensions=2)
Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)
Action = nengo.networks.EnsembleArray(50, n_ensembles=4)
nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])
nengo.Connection(Qs.output, Action.input)
e = 0.2
i = -1
recur = [[e, i, i, i], [i, e, i, i], [i, i, e, i], [i, i, i, e]]
def positive(x):
if x[0]<0: return [0]
else: return x
pos = Action.add_output('positive', positive)
nengo.Connection(pos, Action.input, transform=recur)
def select(x):
if x[0]>=0: return [1]
else: return [0]
sel = Action.add_output('select', select)
aValues = nengo.networks.EnsembleArray(50, n_ensembles=4)
nengo.Connection(sel, aValues.input)
nengo.Connection(stim, s)
model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)
qs_p = nengo.Probe(Qs.output)
action_p = nengo.Probe(Action.output)
aValues_p = nengo.Probe(aValues.output)
s_p = nengo.Probe(s)
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/action_selection2.py.cfg")
sim = nengo.Simulator(model)
sim.run(1.)
t = sim.trange()
plot(t, sim.data[s_p], label="state")
legend(loc='best')
figure()
plot(t, sim.data[qs_p], label='Qs')
legend(loc='best')
figure()
plot(t, sim.data[action_p], label='Action')
legend(loc='best')
figure()
plot(t, sim.data[aValues_p], label='Action Values')
legend(loc='best');
"""
Explanation: That seems to introduce a new problem
The self-excitation is so strong that it can't respond to changes in the input
Indeed, any method like this is going to have some form of memory effects
Notice that what has been implemented is an integrator (sort of)
Could we do anything to help without increasing e too much?
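A crude rate-based caricature of this feedback loop (Euler integration with a rectified output; an assumption-laden toy, not the spiking model) shows both the selection and why pushing e toward 1 turns the loop into an integrator:

```python
import numpy as np

def run_wta(Q, e=0.5, i=-1.0, dt=0.05, steps=600):
    """Simulate dx/dt = Q + R @ max(x, 0) - x with self-excitation e, inhibition i."""
    n = len(Q)
    R = (e - i) * np.eye(n) + i * np.ones((n, n))
    x = np.zeros(n)
    for _ in range(steps):
        x = x + dt * (Q + R @ np.maximum(x, 0) - x)
    return x

x = run_wta(np.array([0.5, 0.4, 0.1, 0.0]))
# the winner settles near Q_max / (1 - e); the others are pushed below zero.
# with e -> 1 the winner's decay is cancelled entirely: an integrator.
```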
End of explanation
"""
%pylab inline
import nengo
def stimulus(t):
if t<.3:
return [.5,.4]
elif .3<t<.5:
return [.3,.5]
else:
return [0,0]
model = nengo.Network('Selection')
with model:
stim = nengo.Node(stimulus)
s = nengo.Ensemble(200, dimensions=2)
Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)
Action = nengo.networks.EnsembleArray(50, n_ensembles=4)
nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])
nengo.Connection(Qs.output, Action.input)
e = 0.1
i = -1
recur = [[e, i, i, i], [i, e, i, i], [i, i, e, i], [i, i, i, e]]
def positive(x):
if x[0]<0: return [0]
else: return x
pos = Action.add_output('positive', positive)
nengo.Connection(pos, Action.input, transform=recur)
def select(x):
if x[0]>=0: return [1]
else: return [0]
sel = Action.add_output('select', select)
aValues = nengo.networks.EnsembleArray(50, n_ensembles=4)
nengo.Connection(sel, aValues.input)
nengo.Connection(stim, s)
model.config[nengo.Probe].synapse = nengo.Lowpass(0.01)
qs_p = nengo.Probe(Qs.output)
action_p = nengo.Probe(Action.output)
aValues_p = nengo.Probe(aValues.output)
s_p = nengo.Probe(s)
#sim = nengo.Simulator(model)
#sim.run(1.)
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/bg_simple1.py.cfg")
"""
Explanation: Better behaviour
But there's still situations where there's too much memory (see the visualizer)
We can reduce this by reducing e
End of explanation
"""
%pylab inline
mm=1
mp=1
me=1
mg=1
#connection strengths from original model
ws=1
wt=1
wm=1
wg=1
wp=0.9
we=0.3
#neuron lower thresholds for various populations
e=0.2
ep=-0.25
ee=-0.2
eg=-0.2
le=0.2
lg=0.2
D = 10
tau_ampa=0.002
tau_gaba=0.008
N = 50
radius = 1.5
import nengo
from nengo.dists import Uniform
model = nengo.Network('Basal Ganglia', seed=4)
with model:
stim = nengo.Node([0]*D)
StrD1 = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(e,1),
encoders=Uniform(1,1), radius=radius)
StrD2 = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(e,1),
encoders=Uniform(1,1), radius=radius)
STN = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(ep,1),
encoders=Uniform(1,1), radius=radius)
GPi = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(eg,1),
encoders=Uniform(1,1), radius=radius)
GPe = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(ee,1),
encoders=Uniform(1,1), radius=radius)
nengo.Connection(stim, StrD1.input, transform=ws*(1+lg), synapse=tau_ampa)
nengo.Connection(stim, StrD2.input, transform=ws*(1-le), synapse=tau_ampa)
nengo.Connection(stim, STN.input, transform=wt, synapse=tau_ampa)
def func_str(x): #relu-like function
if x[0]<e: return 0
return mm*(x[0]-e)
strd1_out = StrD1.add_output('func_str', func_str)
strd2_out = StrD2.add_output('func_str', func_str)
nengo.Connection(strd1_out, GPi.input, transform=-wm, synapse=tau_gaba)
nengo.Connection(strd2_out, GPe.input, transform=-wm, synapse=tau_gaba)
def func_stn(x):
if x[0]<ep: return 0
return mp*(x[0]-ep)
stn_out = STN.add_output('func_stn', func_stn)
tr=[[wp]*D for i in range(D)]
nengo.Connection(stn_out, GPi.input, transform=tr, synapse=tau_ampa)
nengo.Connection(stn_out, GPe.input, transform=tr, synapse=tau_ampa)
def func_gpe(x):
if x[0]<ee: return 0
return me*(x[0]-ee)
gpe_out = GPe.add_output('func_gpe', func_gpe)
nengo.Connection(gpe_out, GPi.input, transform=-we, synapse=tau_gaba)
nengo.Connection(gpe_out, STN.input, transform=-wg, synapse=tau_gaba)
Action = nengo.networks.EnsembleArray(N, n_ensembles=D, intercepts=Uniform(0.2,1),
encoders=Uniform(1,1))
bias = nengo.Node([1]*D)
nengo.Connection(bias, Action.input)
nengo.Connection(Action.output, Action.input, transform=(np.eye(D)-1), synapse=tau_gaba)
def func_gpi(x):
if x[0]<eg: return 0
return mg*(x[0]-eg)
gpi_out = GPi.add_output('func_gpi', func_gpi)
nengo.Connection(gpi_out, Action.input, transform=-3, synapse=tau_gaba)
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/bg_good2.py.cfg")
"""
Explanation: Much less memory, but it's still there
And slower to respond to changes
Note that this speed is dependent on $e$, $i$, and the time constant of the neurotransmitter used
Can be hard to find good values
And this gets harder to balance as the number of actions increases
Also hard to balance for a wide range of $Q$ values
(Does it work for $Q$=[0.9, 0.9, 0.95, 0.9] and $Q$=[0.2, 0.2, 0.25, 0.2]?)
But this is still a pretty standard approach
Nice and easy to get working for special cases
Don't really need the NEF (if you're willing to assume non-realistic non-spiking neurons)
(Although really, if you're not looking for biological realism, why not just compute the max function?)
Example: O'Reilly, R.C. (2006). Biologically Based Computational Models of High-Level Cognition. Science, 314, 91-94.
Leabra
They tend to use a "kWTA" (k-Winners Take All) approach in their models
Set up inhibition so that only $k$ neurons will be active
But since that's complex to do, just do the math instead of doing the inhibition
We think that doing it their way means that the dynamics of the model will be wrong (i.e. all the effects we saw above are being ignored).
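"Doing the math" amounts to something like the following (a stand-in sketch; Leabra's actual kWTA computes an inhibition level between the k-th and (k+1)-th strongest units rather than zeroing them outright):

```python
import numpy as np

def kwta(x, k):
    """Keep the k largest activations, zero out the rest."""
    out = np.zeros_like(x)
    top = np.argsort(x)[-k:]   # indices of the k largest values
    out[top] = x[top]
    return out

kwta(np.array([0.2, 0.9, 0.5, 0.1]), k=2)   # -> [0. , 0.9, 0.5, 0. ]
```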
Any other options?
Biology
Let's look at the biology
Where is this action selection in the brain?
General consensus: the basal ganglia
<img src="files/lecture_selection/basal_ganglia.jpg" width="500">
Pretty much all of cortex connects in to this area (via the striatum)
Output goes to the thalamus, the central routing system of the brain
Disorders of this area of the brain cause problems controlling actions:
Parkinson's disease
Neurons in the substantia nigra die off
Extremely difficult to trigger actions to start
Usually physical actions; as disease progresses and more of the SNc is gone, can get cognitive effects too
Huntington's disease
Neurons in the striatum die off
Actions are triggered inappropriately (disinhibition)
Small uncontrollable movements
Trouble sequencing cognitive actions too
Also heavily implicated in reinforcement learning
The dopamine levels seem to map onto reward prediction error
High levels when get an unexpected reward, low levels when didn't get a reward that was expected
<img src="files/lecture_selection/dopamine.png" width="500">
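The standard formalization of that mapping is the temporal-difference error (textbook reinforcement learning, not part of the basal ganglia model built below):

```python
def td_error(r, v_next, v_now, gamma=0.9):
    """delta = r + gamma * V(s') - V(s): the reward prediction error."""
    return r + gamma * v_next - v_now

td_error(r=1.0, v_next=0.0, v_now=0.0)   #  1.0  unexpected reward: dopamine burst
td_error(r=1.0, v_next=0.0, v_now=1.0)   #  0.0  fully predicted reward: no change
td_error(r=0.0, v_next=0.0, v_now=1.0)   # -1.0  expected reward omitted: dopamine dip
```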
Connectivity diagram:
<img src="files/lecture_selection/basal_ganglia2.gif" width="500">
Old terminology:
"direct" pathway: cortex -> striatum -> GPi -> thalamus
"indirect" pathway: cortex -> striatum -> GPe -> STN -> GPi -> thalamus
Then they found:
"hyperdirect" pathway: cortex -> STN -> GPi -> thalamus
and lots of other connections
Activity in the GPi (output)
generally always active
neurons stop firing when corresponding action is chosen
representing [1, 1, 0, 1] instead of [0, 0, 1, 0]
Leabra approach
Each action has two groups of neurons in the striatum representing $Q(s, a_i)$ and $1-Q(s, a_i)$ ("go" and "no go")
Mutual inhibition causes only one of the "go" and one of the "no go" groups to fire
GPi neuron get connections from "go" neurons, with value multiplied by -1 (direct pathway)
GPi also gets connections from "no go" neurons, but multiplied by -1 (striatum->GPe), then -1 again (GPe->STN), then +1 (STN->GPi)
Result in GPi is close to [1, 1, 0, 1] form
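The sign bookkeeping can be checked with plain arithmetic (a toy reading of that description, not the actual Leabra equations):

```python
import numpy as np

q = np.array([0.1, 0.2, 0.9, 0.3])     # "go" value for each action
go_to_gpi = -q                          # striatum (go) -> GPi: inhibitory
nogo_to_gpi = (1 - q) * (-1) * (-1)     # (no go) -> GPe -> STN -> GPi: net excitatory
gpi = 1 + go_to_gpi + nogo_to_gpi       # tonic baseline plus both pathways
# gpi == 2*(1 - q): lowest exactly for the selected (largest-q) action
```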
Seems to match onto the biology okay
But why the weird double-inverting thing? Why not skip the GPe and STN entirely?
And why split into "go" and "no-go"? Just the direct pathway on its own would be fine
Maybe it's useful for some aspect of the learning...
What about all those other connections?
An alternate model of the Basal Ganglia
Maybe the weird structure of the basal ganglia is an attempt to do action selection without doing mutual inhibition
Needs to select from a large number of actions
Needs to do so quickly, and without the memory effects
Gurney, Prescott, and Redgrave, 2001
Let's start with a very simple version
<img src="files/lecture_selection/gpr1.png">
Sort of like an "unrolled" version of one step of mutual inhibition
Note that both A and B have surround inhibition and local excitation that is 'flipped' (in slightly different ways) on the way to the output
Unfortunately this doesn't easily map onto the basal ganglia because of the diffuse inhibition needed from cortex to what might be the striatum (the first layer). Instead, we can get similar functionality using something like the following
Notice the importance of the hyperdirect pathway (from cortex to STN).
<img src="files/lecture_selection/gpr2.png">
But that's only going to work for very specific $Q$ values. (Here, the winning option is the sum of the losing ones)
Need to dynamically adjust the amount of +ve and -ve weighting
Here the GPe is adjusting the weighting by monitoring STN & D2 activity.
Notice that the GPe gets the same inputs as GPi, but projects back to STN, to 'regulate' the action selection.
<img src="files/lecture_selection/gpr3.png">
This turns out to work surprisingly well
But extremely hard to analyze its behaviour
They showed that it qualitatively matches pretty well
So what happens if we convert this into realistic spiking neurons?
Use the same approach where one "neuron" in their model is a pool of neurons in the NEF
The "neuron model" they use was rectified linear
That becomes the function the decoders are computing
Neurotransmitter time constants are all known
$Q$ values are between 0 and 1
Firing rates max out around 50-100Hz
Encoders are all positive and thresholds are chosen for efficiency
End of explanation
"""
%pylab inline
import nengo
from nengo.dists import Uniform
model = nengo.Network(label='Selection')
D=4
with model:
stim = nengo.Node([0,0])
s = nengo.Ensemble(200, dimensions=2)
Qs = nengo.networks.EnsembleArray(50, n_ensembles=D)
nengo.Connection(stim, s)
nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])
Action = nengo.networks.EnsembleArray(50, n_ensembles=D, intercepts=Uniform(0.2,1),
encoders=Uniform(1,1))
bias = nengo.Node([1]*D)
nengo.Connection(bias, Action.input)
nengo.Connection(Action.output, Action.input, transform=(np.eye(D)-1), synapse=0.008)
basal_ganglia = nengo.networks.BasalGanglia(dimensions=D)
nengo.Connection(Qs.output, basal_ganglia.input, synapse=None)
nengo.Connection(basal_ganglia.output, Action.input)
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/bg_good1.py.cfg")
"""
Explanation: Notice that we are also flipping the output from [1, 1, 0, 1] to [0, 0, 1, 0]
Mostly for our convenience, but we can also add some mutual inhibition there
Works pretty well
Scales up to many actions
Selects quickly
Gets behavioural match to empirical data, including timing predictions (!)
Also shows interesting oscillations not seen in the original GPR model
But these are seen in the real basal ganglia
<img src="files/lecture_selection/gpr-latency.png">
Dynamic Behaviour of a Spiking Model of Action Selection in the Basal Ganglia
Let's make sure this works with our original system
To make it easy to use the basal ganglia, there is a special network constructor
Since this is a major component of the SPA, it's also in that module
End of explanation
"""
%pylab inline
import nengo
from nengo.dists import Uniform
model = nengo.Network(label='Selection')
D=4
with model:
stim = nengo.Node([0,0])
s = nengo.Ensemble(200, dimensions=2)
Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)
nengo.Connection(stim, s)
nengo.Connection(s, Qs.input, transform=[[1,0],[-1,0],[0,1],[0,-1]])
Action = nengo.networks.EnsembleArray(50, n_ensembles=D, intercepts=Uniform(0.2,1),
encoders=Uniform(1,1))
bias = nengo.Node([1]*D)
nengo.Connection(bias, Action.input)
nengo.Connection(Action.output, Action.input, transform=(np.eye(D)-1), synapse=0.008)
basal_ganglia = nengo.networks.BasalGanglia(dimensions=D)
nengo.Connection(Qs.output, basal_ganglia.input, synapse=None)
nengo.Connection(basal_ganglia.output, Action.input)
motor = nengo.Ensemble(100, dimensions=2)
nengo.Connection(Action.output[0], motor, transform=[[1],[0]])
nengo.Connection(Action.output[1], motor, transform=[[-1],[0]])
nengo.Connection(Action.output[2], motor, transform=[[0],[1]])
nengo.Connection(Action.output[3], motor, transform=[[0],[-1]])
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/bg_good3.py.cfg")
"""
Explanation: This system seems to work well
Still not perfect
Matches biology nicely, because of how we implemented it
Some more details on the basal ganglia implementation
all those parameters come from here
<img src="files/lecture_selection/gpr-diagram.png" width="500">
In the original model, each action has a single "neuron" in each area that responds like this:
$$
y = \begin{cases}
0 &\mbox{if } x < \epsilon \\
m(x- \epsilon) &\mbox{otherwise}
\end{cases}
$$
These need to get turned into groups of neurons
What is the best way to do this?
<img src="files/lecture_selection/gpr-tuning.png">
encoders are all +1
intercepts are chosen to be $> \epsilon$
Action Execution
Now that we can select an action, how do we perform it?
Depends on what the action is
Let's start with simple actions
Move in a given direction
Remember a specific vector
Send a particular value as input into a particular cognitive system
Example:
State $s$ is 2-dimensional
Four actions (A, B, C, D)
Do action A if $s$ is near [1,0], B if near [-1,0], C if near [0,1], D if near [0,-1]
$Q(s, a_A)=s \cdot [1,0]$
$Q(s, a_B)=s \cdot [-1,0]$
$Q(s, a_C)=s \cdot [0,1]$
$Q(s, a_D)=s \cdot [0,-1]$
To do Action A, set $m=[1,0]$
To do Action B, set $m=[-1,0]$
To do Action C, set $m=[0,1]$
To do Action D, set $m=[0,-1]$
End of explanation
"""
%pylab inline
import nengo
from nengo.dists import Uniform
model = nengo.Network('Creature')
with model:
stim = nengo.Node([0,0], label='stim')
command = nengo.Ensemble(100, dimensions=2, label='command')
motor = nengo.Ensemble(100, dimensions=2, label='motor')
position = nengo.Ensemble(1000, dimensions=2, label='position')
scared_direction = nengo.Ensemble(100, dimensions=2, label='scared direction')
def negative(x):
return -x[0], -x[1]
nengo.Connection(position, scared_direction, function=negative)
nengo.Connection(position, position, synapse=.05)
def rescale(x):
return x[0]*0.1, x[1]*0.1
nengo.Connection(motor, position, function=rescale)
nengo.Connection(stim, command)
D=4
Q_input = nengo.Node([0,0,0,0], label='select')
Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)
nengo.Connection(Q_input, Qs.input)
Action = nengo.networks.EnsembleArray(50, n_ensembles=D, intercepts=Uniform(0.2,1),
encoders=Uniform(1,1))
bias = nengo.Node([1]*D)
nengo.Connection(bias, Action.input)
nengo.Connection(Action.output, Action.input, transform=(np.eye(D)-1), synapse=0.008)
basal_ganglia = nengo.networks.BasalGanglia(dimensions=D)
nengo.Connection(Qs.output, basal_ganglia.input, synapse=None)
nengo.Connection(basal_ganglia.output, Action.input)
do_command = nengo.Ensemble(300, dimensions=3, label='do command')
nengo.Connection(command, do_command[0:2])
nengo.Connection(Action.output[0], do_command[2])
def apply_command(x):
return x[2]*x[0], x[2]*x[1]
nengo.Connection(do_command, motor, function=apply_command)
do_scared = nengo.Ensemble(300, dimensions=3, label='do scared')
nengo.Connection(scared_direction, do_scared[0:2])
nengo.Connection(Action.output[1], do_scared[2])
nengo.Connection(do_scared, motor, function=apply_command)
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/bg_creature.py.cfg")
#first dimension activates do_command, i.e. go in the indicated direction
#second dimension activates do_scared, i.e. return 'home' (0,0)
#creature tracks the position it goes to (by integrating)
#creature inverts direction to position via scared direction/do_scared and puts that into motor
"""
Explanation: What about more complex actions?
Consider a simple creature that goes where it's told, or runs away if it's scared
Action 1: set $m$ to the direction it's told to do
Action 2: set $m$ to the direction we started from
Need to pass information from one group of neurons to another
But only do this when the action is chosen
How?
Well, let's use a function
$m = a \times d$
where $a$ is the action selection (0 for not selected, 1 for selected)
Let's try that with the creature
End of explanation
"""
%pylab inline
import nengo
from nengo.dists import Uniform
model = nengo.Network('Creature')
with model:
stim = nengo.Node([0,0], label='stim')
command = nengo.Ensemble(100, dimensions=2, label='command')
motor = nengo.Ensemble(100, dimensions=2, label='motor')
position = nengo.Ensemble(1000, dimensions=2, label='position')
scared_direction = nengo.Ensemble(100, dimensions=2, label='scared direction')
def negative(x):
return -x[0], -x[1]
nengo.Connection(position, scared_direction, function=negative)
nengo.Connection(position, position, synapse=.05)
def rescale(x):
return x[0]*0.1, x[1]*0.1
nengo.Connection(motor, position, function=rescale)
nengo.Connection(stim, command)
D=4
Q_input = nengo.Node([0,0,0,0], label='select')
Qs = nengo.networks.EnsembleArray(50, n_ensembles=4)
nengo.Connection(Q_input, Qs.input)
Action = nengo.networks.EnsembleArray(50, n_ensembles=D, intercepts=Uniform(0.2,1),
encoders=Uniform(1,1))
bias = nengo.Node([1]*D)
nengo.Connection(bias, Action.input)
nengo.Connection(Action.output, Action.input, transform=(np.eye(D)-1), synapse=0.008)
basal_ganglia = nengo.networks.BasalGanglia(dimensions=D)
nengo.Connection(Qs.output, basal_ganglia.input, synapse=None)
nengo.Connection(basal_ganglia.output, Action.input)
do_command = nengo.Ensemble(300, dimensions=2, label='do command')
nengo.Connection(command, do_command)
nengo.Connection(Action.output[1], do_command.neurons, transform=-np.ones([300,1]))
nengo.Connection(do_command, motor)
do_scared = nengo.Ensemble(300, dimensions=2, label='do scared')
nengo.Connection(scared_direction, do_scared)
nengo.Connection(Action.output[0], do_scared.neurons, transform=-np.ones([300,1]))
nengo.Connection(do_scared, motor)
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/bg_creature2.py.cfg")
"""
Explanation: There's also another way to do this
A special case for forcing a function to go to zero when a particular group of neurons is active
End of explanation
"""
%pylab inline
import nengo
from nengo import spa
D = 16
def start(t):
if t < 0.05:
return 'A'
else:
return '0'
model = spa.SPA(label='Sequence_Module', seed=5)
with model:
model.cortex = spa.Buffer(dimensions=D, label='cortex')
model.input = spa.Input(cortex=start, label='input')
actions = spa.Actions(
'dot(cortex, A) --> cortex = B',
'dot(cortex, B) --> cortex = C',
'dot(cortex, C) --> cortex = D',
'dot(cortex, D) --> cortex = E',
'dot(cortex, E) --> cortex = A'
)
model.bg = spa.BasalGanglia(actions=actions)
model.thal = spa.Thalamus(model.bg)
cortex = nengo.Probe(model.cortex.state.output, synapse=0.01)
actions = nengo.Probe(model.thal.actions.output, synapse=0.01)
utility = nengo.Probe(model.bg.input, synapse=0.01)
sim = nengo.Simulator(model)
sim.run(0.5)
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/bg_alphabet.py.cfg")
fig = figure(figsize=(12,8))
p1 = fig.add_subplot(3,1,1)
p1.plot(sim.trange(), model.similarity(sim.data, cortex))
p1.legend(model.get_output_vocab('cortex').keys, fontsize='x-small')
p1.set_ylabel('State')
p2 = fig.add_subplot(3,1,2)
p2.plot(sim.trange(), sim.data[actions])
p2_legend_txt = [a.effect for a in model.bg.actions.actions]
p2.legend(p2_legend_txt, fontsize='x-small')
p2.set_ylabel('Action')
p3 = fig.add_subplot(3,1,3)
p3.plot(sim.trange(), sim.data[utility])
p3_legend_txt = [a.condition for a in model.bg.actions.actions]
p3.legend(p3_legend_txt, fontsize='x-small')
p3.set_ylabel('Utility')
fig.subplots_adjust(hspace=0.2)
"""
Explanation: This is a situation where it makes sense to ignore the NEF!
All we want to do is shut down the neural activity
So just do a very inhibitory connection
The Cortex-Basal Ganglia-Thalamus loop
We now have everything we need for a model of one of the primary structures in the mammalian brain
Basal ganglia: action selection
Thalamus: action execution
Cortex: everything else
<img src="lecture_selection/ctx-bg-thal.png" width="800">
We build systems in cortex that give some input-output functionality
We set up the basal ganglia and thalamus to make use of that functionality appropriately
Example
Cortex stores some state (integrator)
Add some state transition rules
If in state A, go to state B
If in state B, go to state C
If in state C, go to state D
...
For now, let's just have states A, B, C, D, etc be some randomly chosen vectors
$Q(s, a_i) = s \cdot a_i$
The effect of each action is to input the corresponding vector into the integrator
This is the basic loop of the SPA, so we can use that module
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo('sUvHCs5y0o8', width=640, height=390, loop=1, autoplay=0)
"""
Explanation: Behavioural Evidence
Is there any evidence that this is the way it works in brains?
Consistent with anatomy/connectivity
What about behavioural evidence?
A few sources of support
Timing data
How long does it take to do an action?
There are lots of existing computational (non-neural) cognitive models that have something like this action selection loop
Usually all-symbolic
A set of IF-THEN rules
e.g. ACT-R
Used to model mental arithmetic, driving a car, using a GUI, air-traffic control, staffing a battleship, etc etc
Best fit across all these situations is to set the loop time to 50ms
How long does this model take?
Notice that all the timing is based on neural properties, not the algorithm
Dominated by the longer neurotransmitter time constants in the basal ganglia
<img src="files/lecture_selection/timing-simple.png">
<center>Simple actions</center>
<img src="files/lecture_selection/timing-complex.png">
<center>Complex actions (routing)</center>
This is in the right ballpark
But what about this distinction between the two types of actions?
Not a distinction made in the literature
But once we start looking for it, there is evidence
Resolves an outstanding weirdness where some actions seem to take twice as long as others
There are starting to be lots of citations of ~40 ms for simple tasks
Task artifacts and strategic adaptation in the change signal task
This is a nice example of the usefulness of making neural models!
This distinction wasn't obvious from computational implementations
More complex tasks
Lots of complex tasks can be modelled this way
Some basic cognitive components (cortex)
action selection system (basal ganglia and thalamus)
The tricky part is figuring out the actions
Example: the Tower of Hanoi task
3 pegs
N disks of different sizes on the pegs
move from one configuration to another
can only move one disk at a time
no larger disk can be on a smaller disk
<img src="files/lecture_selection/hanoi.png">
can we build rules to do this?
End of explanation
"""
|
mortonjt/yummy-octo-duck | ipynb/UniFrac benchmarking.ipynb | bsd-3-clause | import numpy.testing as npt
ids, otu_ids, otu_data, t = get_random_samples(10, tree, True)
fu_mat = make_and_run_pw_distances(unifrac, otu_data, otu_ids=otu_ids, tree=t)
u_mat = pw_distances(unweighted_unifrac, otu_data, otu_ids=otu_ids, tree=t)
fwu_mat = make_and_run_pw_distances(w_unifrac, otu_data, otu_ids=otu_ids, tree=t)
wu_mat = pw_distances(weighted_unifrac, otu_data, otu_ids=otu_ids, tree=t)
fwun_mat = make_and_run_pw_distances(w_unifrac, otu_data, otu_ids=otu_ids, tree=t, normalized=True)
wun_mat = pw_distances(weighted_unifrac, otu_data, otu_ids=otu_ids, tree=t, normalized=True)
npt.assert_almost_equal(fu_mat.data, u_mat.data)
npt.assert_almost_equal(fwu_mat.data, wu_mat.data)
npt.assert_almost_equal(fwun_mat.data, wun_mat.data)
"""
Explanation: Verify that the fast implementations produce the same results as the reference implementations.
End of explanation
"""
%timeit make_and_run_pw_distances(unifrac, otu_data, otu_ids=otu_ids, tree=t)
%timeit pw_distances(unweighted_unifrac, otu_data, otu_ids=otu_ids, tree=t)
%timeit make_and_run_pw_distances(w_unifrac, otu_data, otu_ids=otu_ids, tree=t)
%timeit pw_distances(weighted_unifrac, otu_data, otu_ids=otu_ids, tree=t)
%timeit make_and_run_pw_distances(w_unifrac, otu_data, otu_ids=otu_ids, tree=t, normalized=True)
%timeit pw_distances(weighted_unifrac, otu_data, otu_ids=otu_ids, tree=t, normalized=True)
"""
Explanation: General timing
End of explanation
"""
method_sets = [[unweighted_unifrac, unweighted_unifrac_fast],
[weighted_unifrac, weighted_unifrac_fast]]
ids, otu_ids, otu_data, t = get_random_samples(5, tree, True)
for i in range(len(otu_data)):
for j in range(len(otu_data)):
for method_set in method_sets:
method_results = []
for method in method_set:
method_results.append(method(otu_data[i], otu_data[j], otu_ids, t))
npt.assert_almost_equal(*method_results)
"""
Explanation: API testing: make the same method calls and verify the results agree. We intentionally compute the full matrix to cover the (very unexpected) event that d(u, v) != d(v, u)
End of explanation
"""
sample_counts = [2, 4, 8, 16, 32]
uw_times = []
uwf_times = []
w_times = []
wn_times = []
wf_times = []
wnf_times = []
uw_times_p = []
uwf_times_p = []
w_times_p = []
wn_times_p = []
wf_times_p = []
wnf_times_p = []
for n_samples in sample_counts:
ids, otu_ids, otu_data, t = get_random_samples(n_samples, tree, True)
# sheared trees
for times, method in [[uw_times_p, unweighted_unifrac], [w_times_p, weighted_unifrac]]:
result = %timeit -o pw_distances(method, otu_data, otu_ids=otu_ids, tree=t)
times.append(result.best)
result = %timeit -o pw_distances(weighted_unifrac, otu_data, otu_ids=otu_ids, tree=t, normalized=True)
wn_times_p.append(result.best)
for times, method in [[uwf_times_p, unifrac], [wf_times_p, w_unifrac]]:
result = %timeit -o make_and_run_pw_distances(method, otu_data, otu_ids=otu_ids, tree=t)
times.append(result.best)
result = %timeit -o make_and_run_pw_distances(w_unifrac, otu_data, otu_ids=otu_ids, tree=t, normalized=True)
wnf_times_p.append(result.best)
# full trees
for times, method in [[uw_times, unweighted_unifrac], [w_times, weighted_unifrac]]:
result = %timeit -o pw_distances(method, otu_data, otu_ids=otu_ids, tree=tree)
times.append(result.best)
result = %timeit -o pw_distances(weighted_unifrac, otu_data, otu_ids=otu_ids, tree=tree, normalized=True)
wn_times.append(result.best)
for times, method in [[uwf_times, unifrac], [wf_times, w_unifrac]]:
result = %timeit -o make_and_run_pw_distances(method, otu_data, otu_ids=otu_ids, tree=tree)
times.append(result.best)
result = %timeit -o make_and_run_pw_distances(w_unifrac, otu_data, otu_ids=otu_ids, tree=tree, normalized=True)
wnf_times.append(result.best)
fig = figure(figsize=(6,6))
plot(sample_counts, uw_times, '--', color='blue')
plot(sample_counts, w_times, '--', color='cyan')
plot(sample_counts, uwf_times, '--', color='red')
plot(sample_counts, wf_times, '--', color='orange')
plot(sample_counts, wn_times, '--', color='green')
plot(sample_counts, wnf_times, '--', color='black')
plot(sample_counts, uw_times_p, color='blue')
plot(sample_counts, w_times_p, color='cyan')
plot(sample_counts, uwf_times_p, color='red')
plot(sample_counts, wf_times_p, color='orange')
plot(sample_counts, wn_times_p, color='green')
plot(sample_counts, wnf_times_p, color='black')
legend_acronyms = [
('u', 'unweighted unifrac'),
('w', 'weighted unifrac'),
('fu', 'unweighted fast unifrac'),
('fw', 'weighted fast unifrac'),
('wn', 'weighted normalized unifrac'),
('fwn', 'weighted normalized fast unifrac'),
('u-p', 'unweighted unifrac pruned tree'),
('w-p', 'weighted unifrac pruned tree'),
('fu-p', 'unweighted fast unifrac pruned tree'),
('fw-p', 'weighted fast unifrac pruned tree'),
('wn-p', 'weighted normalized unifrac pruned tree'),
('fwn-p', 'weighted normalized fast unifrac pruned tree')
]
legend([i[0] for i in legend_acronyms], loc=2)
title("pw_distances scaling, with full trees")
xlabel('number of samples', fontsize=15)
ylabel('time (seconds)', fontsize=15)
xscale('log', basex=2)
yscale('log')
xlim(min(sample_counts), max(sample_counts))
ylim(min(uwf_times_p), max(w_times))
tick_params(axis='both', which='major', labelsize=12)
tick_params(axis='both', which='minor', labelsize=12)
savefig('edu vs fast.png')
for ac, name in legend_acronyms:
print("%s\t: %s" % (ac, name))
"""
Explanation: pw_distances scaling tests.
End of explanation
"""
sample_counts_ext = [64, 128, 256, 512, 1024]
for n_samples in sample_counts_ext:
print("sample count: %d" % n_samples)
ids, otu_ids, otu_data, t = get_random_samples(n_samples, tree, True)
for times, method in [[uwf_times_p, unifrac], [wf_times_p, w_unifrac]]:
result = %timeit -o make_and_run_pw_distances(method, otu_data, otu_ids=otu_ids, tree=t)
times.append(result.best)
result = %timeit -o make_and_run_pw_distances(w_unifrac, otu_data, otu_ids=otu_ids, tree=t, normalized=True)
wnf_times_p.append(result.best)
for times, method in [[uwf_times, unifrac], [wf_times, w_unifrac]]:
result = %timeit -o make_and_run_pw_distances(method, otu_data, otu_ids=otu_ids, tree=tree)
times.append(result.best)
result = %timeit -o make_and_run_pw_distances(w_unifrac, otu_data, otu_ids=otu_ids, tree=tree, normalized=True)
wnf_times.append(result.best)
# at 4GB mem for 1024 set, counts array in this case is ~(1024 x 180000) or approx 1.4GB
# so not _that_ bad given the other resident data structures and notebook state.
sample_counts_ext = [64, 128, 256, 512, 1024]
sample_counts_full = sample_counts[:]
sample_counts_full.extend(sample_counts_ext)
fig = figure(figsize=(6,6))
plot(sample_counts_full, uwf_times, '--', color='red')
plot(sample_counts_full, wf_times, '--', color='orange')
plot(sample_counts_full, wnf_times, '--', color='black')
plot(sample_counts_full, uwf_times_p, color='red')
plot(sample_counts_full, wf_times_p, color='orange')
plot(sample_counts_full, wnf_times_p, color='black')
legend_acronyms = [
('fu', 'unweighted fast unifrac'),
('fw', 'weighted fast unifrac'),
('fwn', 'weighted normalized fast unifrac'),
('fu-p', 'unweighted fast unifrac pruned tree'),
('fw-p', 'weighted fast unifrac pruned tree'),
('fwn-p', 'weighted normalized fast unifrac pruned tree')
]
legend([i[0] for i in legend_acronyms], loc=2)
title("pw_distances scaling, extended fast unifrac")
xlabel('number of samples', fontsize=15)
ylabel('time (seconds)', fontsize=15)
xscale('log', basex=2)
yscale('log')
xlim(min(sample_counts_full), max(sample_counts_full))
ylim(min(uwf_times_p), max([uwf_times[-1], wf_times[-1], wnf_times[-1]]))
tick_params(axis='both', which='major', labelsize=12)
tick_params(axis='both', which='minor', labelsize=12)
savefig('fast extended.png')
for ac, name in legend_acronyms:
print("%s\t: %s" % (ac, name))
n_upper_tri = lambda n: max(n * (n - 1) / 2.0, 1)  # pairwise comparisons in the upper triangle: n*(n-1)/2
time_per_calc = lambda times, counts: [(t / n_upper_tri(c)) for t, c in zip(times, counts)]
fig = figure(figsize=(6,6))
plot(sample_counts_full, time_per_calc(uwf_times, sample_counts_full), '--', color='red')
plot(sample_counts_full, time_per_calc(wf_times, sample_counts_full), '--', color='orange')
plot(sample_counts_full, time_per_calc(wnf_times, sample_counts_full), '--', color='black')
plot(sample_counts_full, time_per_calc(uwf_times_p, sample_counts_full), color='red')
plot(sample_counts_full, time_per_calc(wf_times_p, sample_counts_full), color='orange')
plot(sample_counts_full, time_per_calc(wnf_times_p, sample_counts_full), color='black')
legend_acronyms = [
('fu', 'unweighted fast unifrac'),
('fw', 'weighted fast unifrac'),
('fwn', 'weighted normalized fast unifrac'),
('fu-p', 'unweighted fast unifrac pruned tree'),
('fw-p', 'weighted fast unifrac pruned tree'),
('fwn-p', 'weighted normalized fast unifrac pruned tree')
]
legend([i[0] for i in legend_acronyms], loc=2)
title("pw_distances scaling, fast unifrac extended")
xlabel('number of samples', fontsize=15)
ylabel('time (seconds) per pairwise calc', fontsize=15)
xscale('log', basex=2)
yscale('log')
xlim(min(sample_counts_full), max(sample_counts_full))
#ylim(min(uwf_times_p), max([uwf_times[-1], wf_times[-1], wnf_times[-1]]))
tick_params(axis='both', which='major', labelsize=12)
tick_params(axis='both', which='minor', labelsize=12)
savefig('fast extended per calc.png')
for ac, name in legend_acronyms:
print("%s\t: %s" % (ac, name))
"""
Explanation: Extend to larger sample counts for fast unifrac
End of explanation
"""
|
gaufung/Data_Analytics_Learning_Note | python-statatics-tutorial/basic-theme/python-language/Function.ipynb | mit | bigx = 10
def double_times(x = bigx):
return x * 2
bigx = 1000
double_times()
"""
Explanation: Functions
1 Default arguments
A default argument value is evaluated once, when the function is defined, not each time the function is called.
End of explanation
"""
def foo(values, x=[]):
for value in values:
x.append(value)
return x
foo([0,1,2])
foo([4,5])
def foo_fix(values, x=[]):
if len(x) != 0:
x = []
for value in values:
x.append(value)
return x
foo_fix([0,1,2])
foo_fix([4,5])
"""
Explanation: When the default argument is a mutable collection (a list or a dictionary), the same object is shared across calls, so every call that mutates it changes the result of later calls.
End of explanation
"""
x = 5
def set_x(y):
x = y
print 'inner x is {}'.format(x)
set_x(10)
print 'global x is {}'.format(x)
"""
Explanation: 2 The global statement
End of explanation
"""
def set_global_x(y):
global x
x = y
print 'global x is {}'.format(x)
set_global_x(10)
print 'global x now is {}'.format(x)
"""
Explanation: x = 5 defines a global variable. Inside set_x, however, the assignment creates a new local variable x, so the global x is left unchanged.
End of explanation
"""
def fib_recursive(n):
if n == 0 or n == 1:
return n
else:
return fib_recursive(n-1) + fib_recursive(n-2)
fib_recursive(10)
"""
Explanation: Adding the global keyword makes the assignment inside the function change the global variable x.
3 Exercise
Fibonacci sequence
$F_{n+1}=F_{n}+F_{n-1}$ where $F_{0}=0,F_{1}=1,F_{2}=1,F_{3}=2 \cdots$
Recursive version
The running time is exponential, $T(n)=O(2^n)$, since each call spawns two further recursive calls.
End of explanation
"""
def fib_iterator(n):
g = 0
h = 1
i = 0
while i < n:
h = g + h
g = h - g
i += 1
return g
fib_iterator(10)
"""
Explanation: Iterative version
The time complexity is $T(n)=O(n)$.
End of explanation
"""
def fib_iter(n):
g = 0
h = 1
i = 0
while i < n:
h = g + h
g = h -g
i += 1
yield g
for value in fib_iter(10):
print value,
"""
Explanation: Generator version
The yield keyword turns the function into a generator, producing the sequence lazily.
End of explanation
"""
import numpy as np
a = np.array([[1,1],[1,0]])
def pow_n(n):
if n == 1:
return a
elif n % 2 == 0:
half = pow_n(n/2)
return half.dot(half)
else:
half = pow_n((n-1)/2)
return a.dot(half).dot(half)
def fib_pow(n):
a_n = pow_n(n)
u_0 = np.array([1,0])
return a_n.dot(u_0)[1]
fib_pow(10)
"""
Explanation: Matrix exponentiation
$$\begin{bmatrix}F_{n+1}\\F_{n}\end{bmatrix}=\begin{bmatrix}1&1\\1&0\end{bmatrix}\begin{bmatrix}F_{n}\\F_{n-1}\end{bmatrix}$$
Let $u_{n+1}=Au_{n}$, where $u_{n+1}=\begin{bmatrix}F_{n+1}\\F_{n}\end{bmatrix}$.
Iterating the matrix gives
$u_{n}=A^{n}u_{0}$, where $u_{0}=\begin{bmatrix}1\\0\end{bmatrix}$. $A^{n}$ can be computed as $(A^{n/2})^{2}$, so the whole algorithm runs in $O(\log n)$ time.
End of explanation
"""
def quick_sort(array):
if len(array) < 2:
return array
else:
pivot = array[0]
left = [item for item in array[1:] if item < pivot]
right = [item for item in array[1:] if item >= pivot]
return quick_sort(left)+[pivot]+quick_sort(right)
quick_sort([10,11,3,21,9,22])
"""
Explanation: Quick Sort
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/bnu/cmip6/models/sandbox-1/atmos.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-1', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: BNU
Source ID: SANDBOX-1
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
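The template comments distinguish quoted values — DOC.set_value("value") for STRING and ENUM properties — from bare Python literals — DOC.set_value(value) for INTEGER properties such as this closure order, and for BOOLEAN properties. A small sketch of that quoting convention; format_set_value is a hypothetical helper for illustration only, not part of pyesdoc:

```python
# Hypothetical sketch of the quoting convention used throughout these
# template cells: STRING/ENUM values are quoted, INTEGER/BOOLEAN values
# are passed as bare literals.
def format_set_value(value):
    """Render a DOC.set_value(...) call the way the template comments do."""
    # bool is checked together with int: both are written unquoted.
    if isinstance(value, (bool, int)):
        return "DOC.set_value(%r)" % value
    return 'DOC.set_value("%s")' % value

print(format_set_value(2))                 # INTEGER, e.g. a closure order
print(format_set_value(True))              # BOOLEAN, e.g. a counter-gradient flag
print(format_set_value("Mellor-Yamada"))   # ENUM, quoted
```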
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
laurajchang/NPTFit | examples/Example7_Galactic_Center_nonPoissonian.ipynb | mit
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
from NPTFit import nptfit # module for performing scan
from NPTFit import create_mask as cm # module for creating the mask
from NPTFit import dnds_analysis # module for analysing the output
from NPTFit import psf_correction as pc # module for determining the PSF correction
"""
Explanation: Example 7: Application of NPTFit to the Galactic Center Excess
It was found in Example 3 that a non-zero value of the GCE template is preferred by a fit in the galactic center. In this example we will test the point source interpretation of this excess by including, in addition to the Poissonian templates considered, non-Poissonian point source templates of various morphologies.
Here we simply perform the run; a detailed analysis of the results can be found in the next example. A Python script version of this notebook is available as Example7_Galactic_Center_nonPoissonian.py, which can be run faster on multiple processors with MPI (see the example in Example7_Galactic_Center_Batch.batch).
NB: Even with nlive=100, this notebook takes roughly one hour to complete. This highlights that for realistic non-Poissonian runs, running on multiple cores becomes necessary. We show an explicit application of this in Example 9.
NB: This example makes use of the Fermi Data, which needs to already be installed. See Example 1 for details.
End of explanation
"""
n = nptfit.NPTF(tag='GCE_Example')
fermi_data = np.load('fermi_data/fermidata_counts.npy')
fermi_exposure = np.load('fermi_data/fermidata_exposure.npy')
n.load_data(fermi_data, fermi_exposure)
pscmask=np.array(np.load('fermi_data/fermidata_pscmask.npy'), dtype=bool)
analysis_mask = cm.make_mask_total(band_mask = True, band_mask_range = 2,
mask_ring = True, inner = 0, outer = 30,
custom_mask = pscmask)
n.load_mask(analysis_mask)
dif = np.load('fermi_data/template_dif.npy')
iso = np.load('fermi_data/template_iso.npy')
bub = np.load('fermi_data/template_bub.npy')
gce = np.load('fermi_data/template_gce.npy')
dsk = np.load('fermi_data/template_dsk.npy')
n.add_template(dif, 'dif')
n.add_template(iso, 'iso')
n.add_template(bub, 'bub')
n.add_template(gce, 'gce')
n.add_template(dsk, 'dsk')
"""
Explanation: Step 1: Set up the Scan
We first need to
1. Set up an instance of NPTF from npfit.py
2. Load in the data and exposure maps
3. Set up and load the mask used for the scan
4. Load in the spatial templates
These are done identically to Example 3, and we refer to that notebook for details.
End of explanation
"""
n.add_poiss_model('dif', '$A_\mathrm{dif}$', fixed=True, fixed_norm=14.67)
n.add_poiss_model('iso', '$A_\mathrm{iso}$', [0,2], False)
n.add_poiss_model('gce', '$A_\mathrm{gce}$', [0,2], False)
n.add_poiss_model('bub', '$A_\mathrm{bub}$', [0,2], False)
"""
Explanation: Step 2: Add Models
End of explanation
"""
n.add_non_poiss_model('gce',
['$A_\mathrm{gce}^\mathrm{ps}$','$n_1^\mathrm{gce}$','$n_2^\mathrm{gce}$','$S_b^{(1), \mathrm{gce}}$'],
[[-6,1],[2.05,30],[-2,1.95],[0.05,40]],
[True,False,False,False])
n.add_non_poiss_model('dsk',
['$A_\mathrm{dsk}^\mathrm{ps}$','$n_1^\mathrm{dsk}$','$n_2^\mathrm{dsk}$','$S_b^{(1), \mathrm{dsk}}$'],
[[-6,1],[2.05,30],[-2,1.95],[0.05,40]],
[True,False,False,False])
"""
Explanation: This time we add a non-Poissonian template correlated with the Galactic Center Excess and also one spatially distributed as a thin disk. The latter is designed to account for the unresolved point sources attributed to the disk of the Milky Way (known sources in the 3FGL are masked).
End of explanation
"""
pc_inst = pc.PSFCorrection(psf_sigma_deg=0.1812)
f_ary, df_rho_div_f_ary = pc_inst.f_ary, pc_inst.df_rho_div_f_ary
n.configure_for_scan(f_ary, df_rho_div_f_ary, nexp=1)
"""
Explanation: Step 3: Configure Scan with PSF correction
End of explanation
"""
n.perform_scan(nlive=100)
"""
Explanation: Step 4: Perform the Scan
As noted above, we take a small value of nlive simply to ensure the run finishes in a reasonable time on a single core.
End of explanation
"""
from IPython.display import Image
Image(url = "https://imgs.xkcd.com/comics/compiling.png")
"""
Explanation: This can take up to an hour to run. The output of this run will be analyzed in detail in the next example.
End of explanation
"""
2php/CodeToolKit | 9.caffe-ssd/examples/inceptionv3.ipynb | mit
import numpy as np
import matplotlib.pyplot as plt
# display plots in this notebook
%matplotlib inline
# set display defaults
plt.rcParams['figure.figsize'] = (10, 10) # large images
plt.rcParams['image.interpolation'] = 'nearest' # don't interpolate: show square pixels
plt.rcParams['image.cmap'] = 'gray' # use grayscale output rather than a (potentially misleading) color heatmap
"""
Explanation: Classification with Inception-V3
In this example we'll classify an image with Inception-V3.
We'll dig into the model to inspect features and the output.
1. Setup
First, set up Python, numpy, and matplotlib.
End of explanation
"""
# The caffe module needs to be on the Python path;
# we'll add it here explicitly.
import sys
caffe_root = '../' # this file should be run from {caffe_root}/examples (otherwise change this line)
sys.path.insert(0, caffe_root + 'python')
import caffe
# If you get "No module named _caffe", either you have not built pycaffe or you have the wrong path.
"""
Explanation: Load caffe.
End of explanation
"""
import os
if os.path.isfile(caffe_root + 'models/inception_v3/inception_v3.caffemodel'):
print 'Inception v3 found.'
else:
sys.exit()
"""
Explanation: If needed, download the reference model ("CaffeNet", a variant of AlexNet).
End of explanation
"""
caffe.set_device(0) # if we have multiple GPUs, pick the first one
caffe.set_mode_gpu()
model_def = caffe_root + 'models/inception_v3/inception_v3_deploy.prototxt'
model_weights = caffe_root + 'models/inception_v3/inception_v3.caffemodel'
net = caffe.Net(model_def, # defines the structure of the model
model_weights, # contains the trained weights
caffe.TEST) # use test mode (e.g., don't perform dropout)
"""
Explanation: 2. Load net and set up input preprocessing
Set Caffe to GPU mode and load the net from disk.
End of explanation
"""
import caffe
mu = np.array([128.0, 128.0, 128.0])
# create transformer for the input called 'data'
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2,0,1)) # move image channels to outermost dimension
transformer.set_mean('data', mu) # subtract the dataset-mean value in each channel
transformer.set_raw_scale('data', 255) # rescale from [0, 1] to [0, 255]
transformer.set_input_scale('data', 1/128.0)
transformer.set_channel_swap('data', (2,1,0)) # swap channels from RGB to BGR
"""
Explanation: Set up input preprocessing. (We'll use Caffe's caffe.io.Transformer to do this, but this step is independent of other parts of Caffe, so any custom preprocessing code may be used).
This Inception-V3 model is configured to take images in BGR format. Values are expected to start in the range [0, 255], then have the mean value of 128 subtracted from each channel and finally be scaled by 1/128. In addition, the channel dimension is expected as the first (outermost) dimension.
As matplotlib will load images with values in the range [0, 1] in RGB format with the channel as the innermost dimension, we are arranging for the needed transformations here.
End of explanation
"""
# set the size of the input (we can skip this if we're happy
# with the default; we can also change it later, e.g., for different batch sizes)
net.blobs['data'].reshape(1, # batch size
3, # 3-channel (BGR) images
299, 299) # image size is 299x299
"""
Explanation: 3. Classification
Now we're ready to perform classification. We'll classify a single image, so the batch size is set to 1.
End of explanation
"""
image = caffe.io.load_image(caffe_root + 'examples/images/cropped_panda.jpg')
transformed_image = transformer.preprocess('data', image)
plt.imshow(image)
"""
Explanation: Load an image (that comes with Caffe) and perform the preprocessing we've set up.
End of explanation
"""
# copy the image data into the memory allocated for the net
net.blobs['data'].data[...] = transformed_image
### perform classification
output = net.forward()
output_prob = output['softmax_prob'][0] # the output probability vector for the first image in the batch
print output_prob.shape
print 'predicted class is:', output_prob.argmax()
"""
Explanation: Adorable! Let's classify it!
End of explanation
"""
from caffe.model_libs import *
from google.protobuf import text_format
# load ImageNet labels
labelmap_file = caffe_root + 'data/ILSVRC2016/labelmap_ilsvrc_clsloc.prototxt'
file = open(labelmap_file, 'r')
labelmap = caffe_pb2.LabelMap()
text_format.Merge(str(file.read()), labelmap)
def get_labelname(labels):
num_labels = len(labelmap.item)
labelnames = []
if type(labels) is not list:
labels = [labels]
for label in labels:
found = False
for i in xrange(0, num_labels):
if label == labelmap.item[i].label:
found = True
labelnames.append(labelmap.item[i].display_name)
break
assert found == True
return labelnames
print 'output label:', get_labelname(output_prob.argmax())[0]
"""
Explanation: The net gives us a vector of probabilities; the most probable class was the 169th one. But is that correct? Let's check the ImageNet labels...
End of explanation
"""
# sort top five predictions from softmax output
top_inds = output_prob.argsort()[::-1][:5] # reverse sort and take five largest items
print 'probabilities and labels:'
zip(output_prob[top_inds], get_labelname(top_inds.tolist()))
"""
Explanation: "Giant panda" is correct! But let's also look at other top (but less confident predictions).
End of explanation
"""
# for each layer, show the output shape
for layer_name, blob in net.blobs.iteritems():
print layer_name + '\t' + str(blob.data.shape)
"""
Explanation: We see that less confident predictions are sensible.
5. Examining intermediate output
A net is not just a black box; let's take a look at some of the parameters and intermediate activations.
First we'll see how to read out the structure of the net in terms of activation and parameter shapes.
For each layer, let's look at the activation shapes, which typically have the form (batch_size, channel_dim, height, width).
The activations are exposed as an OrderedDict, net.blobs.
End of explanation
"""
for layer_name, param in net.params.iteritems():
if 'bn' in layer_name:
print layer_name + '\t' + str(param[0].data.shape), str(param[1].data.shape), str(param[2].data.shape)
elif 'scale' in layer_name or layer_name == 'softmax':
print layer_name + '\t' + str(param[0].data.shape), str(param[1].data.shape)
else:
print layer_name + '\t' + str(param[0].data.shape)
"""
Explanation: Now look at the parameter shapes. The parameters are exposed as another OrderedDict, net.params. We need to index the resulting values with either [0] for weights or [1] for biases.
The param shapes typically have the form (output_channels, input_channels, filter_height, filter_width) (for the weights) and the 1-dimensional shape (output_channels,) (for the biases).
End of explanation
"""
def vis_square(data):
"""Take an array of shape (n, height, width) or (n, height, width, 3)
and visualize each (height, width) thing in a grid of size approx. sqrt(n) by sqrt(n)"""
# normalize data for display
data = (data - data.min()) / (data.max() - data.min())
# force the number of filters to be square
n = int(np.ceil(np.sqrt(data.shape[0])))
padding = (((0, n ** 2 - data.shape[0]),
(0, 1), (0, 1)) # add some space between filters
+ ((0, 0),) * (data.ndim - 3)) # don't pad the last dimension (if there is one)
data = np.pad(data, padding, mode='constant', constant_values=1) # pad with ones (white)
# tile the filters into an image
data = data.reshape((n, n) + data.shape[1:]).transpose((0, 2, 1, 3) + tuple(range(4, data.ndim + 1)))
data = data.reshape((n * data.shape[1], n * data.shape[3]) + data.shape[4:])
plt.imshow(data); plt.axis('off')
"""
Explanation: Since we're dealing with four-dimensional data here, we'll define a helper function for visualizing sets of rectangular heatmaps.
End of explanation
"""
# the parameters are a list of [weights, biases]
filters = net.params['conv'][0].data
print filters[0, 0]
vis_square(filters.transpose(0, 2, 3, 1))
"""
Explanation: First we'll look at the first layer filters, conv
End of explanation
"""
feat = net.blobs['conv_1'].data[0, :32]
vis_square(feat)
"""
Explanation: The first layer output, conv_1 (responses of the filters above, first 32 only)
End of explanation
"""
feat = net.blobs['conv_3'].data[0]
print feat[7,]
vis_square(feat)
"""
Explanation: An intermediate layer output, conv_3
End of explanation
"""
feat = net.blobs['pool_3'].data[0]
plt.subplot(2, 1, 1)
plt.plot(feat.flat)
plt.subplot(2, 1, 2)
_ = plt.hist(feat.flat[feat.flat > 0], bins=100)
"""
Explanation: The pooled features, pool_3
We show the output values and the histogram of the positive values
End of explanation
"""
feat = net.blobs['softmax_prob'].data[0]
plt.figure(figsize=(15, 3))
plt.plot(feat.flat)
"""
Explanation: The final probability output, softmax_prob
End of explanation
"""
|
spectralDNS/shenfun | docs/source/fasttransforms.ipynb | bsd-2-clause | from shenfun import *
from mpi4py_fft import fftw
"""
Explanation: <!-- File automatically generated using DocOnce (https://github.com/doconce/doconce/):
doconce format ipynb fasttransforms.do.txt -->
Demo - Some fast transforms
Mikael Mortensen (email: mikaem@math.uio.no), Department of Mathematics, University of Oslo.
Date: May 27, 2021
Summary. This demo will show how to compute fast forward transforms for the three
different Dirichlet bases that are implemented for Chebyshev
polynomials in Shenfun.
Forward and backward transforms
A function $u(x)$ can be approximated in a finite global spectral
expansion $u_N(x)$ as
<!-- Equation labels as ordinary links -->
<div id="eq:expansion"></div>
$$
\label{eq:expansion} \tag{1}
u_N(x) = \sum_{k=0}^{N-1} \hat{u}_k \phi_k(x), \quad \forall \, x \, \in [-1, 1],
$$
where $\phi_k(x)$ are the basis functions and $\boldsymbol{\hat{u}} = \{\hat{u}_k\}_{k=0}^{N-1}$
are the expansion coefficients. The function $u_N(x)$ is continuous
on the interval domain $[-1, 1]$. The span of the basis functions
$V_N = \text{span} \{\phi_k\}_{k=0}^{N-1}$ represents a function space.
Associated with this function space is a set of quadrature points
$\{x_k\}_{k=0}^{N-1}$ that, along with quadrature weights $\{\omega_k\}_{k=0}^{N-1}$, can be used
for efficient integration. We can also evaluate the function $u_N(x)$ at
these quadrature points to get the sequence
$\boldsymbol{u} = \{u_N(x_k)\}_{k=0}^{N-1}$. If $\boldsymbol{\hat{u}}=\{\hat{u}_k\}_{k=0}^{N-1}$ are known,
then $\boldsymbol{u}$ can be evaluated directly from
Eq. (1)
<!-- Equation labels as ordinary links -->
<div id="eq:expansionQ"></div>
$$
\label{eq:expansionQ} \tag{2}
u_N(x_j) = \sum_{k=0}^{N-1} \hat{u}_k \phi_k(x_j), \quad \forall \, j=0,1, \ldots, N-1.
$$
This would correspond to a backward transform according to
the Shenfun terminology. A direct evaluation of the backward
(2) transform takes $\mathcal{O}(N^2)$
operations since it requires a double sum (over both $j$
and $k$). A fast transform is
a transform that can be computed in $\mathcal{O}(N \log N)$ operations.
This is what the Fast Fourier Transform (FFT) does. It computes a double
sum, like (2), in $\mathcal{O}(N \log N)$ operations.
The other way around, computing $\{\hat{u}_k\}_{k=0}^{N-1}$ from the
known $\{u_N(x_k)\}_{k=0}^{N-1}$ corresponds to a forward transform.
The forward transform is computed using a projection of $u$
into $V_N$, which is formulated as: find $u_N \in V_N$ such that
<!-- Equation labels as ordinary links -->
<div id="eq:projection"></div>
$$
\label{eq:projection} \tag{3}
(u_N-u, v)_{\omega^{\sigma}} = 0, \quad \forall \, v \in V_{N},
$$
where $(a, b)_{\omega^{\sigma}} = \int_{I} a b \omega^{\sigma} dx$ is the
inner product in $L^2_{\omega^{\sigma}}(I)$, and $\omega^{\sigma}(x)=(1-x^2)^{\sigma}$ is a weight function.
For Chebyshev polynomials the weight function is usually $\omega^{-1/2}=(1-x^2)^{-1/2}$.
Inserting for $u_N$ and $v=\phi_k$, we get
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
\sum_{j=0}^{N-1}(\phi_j, \phi_k)_{\omega^{\sigma}} \hat{u}_{j} = (u, \phi_k)_{\omega^{\sigma}},
\label{_auto1} \tag{4}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>
$$
\begin{equation}
B \boldsymbol{\hat{u}} = \boldsymbol{\tilde{u}},
\label{_auto2} \tag{5}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>
$$
\begin{equation}
\boldsymbol{\hat{u}} = B^{-1} \boldsymbol{\tilde{u}},
\label{_auto3} \tag{6}
\end{equation}
$$
where
$\boldsymbol{\tilde{u}} = \{(u, \phi_k)_{\omega^{\sigma}}\}_{k=0}^{N-1}$ and the mass matrix
$B = (b_{kj})_{k,j=0}^{N-1}$, with $b_{kj}=(\phi_j, \phi_k)_{\omega^{\sigma}}$.
Note that the forward transform requires both an inner product
$\boldsymbol{\tilde{u}}$ and a matrix inversion. By a fast forward transform
we mean a transform that can be computed in $\mathcal{O}(N \log N)$
operations. If $B$ is a diagonal or banded matrix, the matrix inversion costs $\mathcal{O}(N)$,
and the limiting factor is then the inner product. Like for the backward transform,
the inner product, computed with quadrature, is a double sum
$$
(u, \phi_k)_{\omega^{\sigma}} = \sum_{j=0}^{N-1} u(x_j) \phi_k(x_j) \omega_j, \quad \forall \, k = 0, 1, \ldots, N-1,
$$
where $\{\omega_j\}_{j=0}^{N-1}$ are the quadrature weights.
A naive implementation of the inner product
takes $\mathcal{O}(N^2)$ operations. However,
for Chebyshev polynomials we can compute the double loop with
fast $\mathcal{O}(N \log N)$ discrete sine or cosine transforms,
that are versions of the FFT. To see this, assume that the basis functions are $\phi_k(x) =T_k(x)$, where
$T_k(x)$ is the $k$'th Chebyshev polynomial of the first kind,
and the weight function is $\omega^{-1/2}$.
We then choose Gauss-Chebyshev points $x_j = \cos(\theta_j)$,
where $\theta_j=\pi (2j+1)/(2N)$, and the associated quadrature weights
that are constant $\omega_j = \pi/N$. The Chebyshev polynomials evaluated
on the quadrature points can now
alternatively be written as $T_k(x_j) = \cos(k \theta_j)$,
and the inner product becomes
$$
(u, T_k)_{\omega^{-1/2}} = \sum_{j=0}^{N-1} u(x_j) \cos(k \theta_j) \pi/N, \quad \forall \, k = 0, 1, \ldots, N-1.
$$
From the FFTW documentation
we recognise this sum as half a DCT-II (the FFTW DCT-II has a factor
2 in front of the sum) of $\boldsymbol{u}\pi/N$. Hence, we can compute the inner product as
$$
(u, T_k)_{\omega^{-1/2}} = \frac{\pi}{2N} \text{dct}^{II}(\boldsymbol{u})_k, \quad k = 0, 1, \ldots, N-1.
$$
Dirichlet bases
The basis function $T_k$ satisfies $T_k(\pm 1) = (\pm 1)^k$ at the
boundaries of the domain, and the space $S_N=\text{span}{T_k}_{k=0}^{N-1}$,
of dimension $N$,
is thus not associated with any specific set of boundary conditions.
A functionspace for homogeneous Dirichlet boundary conditions is
given as $V_N=\{v\in S_N \,|\, v(\pm 1)=0 \}$. Because of the two restrictions
the space has dimension $N-2$.
There are several different choices of basis functions
for $V_N$.
The most interesting we name $\phi_k^n$, for integer $n$, and
define them as
<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>
$$
\begin{equation}
\phi^n_k = \omega T^{(n)}_{k+n} = (1-x^2) T^{(n)}_{k+n},
\label{_auto4} \tag{7}
\end{equation}
$$
where $T^{(n)}_{k+n}$ is the $n$'th derivative of $T_{k+n}$. We have
for any integer $n$ that $V_N=\text{span}\{\phi^n_k\}_{k=0}^{N-3}$, and an
expansion in any of these basis functions is
<!-- Equation labels as ordinary links -->
<div id="eq:uNgeneric"></div>
$$
\begin{equation}
\label{eq:uNgeneric} \tag{8}
u_N = \sum_{k=0}^{N-3} \hat{u}^n_k \phi^n_k.
\end{equation}
$$
We can find the sequence $\{\hat{u}^n_{k}\}_{k=0}^{N-3}$ for any $n$
using a projection into the space $V_N$. The projection is computed
by using Eq. (8) and $v=\phi^n_k$ in
Eq. (3)
<!-- Equation labels as ordinary links -->
<div id="eq:projortho"></div>
$$
\begin{equation}
\label{eq:projortho} \tag{9}
\sum_{j=0}^{N-3} ( T^{(n)}_{j+n}, T^{(n)}_{k+n})_{\omega^{\sigma+2}} \hat{u}^{n}_j = (u, T^{(n)}_{k+n})_{\omega^{\sigma+1}}.
\end{equation}
$$
Now how can this projection be computed as efficiently as possible?
The Chebyshev polynomials and their derivatives are known to satisfy
the following orthogonality relation
<!-- Equation labels as ordinary links -->
<div id="eq:orthon"></div>
$$
\begin{equation}
\label{eq:orthon} \tag{10}
(T^{(n)}_j, T^{(n)}_k)_{\omega^{n-1/2}} = \alpha^{n}_k \delta_{kj}, \quad \text{for}\, n \ge 0,
\end{equation}
$$
where $\delta_{kj}$ is the Kronecker delta function and
<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>
$$
\begin{equation}
\alpha^n_k = \frac{c_{k+n}\pi k (k+n-1)!}{2(k-n)!},
\label{_auto5} \tag{11}
\end{equation}
$$
where $c_0=2$ and $c_k=1$ for $k>0$. This can be used in
computing (9), because we just
need to choose the $\sigma$ that leads to a diagonal mass matrix.
For $n=(0, 1, 2)$ this will be $\sigma=-5/2, -3/2$ and $-1/2$,
respectively. So, choosing $\sigma=-5/2, -3/2$ and $-1/2$
for $n=0, 1$ and 2, respectively, will lead to a diagonal
mass matrix $( T^{(n)}_{j+n}, T^{(n)}_{k+n})_{\omega^{\sigma+2}}$.
Using these $\sigma$'s we can invert the diagonal mass matrices
in Eq. (9) to get
<!-- Equation labels as ordinary links -->
<div id="_auto6"></div>
$$
\begin{equation}
\hat{u}^n_k = \frac{1}{\alpha^n_{k+n}}(u, T^{(n)}_{k+n})_{\omega^{\sigma+1}}, \quad k=0, 1, \ldots, N-3, \text{ for } n \in (0, 1, 2).
\label{_auto6} \tag{12}
\end{equation}
$$
Using now quadrature, $1-x^2_i=\sin^2 \theta_i$ and the
fast transforms $(u, T_k)_{\omega^{-1/2}} = \pi/2/N \text{dct}^{II}(\boldsymbol{u})_k$
and $(u, U_k)_{\omega^{-1/2}} = \pi/2/N \text{dst}^{II}(\boldsymbol{u}/\sin \boldsymbol{\theta})_k$,
where $\boldsymbol{u}/\sin \boldsymbol{\theta}$ implies element-wise division,
we get
<!-- Equation labels as ordinary links -->
<div id="eq:fast1"></div>
$$
\begin{equation}
\hat{u}^0_k = \frac{1}{c_k N} \text{dct}^{II}(\boldsymbol{u}/\sin^2 \boldsymbol{\theta})_k, \quad k = 0, 1, \ldots, N-3, \label{eq:fast1} \tag{13}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="eq:fast2"></div>
$$
\begin{equation}
\hat{u}^1_k = \frac{1}{(k+1)N}\text{dst}^{II}(\boldsymbol{u}/\sin \boldsymbol{\theta})_k, \quad k = 0, 1, \ldots, N-3, \label{eq:fast2} \tag{14}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="eq:fast3"></div>
$$
\begin{equation}
\hat{u}^2_k = \frac{1}{2(k+2)}\left(\hat{u}^1_k - \hat{u}^1_{k+2} \right), \quad k=0, 1, \ldots, N-3. \label{eq:fast3} \tag{15}
\end{equation}
$$
The last one requires some work, using the identity
$\phi^2_k=(1-x^2)T''_{k+2}=0.5(k+2)(k+3)(U_k - (k+1)/(k+3)U_{k+2})$.
Verification
To validate all the fast methods we compute the projection first regularly
using the Shenfun function project,
which is using $\sigma=-1/2$, and then the fast methods above. The two
projections should be mathematically the same, but they will not give bitwise identical results.
In general, the fast transforms above should be both faster and more
accurate, because they only take a discrete transform and merely a diagonal
mass matrix inversion.
We start the implementation by importing necessary modules from Shenfun
and mpi4py-fft
End of explanation
"""
N = 20
D0 = FunctionSpace(N, 'C', bc=(0, 0), basis='Heinrichs')
"""
Explanation: The three bases ${\phi^n_k}_{k=0}^{N-3}$ are implemented
with slightly different scaling in shenfun.
The first, with $n=0$, is obtained with no special scaling using
End of explanation
"""
D1 = FunctionSpace(N, 'C', bc=(0, 0)) # this is the default basis
"""
Explanation: The second basis is implemented in Shenfun as $\phi_k = \frac{2}{k+1}\phi^1_k$,
which can be simplified as
<!-- Equation labels as ordinary links -->
<div id="eq:ft:shen"></div>
$$
\label{eq:ft:shen} \tag{16}
\phi_k(x) = T_k-T_{k+2}, \quad k=0,1, \ldots, N-3,
$$
and implemented as
End of explanation
"""
D2 = FunctionSpace(N, 'U', bc=(0, 0), quad='GC') # quad='GU' is default for U
"""
Explanation: Because of the scaling the expansion coefficients for $\phi_k$ are
$\hat{u}^{\phi}_k=\frac{k+1}{2}\hat{u}^1_k$. Using (14) we get
$$
\hat{u}^{\phi}_k = \frac{1}{2N}\text{dst}^{II}(\boldsymbol{u}/\sin \boldsymbol{\theta})_k, \quad k = 0, 1, \ldots, N-3.
$$
The third basis is also scaled and implemented in Shenfun as $\psi_k = \frac{2}{(k+3)(k+2)}\phi^2_k$,
which can be simplified using Chebyshev polynomials of the second
kind $U_k$
<!-- Equation labels as ordinary links -->
<div id="eq:ft:dirichletU"></div>
$$
\label{eq:ft:dirichletU} \tag{17}
\psi_k(x) = U_k-\frac{k+1}{k+3}U_{k+2}, \quad k=0,1, \ldots, N-3.
$$
We get the basis using
End of explanation
"""
f = Function(D0, buffer=np.random.random(N))
f[-2:] = 0
fb = f.backward().copy()
"""
Explanation: and the expansion coefficients are found as
$\hat{u}^{\psi}_k = \frac{(k+3)(k+2)}{2} \hat{u}^2_k$.
For verification of all the fast transforms we first create a vector
consisting of random expansion coefficients, and then transform
it backwards to physical space
End of explanation
"""
u0 = project(fb, D0)
u1 = project(fb, D1)
u2 = project(fb, D2)
"""
Explanation: Next, we perform the regular projections into the three spaces
D0, D1 and D2, using the default inner product
in $L^2_{\omega^{-1/2}}$ for D0 and D1, whereas $L^2_{\omega^{1/2}}$
is used for D2. Now u0, u1 and u2 will be the
three solution vectors
$\boldsymbol{\hat{u}}^{\varphi}$, $\boldsymbol{\hat{u}}^{\phi}$
and $\boldsymbol{\hat{u}}^{\psi}$, respectively.
End of explanation
"""
theta = np.pi*(2*np.arange(N)+1)/(2*N)
# Test for n=0
dct = fftw.dctn(fb.copy(), type=2)
ck = np.ones(N); ck[0] = 2
d0 = dct(fb/np.sin(theta)**2)/(ck*N)
assert np.linalg.norm(d0-u0) < 1e-8, np.linalg.norm(d0-u0)
# Test for n=1
dst = fftw.dstn(fb.copy(), type=2)
d1 = dst(fb/np.sin(theta))/(2*N)
assert np.linalg.norm(d1-u1) < 1e-8
# Test for n=2
ut = d1
k = np.arange(N)
d2 = Function(D2)
d2[:-2] = (k[:-2]+3)/2/(k[:-2]+1)*ut[:-2]
d2[:-2] = d2[:-2] - 0.5*ut[2:]
assert np.linalg.norm(d2-u2) < 1e-8
"""
Explanation: Now compute the fast transforms and assert that they are equal to u0, u1 and u2
End of explanation
"""
%timeit project(fb, D1)
%timeit dst(fb/np.sin(theta))/(2*N)
"""
Explanation: That's it! If you make it to here with no errors, then the three tests pass, and the fast transforms are equal to the slow ones, at least within given precision.
Let's try some timings
End of explanation
"""
dd = np.sin(theta)*2*N
%timeit dst(fb/dd)
"""
Explanation: We can precompute the sine term, because it does not change
End of explanation
"""
%timeit dct(fb/np.sin(theta)**2)/(ck*N)
"""
Explanation: The other two transforms are approximately the same speed.
End of explanation
"""
|
the-deep-learners/TensorFlow-LiveLessons | notebooks/transfer_learning_in_keras.ipynb | mit | import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
"""
Explanation: Transfer Learning in Keras
In this notebook, we'll cover how to load a pre-trained model (in this case, VGGNet19) and finetune it for a new task: detecting hot dogs.
End of explanation
"""
from keras.applications.vgg19 import VGG19
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint
"""
Explanation: Load dependencies
End of explanation
"""
vgg19 = VGG19(include_top=False,
weights='imagenet',
input_shape=(224,224,3),
pooling=None)
"""
Explanation: Load the pre-trained VGG19 model
End of explanation
"""
vgg19.layers
vgg19.summary()
for layer in vgg19.layers:
layer.trainable = False
"""
Explanation: Freeze all the layers in the base VGGNet19 model
End of explanation
"""
# Instantiate the sequential model and add the VGG19 model:
model = Sequential()
model.add(vgg19)
# Add the custom layers atop the VGG19 model:
model.add(Flatten(name='flattened'))
model.add(Dropout(0.5, name='dropout'))
model.add(Dense(2, activation='softmax', name='predictions'))
model.summary()
"""
Explanation: Add custom classification layers
End of explanation
"""
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
output_dir = 'model_output/transfer_VGG'
modelcheckpoint = ModelCheckpoint(filepath=output_dir+
"/weights.{epoch:02d}.hdf5")
if not os.path.exists(output_dir):
os.makedirs(output_dir)
"""
Explanation: Compile the model for training
End of explanation
"""
# Instantiate two image generator classes:
train_datagen = ImageDataGenerator(
rescale=1.0/255,
data_format='channels_last',
rotation_range=30,
horizontal_flip=True,
fill_mode='reflect')
valid_datagen = ImageDataGenerator(
rescale=1.0/255,
data_format='channels_last')
# Define the batch size:
batch_size=32
# Define the train and validation generators:
train_generator = train_datagen.flow_from_directory(
directory='./hot-dog-not-hot-dog/train',
target_size=(224, 224),
classes=['hot_dog','not_hot_dog'],
class_mode='categorical',
batch_size=batch_size,
shuffle=True,
seed=42)
valid_generator = valid_datagen.flow_from_directory(
directory='./hot-dog-not-hot-dog/test',
target_size=(224, 224),
classes=['hot_dog','not_hot_dog'],
class_mode='categorical',
batch_size=batch_size,
shuffle=True,
seed=42)
"""
Explanation: Prepare the data for training
The dataset is available for download here. You should download the zipfile and extract the contents into a folder called hot-dog-not-hot-dog in the notebooks directory.
End of explanation
"""
model.fit_generator(train_generator, steps_per_epoch=15,
epochs=16, validation_data=valid_generator,
validation_steps=15, callbacks=[modelcheckpoint])
model.load_weights('model_output/transfer_VGG/weights.02.hdf5')
"""
Explanation: Train!
End of explanation
"""
|
kikocorreoso/brythonmagic | notebooks/Highcharts (python) tutorial.ipynb | mit | %load_ext brythonmagic
"""
Explanation: First step
In this tutorial we will use Brython, an implementation of Python written in JavaScript and Python, to access the Highcharts JavaScript library and to manage the data used in the charts. To integrate Brython in the IPython notebook we use a notebook extension called brythonmagic that provides a new cell magic, <code style="background-color: cyan;">%%brython</code>, that allows us to write and execute Brython code in the notebook.
Installation of the brythonmagic IPython extension
As stated before, we will use Brython, and brythonmagic so first of all we need to load the extension and the Brython library.
So, let's load the extension:
End of explanation
"""
from brythonmagic import load_brython_dev
load_brython_dev()
"""
Explanation: And the brython js lib:
End of explanation
"""
from brythonmagic import load_js_lib
load_js_lib("https://cdnjs.cloudflare.com/ajax/libs/highcharts/5.0.7/highcharts.js")
"""
Explanation: [It is highly recommended that, at least, you read the brythonmagic docs to understand what it does. It is also recommended to have a quick look at the Brython docs].
Warning
In order to load JavaScript libraries in a safe way you should try to use https instead of http when possible (read more here). If you don't trust the source and/or the source cannot be loaded using https, then you could download the JavaScript library and load it from a local location.
Conventions used in the following tutorial.
In the following tutorial I will try to follow several conventions to try to make it more readable.
Code in cells that are not code cells:
a block of code will appear as follows:
```python
This is a block of code
print("Hello world!")
```
Python/Brython code commented in a line of text will appear as follows, <code style="background-color: cyan;">this is a piece of Python/Brython code inline with the text</code>
Javascript code commented in a line of text will appear as follows, <code style="background-color: yellow;">this is a piece of javascript code inline with the text</code>
Most of new code used in a code cell will be commented in a paragraph starting with <span style="background-color: #90EE90">[NEW CODE]</span>
When the Python and the javascript code is not exactly the same I will try to comment how the code would be in javascript.
What is Highcharts?
Highcharts is an open source, client side JavaScript library for making interactive charts, viewable in nearly any modern web browser. Since it is a client side library, it requires no special server side software or settings — you can use it without even downloading anything!
This tutorial just tries to explain a little of what can be found in the official documentation and to introduce this library to Python users.
The website for Highcharts is located at http://www.highcharts.com/. To begin, we need to download a copy of Highcharts (or, we can directly link to the library — this is what we will do in the present tutorial). You can download the compressed library as a .zip file.
So, before continuing let's load the Highcharts library.
End of explanation
"""
html = """<div id="hc_ex1" style="width: 700px; height: 300px;"></div>"""
"""
Explanation: First example: A simple line chart
First we create some simple HTML code. This HTML code will contain our chart. We will not use complicated HTML code during the tutorial, to keep it simple and to stay focused on 'How to create interactive charts in the browser with Python'. There are a lot of amazing resources for learning HTML and CSS.
End of explanation
"""
%%brython -h html -p
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
config = {
'chart':{
'renderTo': 'hc_ex1'
},
'title': {
'text': 'Monthly Average Temperature',
'x': -20 #center
},
'subtitle': {
'text': 'Source: WorldClimate.com',
'x': -20
},
'xAxis': {
'categories': ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
},
'yAxis': {
'title': {
'text': 'Temperature (°C)'
},
'plotLines': [{
'value': 0,
'width': 1,
'color': '#808080'
}]
},
'tooltip': {
'valueSuffix': '°C'
},
'legend': {
'layout': 'vertical',
'align': 'right',
'verticalAlign': 'middle',
'borderWidth': 0
},
'series': [{
'name': 'Tokyo',
'data': [7.0, 6.9, 9.5, 14.5, 18.2, 21.5, 25.2, 26.5, 23.3, 18.3, 13.9, 9.6]
}]
}
hc(config)
"""
Explanation: Now the interesting part. To make Highcharts available to Brython we need to 'load' the Highchart object/namespace to Brython using <code style="background-color: cyan;">Highcharts = window.Highcharts</code>. We will use the <code style="background-color: cyan;">new</code> method injected by Brython to the Javascript object that would behave similarly as if we were using Javascript constructors, (ie functions used with the Javascript keyword <code style="background-color: yellow;">new</code>).
The code is as follows:
End of explanation
"""
html = """<div id="hc_ex2" style="width: 700px; height: 300px;"></div>"""
%%brython -h html
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
config = {
'chart': {
'renderTo': 'hc_ex2',
'backgroundColor': {
'linearGradient': [0, 0, 500, 500],
'stops': [[0, 'rgb(255, 255, 255)'],
[1, 'rgb(200, 200, 255)']]
},
'borderRadius': 10
},
'title': {
'align': 'left',
'text': 'My dummy title',
'style': { "color": "green", "fontSize": "20px" }
},
'subtitle': {
'align': 'right',
'text': 'Ugly subtitle',
'style': { "color": "orange", "fontSize": "12px" }
},
'legend': {
'backgroundColor': 'green',
'borderColor': 'yellow',
'borderRadius': 10,
'borderWidth': 3,
},
'series': [{
'data': [1,2,3,4],
'type': 'line',
'name': 'Name of the series',
'color': 'orange',
}],
'tooltip': {
'backgroundColor': 'gray',
'borderColor': 'yellow',
'borderRadius': 10,
'borderWidth': 3,
},
'xAxis': {
'categories': ['data'] * 4,
'lineWidth': 5,
'lineColor': 'violet',
'gridLineColor': 'violet',
'gridLineWidth': 3,
'title': {'text': 'X axis title'}
},
'yAxis': {
'lineWidth': 5,
'lineColor': 'blue',
'gridLineColor': 'blue',
'gridLineWidth': 3,
'title': {'text': 'Y axis title'}
},
'credits': {
'text': "Pybonacci rules!",
'href': 'https://twitter.com/pybonacci'
}
}
hc(config)
"""
Explanation: Pretty simple!!
Ok, let's dissect the code in the Brython cell above.
<code style="background-color: cyan;">%%brython -h html -p</code> :
Indicates that the code cell is written using Brython and we use some options for the Brython code cell, -h to use the HTML code defined in the html variable located in the cell above and -p to print the final HTML code generated below the generated chart. In this example, the generated code should be something like the following:
```html
<script id="66406" type="text/python">
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
config = {
'chart':{
'renderTo': 'hc_ex1'
},
'title': {
'text': 'Monthly Average Temperature',
'x': -20 #center
},
'subtitle': {
'text': 'Source: WorldClimate.com',
'x': -20
},
'xAxis': {
'categories': ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
},
'yAxis': {
'title': {
'text': 'Temperature (°C)'
},
'plotLines': [{
'value': 0,
'width': 1,
'color': '#808080'
}]
},
'tooltip': {
'valueSuffix': '°C'
},
'legend': {
'layout': 'vertical',
'align': 'right',
'verticalAlign': 'middle',
'borderWidth': 0
},
'series': [{
'name': 'Tokyo',
'data': [7.0, 6.9, 9.5, 14.5, 18.2, 21.5, 25.2, 26.5, 23.3, 18.3, 13.9, 9.6]
}]
}
hc(config)
</script>
<div id="brython_container_66406"><div id="hc_ex1" style="width: 700px; height: 300px;"></div></div>
<script type="text/javascript">brython({debug:1, static_stdlib_import: false, ipy_id: ["66406"]});</script>
```
the -p option only provides information and it isn't required to run the Brython code cell.
<code style="background-color: cyan;">hc = Highcharts.Chart.new</code>
Here we are getting a reference to the chart constructor, i.e., bringing the Highcharts Chart constructor into the Brython namespace.
<code style="background-color: cyan;">config = { ... }</code>
Here we are defining the options and data to be used in the chart. More on this later.
<code style="background-color: cyan;">hc(config)</code>
We call the hc function with the data and configuration options defined in the config dict.
Configuring Highcharts
We can configure how the chart will be shown: layout, dimensions, axes, titles, legends,... All this information can be managed on each plot, or globally, using a configuration object (a javascript object or, in Python, a dictionary).
In the previous example we have seen the config dict. Let's dedicate some time to understand it. The config dict contains a series of dicts and each of these dicts manage several pieces of the chart:
chart dict (complete api): It contains information related with how the chart is presented (background colors, area plot colors, type of chart to be used, where the chart should be rendered (html element), margins and spacings,...). A more complete example would be something like the following:
python
'chart': {
'renderTo': 'The_id_of_the_html_element',
'backgroundColor': 'a_valid_html_object',
'type': 'spline',
'plotBorderWidth': 1,
'plotBorderColor': '#3F4044',
...
}
colors key: The value is a list of strings containing valid html colors. The default colors in the latest highcharts version are:
python
'colors': ['#7cb5ec', '#434348', '#90ed7d', '#f7a35c', '#8085e9',
'#f15c80', '#e4d354', '#8085e8', '#8d4653', '#91e8e1']
credits dict (complete api): This dict allows you to control the credits label in the chart. By default will be shown the credits in the bottom left area of the chart. To control the credits you can use the following:
python
'credits': {
'enabled': Boolean,
'href': String,
'text': String,
...
}
legend dict (complete api): This dict allows you to control how the legend is shown:
python
'legend': {
'enabled': Boolean,
'align': String,
'backgroundColor': String,
...
}
plotOptions dict (complete api): The plotOptions dict is a wrapper object for config objects for each series type. The config objects for each series can also be overridden for each series item as given in the series array. Configuration options for the series are given in three levels. Options for all series in a chart are given in the plotOptions['series'] dict. Then options for all series of a specific type are given in the plotOptions of that type, for example plotOptions['line']. Next, options for one single series are given in the specific series array:
python
'plotOptions': {
'enabled': Boolean,
'align': String,
'backgroundColor': String,
...
}
title and subtitle dicts (complete api for title and for subtitle): these options control the appearance of the title and subtitle of the chart. The keys are almost identical for both options:
python
'title': {
'align': String,
'text': String,
...
}
tooltip dict (complete api): Options for the tooltip that appears when the user hovers over a series or point:
python
'tooltip': {
'enabled': Boolean,
'backgroundColor': 'a_valid_html_color',
'borderColor': 'a_valid_html_color',
...
}
xAxis and yAxis dicts (complete api for xAxis and for yAxis): The x axis or category axis and the y axis or the value axis. Normally, xAxis is the horizontal axis and yAxis the vertical axis except if the chart is inverted. The keys are pretty similar for both options:
python
'xAxis': {
'min': float or int,
'max': float or int,
'title': {a dict with options},
'lineColor': 'A_valid_html_color',
...
}
series dict (complete api): The actual series to append to the chart. In addition to the possible options, any member of the plotOptions for that specific type of plot can be added to a series individually. For example, even though a general lineWidth is specified in plotOptions['series'], an individual lineWidth can be specified for each series:
data list: It is a list that can be one of the following options (we will come back to this later):
An array of numerical values. In this case, the numerical values will be interpreted as y values, and x values will be automatically calculated, either starting at 0 and incrementing by 1, or from pointStart and pointInterval given in the plotOptions dict. If the axis has categories, these will be used. This option is not available for range series.
An array of arrays with two values. In this case, the first value is the x value and the second is the y value. If the first value is a string, it is applied as the name of the point, and the x value is incremented following the above rules. For range series, the arrays will be interpreted as [x, low, high]. In this cases, the x value can be skipped altogether to make use of pointStart and pointRange.
An array of objects with named values. In this case the objects are point configuration objects. Range series values are given by low and high.
python
'series': {
'data': [* see above],
'name': String,
'type': String,
...
}
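As a plain-Python illustration of the three data formats (no browser needed; the month names and values here are made up):

```python
# Three equivalent ways to express the same four points for a series.
categories = ['Jan', 'Feb', 'Mar', 'Apr']

# 1. Plain y values: x is implied (0, 1, 2, ... or the axis categories).
data_values = [7.0, 6.9, 9.5, 14.5]

# 2. [x, y] pairs: x is given explicitly for each point.
data_pairs = [[0, 7.0], [1, 6.9], [2, 9.5], [3, 14.5]]

# 3. Point objects: each point is a dict of named point options.
data_points = [{'x': i, 'y': y, 'name': categories[i]}
               for i, y in enumerate(data_values)]

# All three describe the same y values.
assert [p[1] for p in data_pairs] == data_values
assert [p['y'] for p in data_points] == data_values
```

Any of the three lists can be assigned to the 'data' key of a series; the third form is the most verbose but lets you attach per-point options such as 'name' or 'color'.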
Let's see a more complete (and ugly) example using several configuration options:
End of explanation
"""
%%brython -s globaloptions
from browser import window
Highcharts = window.Highcharts
global_options = {
'colors': ['rgb(0, 107, 164)', 'rgb(255, 128, 114)',
'rgb(171, 171, 171)', 'rgb(89, 89, 89)',
'rgb(95, 158, 209)', 'rgb(200, 82, 0)',
'rgb(137, 137, 137)', 'rgb(162, 200, 236)',
'rgb(256, 188, 121)', 'rgb(207, 207, 207)'],
'chart':{
'plotBackgroundColor': 'rgb(229, 229, 229)'
},
'credits':{
'enabled': False
},
'legend':{
'align': 'right',
'verticalAlign': 'middle',
'layout': 'vertical',
'borderWidth': 0,
'enabled': True
},
'plotOptions':{
'area': {
'fillOpacity': 0.5,
'marker': {'enabled': False},
},
'arearange': {
'fillOpacity': 0.5,
'marker': {'enabled': False},
},
'areaspline': {
'fillOpacity': 0.5,
'marker': {'enabled': False},
},
'areasplinerange': {
'fillOpacity': 0.5,
'marker': {'enabled': False},
},
'bar': {
'borderWidth': 0
},
'boxplot': {
'fillColor': '#FAFAFA',
'lineWidth': 2,
'medianWidth': 4,
'stemDashStyle': 'line',
'stemWidth': 1,
'whiskerLength': '30%',
'whiskerWidth': 2
},
'column': {
'borderWidth': 0
},
'columnrange': {
'borderWidth': 0
},
'errorbar': {
'color': '#fefefe',
'lineWidth': 2
},
'line': {
'marker': {'enabled': False},
'lineWidth': 2
},
'scatter': {
'marker': {
'enabled': True,
'lineWidth': 0,
'symbol': 'circle',
'radius': 5
},
},
'spline': {
'marker': {'enabled': False},
'lineWidth': 2
},
'waterfall': {
'borderWidth': 0
}
},
'subtitle': {
'align': 'center',
'style': {
'color': '#555555',
'fontWeight': 'bold'
}
},
'title': {
'align': 'center',
'text': None,
'style': {
'color': '#000000',
'fontWeight': 'bold'
}
},
'tooltip': {
'backgroundColor': 'rgba(255,255,224,0.5)',
'borderRadius': 5,
'crosshairs': [{
'width': 3,
'color': '#ffffff',
'dashStyle': 'shortdot'
}, {
'width': 3,
'color': '#ffffff',
'dashStyle': 'shortdot'
}],
'hideDelay': 200,
'enabled': True,
'shadow': False,
},
'xAxis': {
'gridLineColor': '#FFFFFF',
'gridLineWidth': 1,
'lineColor': 'rgb(229, 229, 229)',
'tickColor': 'rgb(229, 229, 229)',
'shadow': False,
},
'yAxis': {
'gridLineColor': '#FFFFFF',
'gridLineWidth': 1,
'lineColor': 'rgb(229, 229, 229)',
'tickColor': 'rgb(229, 229, 229)',
'shadow': False,
}
}
Highcharts.setOptions.new(global_options)
"""
Explanation: I got it!!! It's the ugliest chart in the world!!!
Defining options for all the charts
If we want we can define some of the options to be used during a complete session so we don't have to define each of the options when defining a new chart.
We will use <code style="background-color: yellow;">Highcharts.setOptions</code> to create a set of options that can be overridden on each individual chart if necessary. You can learn more about how to create themes for highcharts here.
In general, I don't like some default values for Highcharts and as Randall Olson has said, sometimes less is more. Also, as I have some degree of color blindness I will use the 'color blind 10' palette of colors from Tableau.
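The override behaviour can be pictured with a small plain-Python sketch. This is not Highcharts' actual merge code, just an illustration of the idea that per-chart options win over the global defaults (the helper name and the sample dicts are mine):

```python
def merge_options(defaults, overrides):
    """Recursively merge two config dicts; overrides win on conflicts."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_options(merged[key], value)
        else:
            merged[key] = value
    return merged

global_options = {'title': {'align': 'center', 'text': None},
                  'credits': {'enabled': False}}
chart_config = {'title': {'text': 'Monthly Average Temperature'}}

final = merge_options(global_options, chart_config)
assert final['title'] == {'align': 'center',
                          'text': 'Monthly Average Temperature'}
assert final['credits'] == {'enabled': False}
```

Options not mentioned in the per-chart config (here 'credits' and the title alignment) keep their global values.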
End of explanation
"""
html = """<div id="hc_ex3" style="width: 700px; height: 300px;"></div>"""
%%brython -h html
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
config = {
'chart':{'renderTo': 'hc_ex3'},
'title': {'text': 'Monthly Average Temperature'},
'subtitle': {'text': 'Source: WorldClimate.com'},
'xAxis': {'categories': ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']},
'yAxis': {'title': {'text': 'Temperature (°C)'}},
'tooltip': {'valueSuffix': '°C'},
'series': [{'name': 'Tokyo',
'data': [7.0, 6.9, 9.5, 14.5, 18.2, 21.5, 25.2, 26.5, 23.3, 18.3, 13.9, 9.6]}]
}
hc(config)
"""
Explanation: And now, let's repeat our first example again after the global configuration:
End of explanation
"""
html = """
<div style="float: left;">
<div id="hc_ex4a" style="width: 400px; height: 300px;"></div>
<div id="hc_ex4b" style="width: 400px; height: 300px;"></div>
</div>
<div style="float: left;">
<div id="hc_ex4c" style="width: 400px; height: 300px;"></div>
<div id="hc_ex4d" style="width: 400px; height: 300px;"></div>
</div>"""
%%brython -h html
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# Bar
config = {
'chart': {'renderTo': 'hc_ex4a', 'type': 'bar'},
'xAxis': {'categories': ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']},
'yAxis': {'title': {'text': 'Temperature (°C)'}},
'tooltip': {'valueSuffix': '°C'},
'series': [{'name': 'Tokyo',
'data': [7.0, 6.9, 9.5, 14.5, 18.2, 21.5, 25.2, 26.5, 23.3, 18.3, 13.9, 9.6]}]
}
hc(config)
# Area
config = {
'chart': {'renderTo': 'hc_ex4b', 'type': 'area'},
'xAxis': {'categories': ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']},
'yAxis': {'title': {'text': 'Temperature (°C)'}},
'tooltip': {'valueSuffix': '°C'},
'series': [{'name': 'Tokyo',
'data': [7.0, 6.9, 9.5, 14.5, 18.2, 21.5, 25.2, 26.5, 23.3, 18.3, 13.9, 9.6]}]
}
hc(config)
# Scatter
config = {
'chart': {'renderTo': 'hc_ex4c', 'type': 'scatter', 'zoomType': 'xy'},
'title': {'text': 'You can pan and zoom'},
'series': [{'name': 'Tokyo',
'data': [[7.0, 6.9], [9.5, 14.5], [18.2, 21.5], [25.2, 26.5], [23.3, 18.3], [13.9, 9.6]]}]
}
hc(config)
# Pie
config = {
'chart': {'renderTo': 'hc_ex4d', 'type': 'pie', 'plotBackgroundColor': 'white'},
'series': [{'name': 'Python scientific libs',
'data': [['scipy', 6.9], ['IPython', 14.5],
['Matplotlib', 21.5], ['Numpy', 26.5], ['Pandas', 18.3]]}]
}
hc(config)
"""
Explanation: Much better than the ugly chart.
And other simple charts will look like the following:
End of explanation
"""
html = """<div id="hc_ex5" style="width: 700px; height: 300px;"></div>"""
%%brython -h html
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
config = {
'chart': {'renderTo': 'hc_ex5'},
'yAxis': {'title': {'text': 'Temperature (°C)'}},
'tooltip': {'valueSuffix': '°C'},
'series': [{'name': 'Tokyo',
'data': [7.0, 6.9, 9.5, 14.5, 18.2, 21.5, 25.2, 26.5, 23.3, 18.3, 13.9, 9.6]},
{'name': 'Madrid',
'data': [3.0, 5.4, 6.5, 12.7, 16.8, 21.4, 26.5, 26.2, 24.3, 17.3, 11.8, 6.7]}]
}
hc(config)
"""
Explanation: <span style="background-color: #90EE90">[NEW CODE]</span> In the previous figures we have included some new options in the chart dict:
<code style="background-color: yellow;">'type': 'type_of_chart'</code>. It indicates which chart type will be used to render the data.
<code style="background-color: yellow;">'zoomType': 'xy'</code>. It allows us to zoom after the selection of an area with the mouse.
Also, in the Pie chart, we have used the option <code style="background-color: yellow;">'plotBackgroundColor': 'white'</code> so the global option defined before is not used.
Using data
Before, we have seen three different ways to insert data into a chart. Let's see it again to understand better how it is used:
An array of numerical values. In this case, the numerical values will be interpreted as y values, and x values will be automatically calculated, either starting at 0 and incrementing by 1, or from pointStart and pointInterval given in the plotOptions dict. If the axis has categories, these will be used. This option is not available for range series.
We have seen examples before but let's create a more complete example using this way:
End of explanation
"""
html = """<div id="hc_ex6" style="width: 700px; height: 300px;"></div>"""
%%brython -h html
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
config = {
'chart': {'renderTo': 'hc_ex6', 'type': 'line'},
'yAxis': {'title': {'text': 'Temperature (°C)'}},
'tooltip': {'valueSuffix': '°C'},
'series': [{'name': 'Neverland',
'data': [[1, 6.9], [3, 14.5], [7, 21.5], [8, 26.5], [9, 18.3], [10, 9.6]]}]
}
hc(config)
"""
Explanation: An array of arrays with two values. In this case, the first value is the x value and the second is the y value. If the first value is a string, it is applied as the name of the point, the x value is incremented following the above rules, and the string will be shown in the point's tooltip. For range series, the arrays will be interpreted as [x, low, high]. In these cases, the x value can be skipped altogether to make use of pointStart and pointRange.
This way has been used in the scatter chart previously. In the example above we have seen that the x values start at 0. If we want a different behaviour we could define the x values as follows:
End of explanation
"""
html = """<div id="hc_ex7" style="width: 700px; height: 300px;"></div>"""
%%brython -h html
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
config = {
'chart': {'renderTo': 'hc_ex7', 'type': 'line'},
'yAxis': {'title': {'text': 'Temperature (°C)'}},
'tooltip': {'valueSuffix': '°C'},
'series': [{'name': 'Neverland',
'data': [['Jan', 6.9], ['Mar', 14.5], ['Jul', 21.5], ['Aug', 26.5], ['Sep', 18.3], ['Oct', 9.6]]}]
}
hc(config)
"""
Explanation: If an x value is not a valid number it is used as a label in the tooltip, and the actual x values start at 0 and increment by 1. For example:
End of explanation
"""
from brythonmagic import load_js_lib
load_js_lib("https://cdnjs.cloudflare.com/ajax/libs/highcharts/5.0.7/highcharts-more.js")
html = """<div id="hc_ex8" style="width: 700px; height: 300px;"></div>"""
%%brython -h html
import random
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data = [{'name': 'Data {}'.format(i+1),
'color': 'rgb(100,50,{0})'.format(random.randrange(0,255)),
'y': random.randrange(0,25),
'x': i+1} for i in range(10)]
config = {
'chart': {'renderTo': 'hc_ex8'},
'yAxis': {'title': {'text': 'Temperature (°C)'}},
'series': [{'data': data, 'type': 'line', 'color': 'black'},
{'data': data, 'type': 'bubble'}]
}
hc(config)
"""
Explanation: An array of objects with named values. In this case the objects are point configuration objects. Range series values are given by low and high.
This is the most complete case as you can define more precisely how the data is displayed.
[HINT] In the next chart we will use the 'bubble' chart type. We must load a new javascript file, because highcharts.js doesn't provide the complete functionality of Highcharts: some chart types (like 'arearange', 'bubble',...) are shipped as extras. The new javascript file is highcharts-more.js.
End of explanation
"""
html = """<div id="hc_ex9" style="width: 900px; height: 300px;"></div>"""
%%brython -h html
import random
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data = [[i+1, random.randrange(0,35)] for i in range(100)]
config = {
'chart': {'renderTo': 'hc_ex9', 'type': 'line', 'zoomType': 'x'},
'yAxis': {'title': {'text': 'Wind speed (m/s)'}},
'series': [{'data': data}],
'plotOptions': {
'line': {'dataLabels': {'enabled': True}, 'enableMouseTracking': False}
},
'title': {'text': 'Click and drag to zoom'}
}
hc(config)
"""
Explanation: On each data value we have used a name (shown in the tooltip), a color (used in scatter, bar, column, bubble,..., charts but not in line or area charts, for example) and the x and y values.
Types of charts
Line charts
We already have seen several line charts. Let's see something more advanced. In the following example we will include the data value for each record and we will be able to choose an area of the chart to zoom on the chosen subset.
End of explanation
"""
html = """<div id="hc_ex10" style="width: 900px; height: 300px;"></div>"""
%%brython -h html
import random
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data = [[i+1, random.randrange(10,105)] for i in range(100)]
config = {
'chart': {'renderTo': 'hc_ex10'},
'xAxis': {'min': -20},
'yAxis': {
'title': {'text': 'Mean Daily NO2 (ug/m3)'},
'max': 110,
'minorGridLineWidth': 0,
'gridLineWidth': 0,
'alternateGridColor': None,
'plotBands': [{
'from': 0,
'to': 15,
'color': 'rgba(100,100,255,0.5)',
'label': {
'text': 'Clean air',
'style': {
'color': 'black'
}
}
}, {
'from': 15,
'to': 40,
'color': 'rgba(0,255,0,0.5)',
'label': {
'text': 'Below EU limit',
'style': {
'color': 'black'
}
}
}, {
'from': 40,
'to': 120,
'color': 'rgba(255,0,0,0.5)',
'label': {
'text': 'Above EU limit',
'style': {
'color': 'black'
}
}
}]
},
'series': [{'data': data, 'lineWidth': 2, 'color': 'black'}]
}
hc(config)
"""
Explanation: <span style="background-color: #90EE90">[NEW CODE]</span> In the previous figure we have included some new options in the chart dict:
<code style="background-color: yellow;">'zoomType': 'x'</code>. It allows us to zoom after the selection of an area with the mouse. In this case, the zoom is applied over the x axis only.
Also, in the <code style="background-color: yellow;">plotOptions</code> dict, we have included some new options.
<code style="background-color: yellow;">'line': {'dataLabels': {'enabled': True}, 'enableMouseTracking': False}</code>. The <code style="background-color: yellow;">'dataLabels'</code> option allows us to show the value of each record while the <code style="background-color: yellow;">'enableMouseTracking'</code> option allows us to disable the default mouse interaction with the plot (tooltip,...).
In the following plot we are defining some areas in the background to help highlight some aspects of the dataset:
End of explanation
"""
html = """<div id="hc_ex11" style="width: 900px; height: 300px;"></div>"""
%%brython -h html
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data = [[3**i, i] for i in range(1, 10)]
config = {
'chart': {'renderTo': 'hc_ex11', 'type': 'spline'},
'yAxis': {'type': 'logarithmic', 'opposite': True, 'offset': 30},
'legend': {'align': 'left'},
'series': [{'data': data, 'lineWidth': 4, 'color': 'black'}]
}
hc(config)
"""
Explanation: <span style="background-color: #90EE90">[NEW CODE]</span> In the previous figure we have included some new options in the options dict:
<code style="background-color: yellow;">'xAxis': {'min': -20}</code>. This indicates the minimum value for the x axis.
With the following code we have suppressed the grid lines for the y axis and added some band colors. I think the code is self-explanatory.
<b><pre><div style="background-color: yellow;">'yAxis': {
'title': {'text': 'Mean Daily NO2 (ug/m3)'},
'max': 110,
'minorGridLineWidth': 0,
'gridLineWidth': 0,
'alternateGridColor': None,
'plotBands': [{
'from': 0,
'to': 15,
'color': 'rgba(100,100,255,0.5)',
'label': {
'text': 'Clean air',
'style': {
'color': 'black'
}
}
}, {
'from': 15,
'to': 40,
'color': 'rgba(0,255,0,0.5)',
'label': {
'text': 'Below EU limit',
'style': {
'color': 'black'
}
}
}, {
'from': 40,
'to': 120,
'color': 'rgba(255,0,0,0.5)',
'label': {
'text': 'Above EU limit',
'style': {
'color': 'black'
}
}
}]
}</div></pre></b>
In the next plot we are going to play a little with the axes: where is the y axis located, and where is the legend?
End of explanation
"""
html = """<div id="hc_ex12container" style="height: 350px;">
<div id="hc_ex12" style="width: 900px; height: 300px;"></div>
</div>"""
%%brython -h html
from browser.timer import set_interval, clear_interval
from browser import window, document, html
from random import randrange
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data = [[i, randrange(0,10)] for i in range(0, 20)]
data_tmp = data[:]
config = {
'chart': {
'renderTo': 'hc_ex12',
'type': 'spline'
},
'series': [{'data': data, 'lineWidth': 2, 'color': 'black', 'animation': False,
'marker': {'enabled': True}}]
}
hc(config)
### NEW CODE ###
idtimer = None
# A button to animate the plot with new data
document['hc_ex12container'] <= html.BUTTON('Animate', Id = 'anim')
def add_point():
global data_tmp
x = data_tmp[-1][0] + 1
y = randrange(0,10)
data_tmp.append([x, y])
config['series'][0]['data'] = data_tmp[-20:]
hc(config)
def animate(ev):
global idtimer, config, data_tmp
idtimer = set_interval(add_point, 1000)
document['anim'].bind('click', animate)
# A button to stop the plot with new data
document['hc_ex12container'] <= html.BUTTON('Stop', Id = 'stop')
def stop(ev):
global idtimer
clear_interval(idtimer)
document['stop'].bind('click', stop)
# A button to reset the plot with the original values
document['hc_ex12container'] <= html.BUTTON('Reset', Id = 'reset')
def reset(ev):
global idtimer, config, data, data_tmp
if idtimer:
clear_interval(idtimer)
data_tmp = data[:]
config['series'][0]['data'] = data
hc(config)
document['reset'].bind('click', reset)
"""
Explanation: <span style="background-color: #90EE90">[NEW CODE]</span> In the previous figure we have included some new options to modify the y axis:
<code style="background-color: yellow;">'yAxis': {'type': 'logarithmic', 'opposite': True, 'offset': 30}</code>. The axis scale is logarithmic, and the axis is placed on the opposite side with an offset of 30px from the plot area.
We can add some interactivity to the plot using some basic controls. With the buttons you can update the plot (every second a new point is created), stop the updates or reset to the original state:
End of explanation
"""
html = """<div id="hc_ex13" style="width: 900px; height: 300px;"></div>"""
%%brython -h html
from random import randrange
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data1 = [[i, randrange(0,10)] for i in range(0, 20)]
data2 = [[i, randrange(0,10)] for i in range(0, 20)]
config = {
'chart': {
'renderTo': 'hc_ex13',
'type': 'area'
},
'series': [{'data': data1, 'dashStyle': 'ShortDot', 'lineWidth': 3},
{'data': data2}]
}
hc(config)
"""
Explanation: <span style="background-color: #90EE90">[NEW CODE]</span> In the previous figure we have included some Brython specific code to animate the figure:
<code style="background-color: yellow;">document['hc_ex12container'] &lt;= html.BUTTON('xxx', Id = 'xxx')</code>. With this code we are appending some buttons to the html div with id 'hc_ex12container', i.e., the html div element that contains the chart and the buttons. See the Brython docs for more info.
<code style="background-color: yellow;">document['xxx'].bind('click', xxx)</code>. With this code we are attaching some functionality (events) to some DOM elements. See the Brython docs for more info.
Finally, we have defined some functions to manage the events using the browser.timer module. See the Brython docs for more info.
Area charts
As some area charts are quite similar to line plots I will write some examples and I will only explain the code that is relevant:
A simple area chart:
End of explanation
"""
html = """<div id="hc_ex14" style="width: 900px; height: 300px;"></div>"""
%%brython -h html
from random import randrange
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data1 = [[i, randrange(0,10)] for i in range(0, 20)]
data2 = [[i, randrange(0,10)] for i in range(0, 20)]
config = {
'chart': {
'renderTo': 'hc_ex14',
'type': 'area'
},
'series': [{'data': data1, 'lineWidth': 3},
{'data': data2}],
'plotOptions': {'area': {'stacking': 'normal'}},
'tooltip': {'shared': True},
}
hc(config)
"""
Explanation: <span style="background-color: #90EE90">[NEW CODE]</span>
<code style="background-color: yellow;">'dashStyle': 'ShortDot'</code>. This option in the first data series modifies the line defining the area. In this case we have used 'ShortDot' but there are other options.
A simple stacked area chart:
End of explanation
"""
html = """<div id="hc_ex15" style="width: 900px; height: 300px;"></div>"""
%%brython -h html
from random import randrange
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data1 = [[i, randrange(5,10), randrange(10,15)] for i in range(0, 20)]
data2 = [[i, (lst[1] + lst[2]) / 2.] for i, lst in enumerate(data1)]
config = {
'chart': {
'renderTo': 'hc_ex15'
},
'series': [{'data': data2, 'type': 'line', 'name': 'mean', 'lineWidth': 3, 'color': 'black'},
{'data': data1, 'lineWidth': 1, 'type': 'arearange', 'name': 'extremes'}],
'tooltip': {'shared': True}
}
hc(config)
"""
Explanation: <span style="background-color: #90EE90">[NEW CODE]</span>
<code style="background-color: yellow;">'plotOptions': {'area': {'stacking': 'normal'}}</code>. This indicates that the areas will be stacked. We can choose a 'normal' or a 'percent' stacking. Other chart types, like line, bar and column, can also be stacked.
<code style="background-color: yellow;">'tooltip': {'shared': True}</code>. This option indicates that the tooltip is shared between the datasets so the tooltip will show the information from all the available datasets.
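As a rough, browser-free illustration of what 'normal' and 'percent' stacking compute (Highcharts does this internally; the numbers here are made up):

```python
# Two series with values at three x positions.
series_a = [2, 4, 6]
series_b = [8, 6, 6]

# 'normal' stacking: values are summed, so the top of each stack is a + b.
stack_tops = [a + b for a, b in zip(series_a, series_b)]

# 'percent' stacking: each value becomes its share of the column total.
percent_a = [100 * a / (a + b) for a, b in zip(series_a, series_b)]

assert stack_tops == [10, 10, 12]
assert percent_a == [20.0, 40.0, 50.0]
```

With 'percent' stacking the y axis always runs from 0 to 100, which is why it is handy for comparing proportions rather than absolute values.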
A simple arearange chart combined with a line plot:
End of explanation
"""
html = """<div id="hc_ex16" style="width: 900px; height: 300px;"></div>"""
%%brython -h html
from random import randrange
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data1 = [[i, -randrange(1,15)] for i in range(0, 20)]
data2 = [[i, randrange(1,15)] for i in range(0, 20)]
config = {
'chart': {
'renderTo': 'hc_ex16',
'type': 'bar'
},
'series': [{'data': data1, 'name': 'negative'},
{'data': data2, 'name': 'positive'}],
'plotOptions': {'bar': {'stacking': 'normal'}},
'tooltip': {'shared': True},
'xAxis': [{'opposite': False}, {'opposite': True, 'linkedTo': 0}]
}
hc(config)
"""
Explanation: Column and bar charts
A bar plot with positive and negative values:
End of explanation
"""
html = """<div id="hc_ex17container">
<div id="hc_ex17" style="position: relative; float: left; width: 700px; height: 300px; margin: 20px;"></div>
<div id="tablediv"></div>
</div>"""
%%brython -h html
from browser import window, document, html
from random import randrange
Highcharts = window.Highcharts
# first we create a table with two series of data, X and Y:
tab = html.TABLE()
tab.style = {'textAlign': 'center', 'width': '50px'}
tab <= html.TR(html.TD('X') + html.TD('Y'))
for i in range(5):
tab <= html.TR(
html.TD(
html.INPUT(
Id = 'x' + str(i), value = randrange(1,5), style = {'width': '50px'}
)
) +
html.TD(
html.INPUT(
Id = 'y' + str(i), value = randrange(1,5), style = {'width': '50px'}
)
)
)
document['tablediv'] <= tab
# Function to retrieve the data from the table
def get_data():
data1 = []
data2 = []
for i in range(5):
print('x' + str(i))
data1.append(float(document['x' + str(i)].value))
data2.append(float(document['y' + str(i)].value))
return [data1, data2]
# Function to update the chart
def update(ev):
global config, hc
datasets = get_data()
config['series'][0]['data'] = datasets[0]
config['series'][1]['data'] = datasets[1]
hc(config)
print(datasets)
# Button and event
document['hc_ex17container'] <= html.BUTTON('Update', Id = 'btn')
document['btn'].bind('click', update)
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
datasets = get_data()
data1 = datasets[0]
data2 = datasets[1]
config = {
'chart': {
'renderTo': 'hc_ex17',
'type': 'column'
},
'series': [{'data': data1, 'name': 'X'},
{'data': data2, 'name': 'Y'}],
'title': {'text': 'Modify the values in the table and update'}
}
hc(config)
"""
Explanation: <span style="background-color: #90EE90">[NEW CODE]</span>
<code style="background-color: yellow;">'xAxis': [{'opposite': False}, {'opposite': True, 'linkedTo': 0}]</code>. We are defining two x axes, the second one on the opposite side of the plot area with its values linked to the first axis, i.e., both axes show the same values, one on each side of the chart.
A bar plot with interactive values:
End of explanation
"""
html = """<div id="hc_ex18" style="width: 600px; height: 300px;"></div>"""
%%brython -h html
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# Pie
config = {
'chart': {'renderTo': 'hc_ex18', 'type': 'pie'},
'series': [{'name': 'Python scientific libs', 'innerSize': '60%',
'data': [['scipy', 6.9], ['IPython', 14.5],
['Matplotlib', 21.5], ['Numpy', 26.5], ['Pandas', 18.3]]}]
}
hc(config)
"""
Explanation: <span style="background-color: #90EE90">[NEW CODE]</span>
In this case, there is no relevant new Highcharts code, and the Brython-specific code is outside the scope of this tutorial, but it is quite similar to that seen in a previous example.
Pie charts
In the examples above we have seen a simple pie plot. A donut plot would be quite similar:
End of explanation
"""
html = """<div id="hc_ex19" style="width: 600px; height: 300px;"></div>"""
%%brython -h html
from random import randrange
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data1 = [[i + randrange(-2, 2), randrange(i-3,i+3)] for i in range(0, 20)]
data2 = [[i + randrange(-2, 2) + 2, randrange(i-3,i+3)] for i in range(0, 20)]
# Scatter
config = {
'chart': {'renderTo': 'hc_ex19', 'type': 'scatter'},
'series': [{'name': 'Station 1 vs model', 'data': data1},
{'name': 'Station 2 vs model', 'data': data2}],
'xAxis': {'title': {'text': 'Station'}},
'yAxis': {'title': {'text': 'Model'}}
}
hc(config)
"""
Explanation: <span style="background-color: #90EE90">[NEW CODE]</span>
<code style="background-color: yellow;">'innerSize': '60%'</code>. This is the only difference with the previous pie plot example. It indicates the inner radius of the donut, i.e., the radius from the centre at which the pie slices start.
Scatter and bubble charts
In the examples above we have seen a simple scatter plot. A scatter plot with two datasets is similar:
End of explanation
"""
html = """<div id="hc_ex20" style="width: 600px; height: 300px;"></div>"""
%%brython -h html
from random import randrange
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data = []
for i in range(20):
x = randrange(-10, 10)
y = randrange(-10, 10)
z = randrange(1, 20)
data.append({'x': x, 'y': y, 'z': z,
'color': 'rgb(40,40,{0})'.format(int(z * 255 / 20)),
'marker': {'lineWidth': 1, 'lineColor': 'black'}})
# Scatter
config = {
'chart': {'renderTo': 'hc_ex20', 'type': 'bubble'},
'series': [{'name': 'bubbles', 'data': data}]
}
hc(config)
"""
Explanation: A bubble plot where each record's color depends on its size would look as follows:
End of explanation
"""
from brythonmagic import load_js_lib
load_js_lib("https://cdnjs.cloudflare.com/ajax/libs/highcharts/5.0.7/highcharts-3d.js")
"""
Explanation: 3d Charts
For these examples we should first load the Highcharts 3D library:
End of explanation
"""
html = """
<div id="hc_ex21" style="width: 900px; height: 400px;"></div>
<div id="sliders">
<table>
<tr>
<td>Alpha Angle</td>
<td>
<input id="R0" type="range" min="0" max="45" value="15"/> <span id="R0-value" class="value"></span>
</td>
</tr>
<tr>
<td>Beta Angle</td>
<td>
<input id="R1" type="range" min="0" max="45" value="15"/> <span id="R1-value" class="value"></span>
</td>
</tr>
</table>
</div>"""
%%brython -h html
from browser import window, document
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
config = {
'chart': {
'renderTo': 'hc_ex21',
'plotBackgroundColor': 'white',
'type': 'column',
'margin': 100,
'options3d': {
'enabled': True,
'alpha': 15,
'beta': 15,
'depth': 50,
'viewDistance': 50
}
},
'plotOptions': {'column': {'depth': 25}},
'series': [{'data': [29.9, 71.5, 106.4, 129.2, 144.0, 176.0,
135.6, 148.5, 216.4, 194.1, 95.6, 54.4]}],
'xAxis': {'gridLineColor': '#C0C0C0'},
'yAxis': {'gridLineColor': '#C0C0C0'}
}
columns = hc(config)
def show_values():
document['R0-value'].html = columns.options.chart.options3d.alpha
document['R1-value'].html = columns.options.chart.options3d.beta
show_values()
# activate the sliders
def change_alpha(ev):
columns.options.chart.options3d.alpha = ev.target.value
show_values()
columns.redraw(False)
def change_beta(ev):
columns.options.chart.options3d.beta = ev.target.value
show_values()
columns.redraw(False)
document['R0'].bind('change', change_alpha)
document['R1'].bind('change', change_beta)
"""
Explanation: With the 3d 'module' loaded we can see a new example where some interactivity is added:
End of explanation
"""
from brythonmagic import load_js_lib
load_js_lib("https://cdnjs.cloudflare.com/ajax/libs/highcharts/5.0.7/modules/heatmap.js")
"""
Explanation: <span style="background-color: #90EE90">[NEW CODE]</span>
<code style="background-color: yellow;">'options3d': {'enabled': True, 'alpha': 15, 'beta': 15, 'depth': 50, 'viewDistance': 50}</code>. In this case, we enable the 3D options and set the initial viewing angles of the plot.
You can also create 3D pie, donut, scatter and other chart types.
Heat map
In this case, as we did before, we have to load an additional module called heatmap:
End of explanation
"""
html = """<div id="hc_ex22" style="width: 900px; height: 400px;"></div>"""
%%brython -h html
from random import randrange
from browser import window
from javascript import JSConstructor  # needed below for the colorAxis maxColor
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data = []
for y in range(7):
for x in range(7):
data.append([y, x, randrange(0,150)])
config = {
'chart': {
'renderTo': 'hc_ex22',
'type': 'heatmap',
'marginTop': 40,
'marginBottom': 40
},
'title': {'text': 'Commits made last week :-P'},
'xAxis': {'categories': ['Numpy', 'Scipy', 'Matplotlib', 'IPython',
'Pandas', 'Brython', 'Brythonmagic']},
'yAxis': {'categories': ['Monday', 'Tuesday', 'Wednesday',
'Thursday', 'Friday', 'Saturday', 'Sunday'],
'title': {'text': None}},
'colorAxis': {'min': 0,
'minColor': '#FFFFFF',
'maxColor': JSConstructor(Highcharts.getOptions)().colors[0]},
'legend': {
'align': 'right',
'layout': 'vertical',
'margin': 0,
'verticalAlign': 'top',
'y': 25,
'symbolHeight': 300
},
'series': [{
'borderWidth': 1,
'data': data,
'dataLabels': {
'enabled': True,
'color': 'black',
'style': {
'textShadow': 'none',
'HcTextStroke': None
}
}
}]
}
hc(config)
"""
Explanation: A simple heatmap would be as follows:
End of explanation
"""
%%brython -s wrapper01
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
class HC:
def __init__(self, container):
self.options = {}
self.options['chart'] = {}
self.options['chart']['renderTo'] = container
self.options['title'] = {}
self.options['title']['text'] = ""
self.options['legend'] = {}
self.options['legend']['enabled'] = False
self.options['subtitle'] = {}
self.options['subtitle']['text'] = ""
self.options['series'] = []
def show(self):
hc(self.options)
html = """<div id="hc_ex23" style="width: 900px; height: 400px;"></div>"""
%%brython -h html -S wrapper01
plt = HC('hc_ex23')
plt.show()
print(plt.options)
"""
Explanation: <span style="background-color: #90EE90">[NEW CODE]</span>
<code style="background-color: yellow;">'colorAxis': {'min': 0, 'minColor': '#FFFFFF', 'maxColor': JSConstructor(Highcharts.getOptions)().colors[0]},</code>. The colorAxis adds a colorbar to the plot.
Other chart types
We have already seen line, bubble, column, bar, donut, scatter and other charts. Highcharts also supports pyramid, funnel, waterfall and further chart types.
Let's use Highcharts in a more pythonic way
Everything we have seen so far is better than writing plain JavaScript, but this use of Highcharts is still not very Pythonic.
Creating a simple wrapper for Highcharts:
Let's start with something very simple
End of explanation
"""
%%brython -s wrapper02
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
class HC:
def __init__(self, container):
self.options = {}
self.options['chart'] = {}
self.options['chart']['renderTo'] = container
self.options['title'] = {}
self.options['title']['text'] = ""
self.options['legend'] = {}
self.options['legend']['enabled'] = False
self.options['subtitle'] = {}
self.options['subtitle']['text'] = ""
self.options['series'] = []
def show(self):
hc(self.options)
def title(self, text):
self.options['title']['text'] = text
#self.show()
def subtitle(self, text):
self.options['subtitle']['text'] = text
#self.show()
html = """<div id="hc_ex24" style="width: 900px; height: 400px;"></div>"""
%%brython -h html -S wrapper02
plt = HC('hc_ex24')
plt.title('Dummy title')
plt.subtitle('Dummy title')
plt.show()
"""
Explanation: OK, not very exciting. We initialise the class with an empty set of options; the show method then passes the options dictionary to Highcharts, which plots an (empty) chart. As stated before, not very fascinating, so let's add options to include a title and a subtitle.
[HINT] The -s option in brythonmagic names a Brython cell ('wrapper01' is the name given to the code in the first cell of this section). The -S option then makes that named cell's code available in a new Brython cell, much like an import ('wrapper01' in this case).
End of explanation
"""
%%brython -s wrapper03
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
class HC:
def __init__(self, container):
self.options = {}
self.options['chart'] = {}
self.options['chart']['renderTo'] = container
self.options['title'] = {}
self.options['title']['text'] = ""
self.options['legend'] = {}
self.options['legend']['enabled'] = False
self.options['subtitle'] = {}
self.options['subtitle']['text'] = ""
self.options['series'] = []
def show(self):
hc(self.options)
def title(self, text):
self.options['title']['text'] = text
#self.draw()
def subtitle(self, text):
self.options['subtitle']['text'] = text
#self.draw()
def plot(self, x, y = None, label = None, color = None, linewidth = None):
if y:
data = [[i, j] for i, j in zip(x, y)]
else:
data = x
serie = {'data': data, 'type': 'line'}
if linewidth:
serie['lineWidth'] = linewidth
if label:
serie['name'] = label
if color:
serie['color'] = color
self.options['series'].append(serie)
#self.draw()
html = """<div id="hc_ex25" style="width: 900px; height: 400px;"></div>"""
%%brython -h html -S wrapper03
plt = HC('hc_ex25')
plt.plot([1,2,4,5], [3,6,4,7], label = 'lineplot1', linewidth = 5, color = 'red')
plt.plot([1,2,4,5], [8,5,9,2], label = 'lineplot2', linewidth = 2, color = 'blue')
plt.title('Some line plots')
plt.show()
"""
Explanation: Still not very exciting. If you look at the two new methods, title and subtitle, each has a commented-out line. We could uncomment those lines to avoid calling the show method explicitly, but then the chart would be redrawn every time a method is used. I didn't notice any overhead doing this, but let's avoid it for the moment.
OK, let's do something more interesting and plot some data in a line chart.
End of explanation
"""
%%brython -s wrapper04
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
class HC:
def __init__(self, container):
self.options = {}
self.options['chart'] = {}
self.options['chart']['renderTo'] = container
self.options['title'] = {}
self.options['title']['text'] = ""
self.options['legend'] = {}
self.options['legend']['enabled'] = False
self.options['subtitle'] = {}
self.options['subtitle']['text'] = ""
self.options['series'] = []
def show(self):
hc(self.options)
def title(self, text):
self.options['title']['text'] = text
#self.draw()
def subtitle(self, text):
self.options['subtitle']['text'] = text
#self.draw()
def plot(self, x, y = None, label = None, color = None, linewidth = None):
if y:
data = [[i, j] for i, j in zip(x, y)]
else:
data = x
serie = {'data': data, 'type': 'line'}
if linewidth:
serie['lineWidth'] = linewidth
if label:
serie['name'] = label
if color:
serie['color'] = color
self.options['series'].append(serie)
#self.draw()
def legend(self, loc = 'right'):
self.options['legend']['enabled'] = True
if loc:
self.options['legend']['align'] = loc
#self.draw()
html = """<div id="hc_ex26" style="width: 900px; height: 400px;"></div>"""
%%brython -h html -S wrapper04
plt = HC('hc_ex26')
plt.title('Line plots')
plt.plot([1,2,4,5], [3,6,4,7], label = 'lineplot1', linewidth = 5, color = 'red')
plt.plot([1,2,4,5], [8,5,9,2], label = 'lineplot2', linewidth = 2, color = 'blue')
plt.legend(loc = 'left')
plt.show()
"""
Explanation: Wow! With fewer than 40 lines of Brython code we have a wrapper for creating simple interactive Highcharts plots in a Pythonic way. We added a label keyword to the plot method; as in Matplotlib, the label is what the legend displays when we add one. Let's add a legend method to give the label keyword some utility.
End of explanation
"""
%%brython -s wrapper05
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
class HC:
def __init__(self, container):
self.options = {}
self.options['chart'] = {}
self.options['chart']['renderTo'] = container
self.options['title'] = {}
self.options['title']['text'] = ""
self.options['legend'] = {}
self.options['legend']['enabled'] = False
self.options['subtitle'] = {}
self.options['subtitle']['text'] = ""
self.options['series'] = []
def show(self):
hc(self.options)
def title(self, text):
self.options['title']['text'] = text
#self.draw()
def subtitle(self, text):
self.options['subtitle']['text'] = text
#self.draw()
def plot(self, x, y = None, label = None, color = None, linewidth = None):
if y:
data = [[i, j] for i, j in zip(x, y)]
else:
data = x
serie = {'data': data, 'type': 'line'}
if linewidth:
serie['lineWidth'] = linewidth
if label:
serie['name'] = label
if color:
serie['color'] = color
self.options['series'].append(serie)
#self.draw()
def legend(self, loc = 'right'):
self.options['legend']['enabled'] = True
if loc:
self.options['legend']['align'] = loc
#self.draw()
def scatter(self, x, y, label = None, color = None):
data = [[i, j] for i, j in zip(x, y)]
serie = {'data': data, 'type': 'scatter'}
if label:
serie['name'] = label
if color:
serie['color'] = color
self.options['series'].append(serie)
#self.draw()
html = """<div id="hc_ex27" style="width: 900px; height: 400px;"></div>"""
%%brython -h html -S wrapper05
plt = HC('hc_ex27')
plt.title('Line plots')
plt.plot([1,2,4,5], [3,6,4,7], label = 'lineplot1', linewidth = 5, color = 'red')
plt.plot([1,2,4,5], [8,5,9,2], label = 'lineplot2', linewidth = 2, color = 'blue')
plt.scatter([1,2,4,5], [2,4,6,8], label = 'scatter1', color = 'green')
plt.legend(loc = 'left')
plt.show()
"""
Explanation: The behaviour is not identical to that of matplotlib's pyplot module, but we get similar basic functionality. We can add a scatter method to combine line plots with scatter plots.
End of explanation
"""
|
jakejhansen/minesweeper_solver | Minesweeper_results.ipynb | mit | # Initialization goes here
import numpy as np
import matplotlib.pyplot as plt
import pickle
import os
import pandas as pd
import math
from matplotlib import rc
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
def smooth(y,factor):
if type(y)!=list:
y = list(y)
return pd.Series(y).rolling(window=factor).mean()#[factor:]
"""
Explanation: Results
Evolutionary Strategies and Reinforcement Learning applied to Minesweeper
Authors
Jacob J. Hansen (s134097), Jakob D. Havtorn (s132315),
Mathias G. Johnsen (s123249) and Andreas T. Kristensen (s144026)
Create Anaconda virtual environment + required packages
In order to run the contents of this notebook, it is recommended to download Anaconda for your platform here
https://www.anaconda.com/download/
and run the following in your terminal to set up everything you need. See the file requirements.txt for the full package list with version numbering.
conda create --name deep python=3.5.4
source activate deep
conda install jupyter
python -m ipykernel install --user --name deep --display-name "Python (deep)"
conda install -c conda-forge keras
conda install scikit-learn
conda install matplotlib=2.0.2
conda install pandas
pip install gym
pip install mss
Start Jupyter with the command jupyter notebook.
Manually change the kernel to Python (deep) within the notebook if it is not already selected.
Initialization
End of explanation
"""
# Plot rewards and winrate
PG_adjust_axis = 1000000
PG_smoothing = 100
PG_color = [0.1, 0.2, 0.5]
PG_steps = []
PG_reward = []
PG_loss = []
PG_winrate = []
stats = pickle.load(open("Results/Policy-Gradients/stats.p", "rb"))
for stat in stats:
PG_steps.append(stat[1])
PG_reward.append(stat[2]/1.5)
PG_loss.append(stat[3])
PG_winrate.append(stat[4])
PG_steps = pd.Series(PG_steps)/PG_adjust_axis
PG_reward_smooth = smooth(PG_reward,PG_smoothing)
PG_loss_smooth = smooth(PG_loss,PG_smoothing)
PG_winrate_smooth = smooth(PG_winrate,PG_smoothing)
fig = plt.figure(figsize=(6, 6))
ax1 = plt.subplot(311)
plt.plot(PG_steps, PG_reward_smooth, color=PG_color+[1.0])
plt.plot(PG_steps, PG_reward, color=PG_color+[0.1], label='_nolegend_')
plt.setp(ax1.get_xticklabels(), visible=False)
plt.ylabel('Reward')
plt.grid()
ax2 = plt.subplot(312, sharex=ax1)
plt.plot(PG_steps, PG_loss_smooth, color=PG_color+[1.0])
plt.plot(PG_steps, PG_loss, color=PG_color+[0.1], label='_nolegend_')
plt.setp(ax1.get_xticklabels(), visible=False)
plt.ylabel('Loss')
plt.grid()
ax3 = plt.subplot(313, sharex=ax2)
plt.plot(PG_steps, PG_winrate_smooth, color=PG_color+[1.0])
plt.plot(PG_steps, PG_winrate, color=PG_color+[0.1], label='_nolegend_')
plt.xlabel('Steps (millions)')
plt.ylabel('Win Rate')
plt.grid()
plt.tight_layout()
plt.savefig(os.path.join('Results/Policy-Gradients/policy-res.pdf'))
plt.show()
"""
Explanation: Policy Gradient
End of explanation
"""
# Plot rewards and winrate
QL_batch_size = 32
QL_adjust_axis = 1000000
QL_smoothing = 10
QL_color = [0.5,0.1,0.2]
QL_reward_pd = pd.read_csv("Results/Q-Learning/evaluation_total_reward_net2_d_0.csv")
QL_reward_num_pd = pd.read_csv("Results/Q-Learning/evaluation_num_rewards_net2_d_0.csv")
QL_reward_pd['Value'] = QL_reward_pd['Value']/QL_reward_num_pd['Value'] # Scale with the number of rewards
QL_reward_smooth_q = smooth(QL_reward_pd['Value'],QL_smoothing)
QL_steps_reward_q = QL_reward_pd['Step']*QL_batch_size/QL_adjust_axis
QL_reward_q = QL_reward_pd['Value']
QL_loss_pd = pd.read_csv("Results/Q-Learning/train_loss_net2_d_0.csv")
QL_steps_loss = QL_loss_pd['Step']*QL_batch_size/QL_adjust_axis
QL_loss_smooth_q = smooth(QL_loss_pd['Value'],QL_smoothing)
QL_loss_q = QL_loss_pd['Value']
QL_loss_smooth_q = QL_loss_smooth_q[1:840]
QL_loss_q = QL_loss_q[1:840]
QL_steps_loss_q = QL_steps_loss[1:840]
QL_winrate_pd = pd.read_csv("Results/Q-Learning/evaluation_win_rate_net2_d_0.csv")
QL_steps_q = QL_winrate_pd['Step']*QL_batch_size/QL_adjust_axis
QL_winrate_q = QL_winrate_pd['Value']/100
fig = plt.figure(figsize=(6, 6))
ax1 = plt.subplot(311)
plt.plot(QL_steps_reward_q, QL_reward_q, color=QL_color+[1.0])
plt.setp(ax1.get_xticklabels(), visible=False)
plt.ylabel('Reward')
plt.grid()
ax2 = plt.subplot(312, sharex=ax1)
plt.plot(QL_steps_loss_q, QL_loss_smooth_q, color=QL_color+[1.0])
plt.plot(QL_steps_loss_q, QL_loss_q, color=QL_color+[0.1], label='_nolegend_')
plt.setp(ax1.get_xticklabels(), visible=False)
plt.ylabel('Loss')
plt.grid()
ax3 = plt.subplot(313, sharex=ax1)
plt.plot(QL_steps_q, QL_winrate_q, color=QL_color+[1.0])
plt.ylabel('Win rate')
plt.xlabel('Steps (millions)')
plt.grid()
plt.tight_layout()
plt.savefig(os.path.join('Results/Q-Learning/Q-Learning-res_net2_d_0.pdf'))
plt.show()
"""
Explanation: Q-Learning
Q-Learning no Discount
End of explanation
"""
# Plot rewards and winrate
QL_batch_size = 32
QL_adjust_axis = 1000000
QL_smoothing = 10
QL_color = [0.5,0.1,0.2]
QL_reward_pd = pd.read_csv("Results/Q-Learning/evaluation_total_reward_net2_d_0_99.csv")
QL_reward_num_pd = pd.read_csv("Results/Q-Learning/evaluation_num_rewards_net2_d_0_99.csv")
QL_reward_pd['Value'] = QL_reward_pd['Value']/QL_reward_num_pd['Value'] # Scale with the number of rewards
QL_reward_smooth = smooth(QL_reward_pd['Value'],QL_smoothing)
QL_steps_reward = QL_reward_pd['Step']*QL_batch_size/QL_adjust_axis
QL_reward = QL_reward_pd['Value']
QL_loss_pd = pd.read_csv("Results/Q-Learning/train_loss_net2_d_0_99.csv")
QL_steps_loss = QL_loss_pd['Step']*QL_batch_size/QL_adjust_axis
QL_loss_smooth = smooth(QL_loss_pd['Value'],QL_smoothing)
QL_loss = QL_loss_pd['Value']
QL_winrate_pd = pd.read_csv("Results/Q-Learning/evaluation_win_rate_net2_d_0_99.csv")
QL_steps = QL_winrate_pd['Step']*QL_batch_size/QL_adjust_axis
QL_winrate = QL_winrate_pd['Value']/100
fig = plt.figure(figsize=(6, 6))
ax1 = plt.subplot(311)
plt.plot(QL_steps_reward, QL_reward, color=QL_color+[1.0])
plt.setp(ax1.get_xticklabels(), visible=False)
plt.ylabel('Reward')
plt.grid()
ax2 = plt.subplot(312, sharex=ax1)
plt.plot(QL_steps_loss, QL_loss_smooth, color=QL_color+[1.0])
plt.plot(QL_steps_loss, QL_loss, color=QL_color+[0.1], label='_nolegend_')
plt.setp(ax1.get_xticklabels(), visible=False)
plt.ylabel('Loss')
plt.grid()
ax3 = plt.subplot(313, sharex=ax1)
plt.plot(QL_steps, QL_winrate, color=QL_color+[1.0])
plt.ylabel('Win rate')
plt.xlabel('Steps (millions)')
plt.grid()
plt.tight_layout()
plt.savefig(os.path.join('Results/Q-Learning/Q-Learning-res_net2_d_0_99.pdf'))
plt.show()
QL_color_2 = [0.8,0.4,0.5]
fig = plt.figure(figsize=(6, 6))
ax1 = plt.subplot(311)
plt.plot(QL_steps_reward_q, QL_reward_q, color=QL_color+[1.0])
plt.plot(QL_steps_reward, QL_reward, color=QL_color_2+[1.0])
plt.setp(ax1.get_xticklabels(), visible=False)
plt.legend(['$\gamma$=0','$\gamma$=0.99'])
plt.ylabel('Reward')
plt.grid()
ax2 = plt.subplot(312, sharex=ax1)
plt.plot(QL_steps_loss_q, QL_loss_q, color=QL_color+[0.1], label='_nolegend_')
plt.plot(QL_steps_loss, QL_loss, color=QL_color_2+[0.1], label='_nolegend_')
plt.plot(QL_steps_loss_q, QL_loss_smooth_q, color=QL_color+[1.0])
plt.plot(QL_steps_loss, QL_loss_smooth, color=QL_color_2+[1.0])
plt.setp(ax1.get_xticklabels(), visible=False)
plt.ylabel('Loss')
plt.legend(['$\gamma$=0','$\gamma$=0.99'])
plt.grid()
ax3 = plt.subplot(313, sharex=ax1)
plt.plot(QL_steps_q, QL_winrate_q, color=QL_color+[1.0])
plt.plot(QL_steps, QL_winrate, color=QL_color_2+[1.0])
plt.ylabel('Win rate')
plt.legend(['$\gamma$=0','$\gamma$=0.99'])
plt.xlabel('Steps (millions)')
plt.grid()
plt.tight_layout()
plt.savefig(os.path.join('Results/Q-Learning/Q-Learning-res_net2_compare.pdf'))
plt.show()
"""
Explanation: Q-Learning Discount = 0.99
End of explanation
"""
# Plot rewards and winrate
ES_adjust_axis = 1000000
ES_smoothing = 10
ES_color = [0.2,0.5,0.1]
ES_color_test = [0.5,0.8,0.4]
with open(os.path.join('Results/Evolutionary/results.pkl'), 'rb') as f:
ES_results = pickle.load(f)
ES_steps = pd.Series(ES_results['steps'])/ES_adjust_axis
ES_rewards_mean = ES_results['mean_pop_rewards']
ES_rewards_mean_smooth = smooth(ES_rewards_mean,ES_smoothing)
ES_rewards_test = ES_results['test_rewards']
ES_rewards_test_smooth = smooth(ES_rewards_test,ES_smoothing)
ES_winrate = ES_results['win_rate']
fig = plt.figure(figsize=(6, 4))
ax1 = plt.subplot(211)
plt.plot(ES_steps, ES_rewards_mean_smooth, color=ES_color+[1.0])
plt.plot(ES_steps, ES_rewards_mean, color=ES_color+[0.1], label='_nolegend_')
plt.plot(ES_steps, ES_rewards_test_smooth, color=ES_color_test+[1.0])
plt.plot(ES_steps, ES_rewards_test, color=ES_color_test+[0.1], label='_nolegend_')
plt.setp(ax1.get_xticklabels(), visible=False)
plt.ylabel('Reward')
plt.legend(['Mean population reward', 'Test reward'])
plt.grid()
plt.subplot(212, sharex=ax1)
plt.plot(ES_steps, ES_winrate, color=ES_color+[1.0])
plt.xlabel('Steps (millions)')
plt.ylabel('Win rate')
plt.tight_layout()
plt.grid()
plt.savefig(os.path.join('Results/Evolutionary/evo-res.pdf'))
plt.show()
"""
Explanation: Evolutionary Strategies
End of explanation
"""
# Plot winrate in a single plot for comparison
fig = plt.figure(figsize=(6, 4))
plt.plot(PG_steps, PG_winrate, color=PG_color+[1.0])
plt.plot(QL_steps_q, QL_winrate_q, color=QL_color+[1.0])
plt.plot(ES_steps, ES_winrate, color=ES_color+[1.0])
plt.ylabel('Win Rate')
plt.legend(['Policy Gradient','Q-Learning','Evolutionary'])
plt.grid()
plt.tight_layout()
plt.savefig(os.path.join('Results/compare-res.pdf'))
plt.show()
policy_mines = [96.15, 98.46, 99.48, 98.60, 95.68, 89.89, 79.64, 64.83, 48.92, 35.42, 22.39, 14.46]
# Q-learning best model on 6x6 with 6 mines
q_mines = [0.91, 81.38, 60.75, 68.85, 87.29, 93.25, 90.39, 76.96, 52.11, 24.89, 9.44, 3.21]
# Q-learning best model on 6x6 with different number of mines (trained for 1 day with random number of mines
q_mines_random = [75.78, 100, 100, 99.03, 96.05, 88.45, 75.64, 60.37, 43.80, 28.28, 18.05, 11.70]
fig = plt.figure(figsize=(6,4))
plt.plot(range(1,13), policy_mines, color=PG_color+[1.0])
plt.plot(range(1,13), q_mines, color=QL_color+[1.0])
plt.plot(range(1,13), q_mines_random, color=QL_color_2+[1.0])
plt.title('Win rate vs Number of Bombs on 6x6 Minesweeper Board')
plt.legend(['Policy Gradient','Q-Learning','Q-learning (random mines)'])
plt.ylabel('Win Rate')
plt.xlabel('Number of Bombs')
plt.grid()
plt.tight_layout()
plt.savefig(os.path.join('Results/compare-res_mines.pdf'))
plt.show()
"""
Explanation: Comparison
End of explanation
"""
import os
filepath = os.getcwd()
os.chdir(filepath)
os.chdir('policy_gradients')
os.environ["KERAS_BACKEND"] = "tensorflow"
%run test_condensed_6x6_CNN.py
"""
Explanation: Testing the Agents
Policy Gradients
Execute the block below to test the Policy Gradient Agent on 10000 games.
If you want to see the agent play, you will have to launch the following bash-command from within the /policy_graident-folder:
python3 test_condensed_6x6_CNN.py --display 1
End of explanation
"""
### De-comment the code below to initiate training of the Policy Gradient agent
# import os
# os.chdir(filepath)
# os.chdir('policy_gradients/')
# %run train_condensed_6x6_v4.py
"""
Explanation: Training
Our training script will resume the training from the last checkpoint. You are free to interup training whenver you want, it saves progress every 100. epoch (100 epochs is equivalent to 20000 moves with current batch size).
To prevent overwriting saved the networks we present as our final solution, below is shown traning for an older network and non-converged network, but the process is the same. All the training and test files a found in the /policy_gradients folder
Win Rate is evaluated on 1000 games and printed every 400. epoch
End of explanation
"""
# Load the function
import os
import sys
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
os.chdir(filepath)
sys.path.append('q_learning')
from train import *
tf.reset_default_graph()
# Get the 90.20 win-rate
setup_model("test")
# Get the win-rates for different number of mines
tf.reset_default_graph()
# setup_model("test_random_mines") # runs forever
tf.reset_default_graph()
"""
Explanation: Q-Learning
End of explanation
"""
# Initialization
import multiprocessing as mp
import os
import pstats
import time
import gym
import numpy as np
from keras.layers import Dense, Conv2D, Flatten
from keras.models import Input, Model, Sequential, clone_model, load_model
from keras.optimizers import Adam
os.environ["KERAS_BACKEND"] = "theano"
os.chdir(filepath)
sys.path.append('evolutionary')
sys.path.append('evolutionary/Minesweeper')
from minesweeper_tk import Minesweeper
save_dir = ''
mp.freeze_support()
# Define fitness evaluation function
def fitnessfun(env, model):
total_reward = 0
done = False
observation = env.reset()
steps = 0
while not done and steps < rows*cols-mines:
# Predict action
action = model.predict(observation.reshape((1, rows, cols, n_chs)))
# Mask invalid moves (no need to renormalize when argmaxing)
mask = env.get_validMoves().flatten()
action[0, ~mask] = 0
# Step and get reward
observation, reward, done, info = env.step(np.argmax(action))
total_reward += reward
steps += 1
    win = done and reward == 0.9  # use '==', not 'is': 'is' compares object identity, not float value
return total_reward, {'steps': steps, 'win': win}
# Define test function
def testfun(env, model, episodes):
total_reward = 0
wins = 0
for i in range(episodes):
observation = env.reset()
done = False
t = 0
while not done and t < rows*cols-mines:
action = model.predict(observation.reshape((1, rows, cols, n_chs)))
observation, reward, done, info = env.step(np.argmax(action))
total_reward += reward
t += 1
if i % 100 == 0:
print('Episode: {: >3d}'.format(i))
        wins += 1 if done and reward == 0.9 else 0  # '==', not 'is', for float comparison
return total_reward/episodes, t, wins
# Define environment
rows = 4
cols = 4
mines = 4
OUT = 'FULL'
rewards_structure = {"win": 0.9, "loss": -1, "progress": 0.9, "noprogress": -0.3, "YOLO": -0.3}
env = Minesweeper(display=False, OUT=OUT, ROWS=rows, COLS=cols, MINES=mines, rewards=rewards_structure)
# Define model
obs = env.reset()
n_chs = obs.shape[-1]
n_hidden = [200, 200, 200, 200]
n_outputs = rows*cols
model = Sequential()
# Convs
model.add(Conv2D(15, (5, 5), input_shape=(rows, cols, n_chs), padding='same', activation='relu'))
model.add(Conv2D(35, (3, 3), padding='same', activation='relu'))
# Dense
model.add(Flatten())
model.add(Dense(units=n_hidden[0],
activation='relu',
kernel_initializer='glorot_uniform',
bias_initializer='zeros',
kernel_regularizer=None,#l2(reg),
bias_regularizer=None))#l2(reg)))
# Hidden
for n_units in n_hidden[1:]:
model.add(Dense(units=n_units,
activation='relu',
kernel_initializer='glorot_uniform',
bias_initializer='zeros',
kernel_regularizer=None,#l2(reg),
bias_regularizer=None))#l2(reg)))
# Output
model.add(Dense(units=n_outputs,
activation='softmax',
kernel_initializer='glorot_uniform',
bias_initializer='zeros',
kernel_regularizer=None,
bias_regularizer=None))
model.compile(optimizer='rmsprop', loss='mean_squared_error')
model.summary()
# (Train)
do_train = False
if do_train:
from context import core
from core.strategies import ES
regu = 0.01
nags = 20
lrte = 0.01
sigm = 0.01
cint = 100
    ngns = 1000  # number of generations to evolve (assumed value; not specified in this chunk)
    nwrk = mp.cpu_count()
e = ES(fun=fitnessfun, model=model, env=env, reg={'L2': regu}, population=nags, learning_rate=lrte, sigma=sigm, workers=nwrk, save_dir=save_dir)
e.evolve(ngns, checkpoint_every=cint, plot_every=cint)
# Load pretrained model
test_episodes = 1000
model = load_model('Results/Evolutionary/model.h5')
env = Minesweeper(display=False, OUT=OUT, ROWS=rows, COLS=cols, MINES=mines, rewards=rewards_structure)
# Run the game env and record rewards and win rate over 1000 test games
average_reward, _, wins = testfun(env, model, test_episodes)
print('Win rate: {}'.format(wins/test_episodes))
print('Average test reward: {}'.format(average_reward))
"""
Explanation: Evolutionary Strategies
End of explanation
"""
|
JJINDAHOUSE/deep-learning | sentiment-rnn/Sentiment_RNN.ipynb | mit | import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
"""
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
"""
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
"""
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
"""
from collections import Counter
# Create your dictionary that maps vocab words to integers here
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
"""
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
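As a minimal sketch of this 1-based encoding, here is the same pattern on a made-up two-review corpus (toy data, not the actual dataset):

```python
from collections import Counter

toy_reviews = ["great movie", "terrible movie"]
toy_words = " ".join(toy_reviews).split()

counts = Counter(toy_words)
vocab = sorted(counts, key=counts.get, reverse=True)
# enumerate(..., 1) starts the integer codes at 1, reserving 0 for padding later
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}

toy_ints = [[vocab_to_int[word] for word in review.split()]
            for review in toy_reviews]
print(vocab_to_int)  # 'movie' is the most frequent word, so it gets integer 1
```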
End of explanation
"""
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
"""
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
"""
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
"""
Explanation: If you built labels correctly, you should see the next output.
End of explanation
"""
# Filter out that review with 0 length
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
len(non_zero_idx)
"""
Explanation: Okay, a couple of issues here. We seem to have one review with zero length. And the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
"""
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
"""
Explanation: It turns out it's the final review that has zero length. But that might not always be the case, so let's make it more general.
End of explanation
"""
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype = int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
"""
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
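A quick sketch of that left-pad/truncate pattern on toy data (made-up numbers):

```python
import numpy as np

seq_len = 5
toy_ints = [[117, 18, 128],          # shorter than seq_len: left-pad with 0s
            [1, 2, 3, 4, 5, 6, 7]]   # longer than seq_len: keep the first 5

features = np.zeros((len(toy_ints), seq_len), dtype=int)
for i, row in enumerate(toy_ints):
    # the negative slice clamps for long rows, so both cases work in one line
    features[i, -len(row):] = np.array(row)[:seq_len]

print(features)
```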
End of explanation
"""
features[:10,:100]
"""
Explanation: If you built features correctly, it should look like the cell output below.
End of explanation
"""
split_frac = 0.8
split_idx = int(len(features) * 0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x) * 0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
"""
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
"""
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
"""
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually, larger is better performance-wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
"""
n_words = len(vocab_to_int)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name = 'inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name = 'labels')
    keep_prob = tf.placeholder(tf.float32, name = 'keep_prob')
"""
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and dropout keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
"""
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
"""
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
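Conceptually, an embedding lookup is just row indexing into the embedding matrix. A NumPy sketch with made-up sizes:

```python
import numpy as np

n_words, embed_size = 10, 4                  # toy vocabulary and embedding width
rng = np.random.RandomState(0)
embedding = rng.uniform(-1, 1, (n_words, embed_size))

inputs = np.array([[3, 1, 4],                # a batch of two 3-word "reviews"
                   [1, 5, 9]])
embed = embedding[inputs]                     # row lookup, like tf.nn.embedding_lookup

# each integer word id is replaced by its embed_size-dimensional vector
print(embed.shape)
```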
End of explanation
"""
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob = keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
"""
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning: adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
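The list replication itself is plain Python. Note that it repeats references to the same object rather than making copies, as this toy sketch shows (a stand-in class, not a real RNN cell):

```python
class Cell:
    """Stand-in for an RNN cell object."""
    pass

drop = Cell()
lstm_layers = 3
cells = [drop] * lstm_layers   # three references to the one wrapped cell

print(len(cells), all(c is drop for c in cells))
```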
So the final cell you're using in the network actually consists of multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
"""
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state = initial_state)
"""
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
"""
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
"""
Explanation: Output
We only care about the final output; we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
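The slice `outputs[:, -1]` is ordinary indexing on the time axis; a NumPy sketch with made-up shapes:

```python
import numpy as np

batch_size, n_steps, lstm_size = 2, 5, 3
outputs = np.arange(batch_size * n_steps * lstm_size).reshape(
    batch_size, n_steps, lstm_size)

last = outputs[:, -1]   # final time step's output for every example in the batch
print(last.shape)
```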
End of explanation
"""
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
"""
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
"""
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
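For example, running the function on toy arrays (the function body is repeated here so the sketch is self-contained):

```python
import numpy as np

def get_batches(x, y, batch_size=100):
    # drop the tail so every batch is full, then yield consecutive slices
    n_batches = len(x) // batch_size
    x, y = x[:n_batches * batch_size], y[:n_batches * batch_size]
    for ii in range(0, len(x), batch_size):
        yield x[ii:ii + batch_size], y[ii:ii + batch_size]

x = np.arange(10)
y = np.arange(10) * 10
batches = list(get_batches(x, y, batch_size=4))

print(len(batches))  # 10 // 4 = 2 full batches; the last 2 items are dropped
```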
End of explanation
"""
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
"""
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
"""
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
"""
Explanation: Testing
End of explanation
"""
|
gabyx/ApproxMVBB | tests/python/PlotTestResults.ipynb | mpl-2.0 | import sys,os,imp,re
import math
import numpy as np
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib.cm as cmx
from mpl_toolkits.mplot3d import Axes3D
mpl.rcParams['figure.figsize']=(6.0,4.0) #(6.0,4.0)
mpl.rcParams['font.size']=10 #10
mpl.rcParams['savefig.dpi']=400 #72
mpl.rcParams['figure.subplot.bottom']=.1 #.125
plt.rc('font', family='serif')
plt.rc('text', usetex=True)
# inline backend configuration
%matplotlib inline
%config InlineBackend.figure_format='svg'
%config InlineBackend.rc = {'figure.facecolor': 'white', 'figure.subplot.bottom': 0.125, 'figure.edgecolor': 'white', 'savefig.dpi': 400, 'figure.figsize': (12.0, 8.0), 'font.size': 10}
# GUI backend configuration
%matplotlib tk
mpl.get_configdir()
def plotCube(ax,minP = np.array([-1.0,-1.0,-1.0]), maxP=np.array([1.0,1.0,1.0]),
trans= np.array([0.0,0.0,0.0]),rotationMatrix=np.diag([1,1,1])):
from itertools import product, combinations
r = [-1, 1]
centerPos = (maxP + minP)/2.0;
#print(centerPos)
extent = (maxP - minP)/2.0;
points = np.array([(-1, -1, -1),
(-1, -1, 1),
(-1, 1, -1),
(-1, 1, 1),
(1, -1, -1),
(1, -1, 1),
(1, 1, -1),
(1, 1, 1)]);
for s, e in combinations(points, 2):
if np.sum(np.abs(s-e)) == r[1]-r[0]: # no diagonal lines
p1 = np.array(s,dtype=float); p2 = np.array(e,dtype=float);
#scale points
p1*=extent; p2*=extent;
#rotate and translate points
p1 = rotationMatrix.dot(p1 + centerPos) + trans;
p2 = rotationMatrix.dot(p2+centerPos) + trans;
ax.plot3D(*zip(p1,p2), color="b")
def plotAxis(ax,centerPos,A_IK,plotAxisScale=1):
for i,c in zip([0,1,2],['r','g','b']):
I_eK_i = A_IK[:,i];
lines = list(zip(centerPos,plotAxisScale*I_eK_i+centerPos))
v = Arrow3D(*lines, mutation_scale=50, lw=1, arrowstyle="-|>", color=c);
ax.plot3D(*lines, color=c)
ax.add_artist(v);
def plotAxis2d(ax,centerPos,u,v,plotAxisScale=1):
x = np.vstack((centerPos,plotAxisScale*u+centerPos))
y = np.vstack((centerPos,plotAxisScale*v+centerPos))
ax.plot(x.T[0],x.T[1],'r',lw=2)
ax.plot(y.T[0],y.T[1],'b',lw=2)
from matplotlib.patches import FancyArrowPatch
from mpl_toolkits.mplot3d import proj3d
class Arrow3D(FancyArrowPatch):
def __init__(self, xs, ys, zs, *args, **kwargs):
FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)
self._verts3d = xs, ys, zs
def draw(self, renderer):
xs3d, ys3d, zs3d = self._verts3d
xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)
self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))
FancyArrowPatch.draw(self, renderer)
def axisEqual3D(ax):
extents = np.array([getattr(ax, 'get_{}lim'.format(dim))() for dim in 'xyz'])
sz = extents[:,1] - extents[:,0]
centers = np.mean(extents, axis=1)
maxsize = max(abs(sz))
r = maxsize/2
for ctr, dim in zip(centers, 'xyz'):
getattr(ax, 'set_{}lim'.format(dim))(ctr - r, ctr + r)
# get all files
def loadFiles(folderPath,filePathRegex, keyNames):
files = [ os.path.join(folderPath,f) for f in os.listdir(folderPath) if os.path.isfile( os.path.join(folderPath,f) ) ]
filePaths=dict();
for f in files:
res = re.match(filePathRegex,f)
if(res):
key = res.group(1)
filePaths[key]= dict( [ (keyN, g) for keyN,g in zip(keyNames,res.groups()) ] )
filePaths[key]["path"] = f
return filePaths;
import struct
def readPointsMatrixBinary(filePath):
f = open(filePath, "br")
# read header (rows,columns)
(bigEndian,) = struct.unpack("?",f.read(1));
formatStringBO = "<"; # little endian
dtype = np.dtype("<f8")
if(bigEndian):
formatStringBO = ">";
dtype = np.dtype(">f8")
(rows,cols,nbytes) = struct.unpack("%sQQQ" % formatStringBO , f.read(3*np.int64(0).nbytes))
print("Matrix Binary: " , rows,cols,nbytes, "big Endian:", bigEndian)
return np.fromfile(f,dtype=dtype);
"""
Explanation: Table of Contents
<p><div class="lev1"><a href="#Convex-Hull-Tests"><span class="toc-item-num">1 </span>Convex Hull Tests</a></div><div class="lev1"><a href="#Min-Area-Rectangle-Tests"><span class="toc-item-num">2 </span>Min Area Rectangle Tests</a></div><div class="lev1"><a href="#Diameter-MVBB-Tests"><span class="toc-item-num">3 </span>Diameter MVBB Tests</a></div><div class="lev1"><a href="#MVBB-Tests"><span class="toc-item-num">4 </span>MVBB Tests</a></div>
End of explanation
"""
plt.close("all")
files = loadFiles("./" , ".*ConvexHullTest-(.*?)-(\w+)\.bin", ("name","prec") );
filesOut = loadFiles("./" , ".*ConvexHullTest-(.*?)-(\w+)-Out\.bin", ("name","prec"));
for i,f in files.items():
fOut = filesOut[i]; print(i,f,fOut)
fig = plt.figure("ConvexHullTest"+str(i),(10,10))
if(i!="PointsRandom14M"):
points = readPointsMatrixBinary(f["path"]); points=np.reshape(points,(-1,2))
hullP = readPointsMatrixBinary(fOut["path"]); hullP=np.reshape(hullP,(-1,2))
hullP = np.vstack((hullP,hullP[0]))
plt.plot(hullP.T[0],hullP.T[1],'b-o', ms=20, markerfacecolor='None')
if(i!="PointsRandom14M"):
plt.scatter(points.T[0],points.T[1],c='r')
if(len(points)<300):
nrRange = [(i,p[0],p[1]) for i,p in enumerate(points) ]
for x in nrRange:
plt.annotate('%s' % x[0], xy=x[1:3], textcoords='offset points') # <--
# if(len(hullP)<300):
# nrRange = [(i,p[0],p[1]) for i,p in enumerate(hullP) ]
# for x in nrRange:
# plt.annotate('%s' % x[0], xy=x[1:3], textcoords='offset points') # <--
"""
Explanation: Convex Hull Tests
End of explanation
"""
plt.close("all")
files = loadFiles("./" , ".*MinAreaRectangleTest-(.*?)-(\w+).bin", ("name","prec") );
filesOut = loadFiles("./" , ".*MinAreaRectangleTest-(.*?)-(\w+)-Out\.bin", ("name","prec"));
for i,f in files.items():
if(i in ["NoPoint"]): continue
fOut = filesOut[i]; print(i,f,fOut)
fig = plt.figure("MinAreaRectangleTest"+str(i),(10,10))
ax = fig.gca();
if(i!="PointsRandom10M"):
points = readPointsMatrixBinary(f["path"]); points=np.reshape(points,(-1,2))
rectData = readPointsMatrixBinary(fOut["path"]); rectData=np.reshape(rectData,(-1,2))
rect = rectData[0:4,]
rect = np.vstack([rect,rect[0]])
axis = rectData[4:,]
print(axis)
plt.plot(rect.T[0],rect.T[1],'r-', ms=20, markerfacecolor='None')
plotAxis2d(ax,rect[0],axis[0],axis[1]);
if(i!="PointsRandom10M"):
plt.scatter(points.T[0],points.T[1])
else:
plt.scatter(points.T[0][0:400],points.T[1][0:400])
plt.axis('equal')
"""
Explanation: Min Area Rectangle Tests
End of explanation
"""
plt.close("all")
files = loadFiles("./" , ".*DiameterOOBBTest-(.*?)-(\w+)\.bin", ("name","prec") );
filesOut = loadFiles("./" , ".*DiameterOOBBTest-(.*?)-(\w+)-Out\.txt", ("name","prec") );
filesOut2 = loadFiles("./" , ".*DiameterOOBBTest-(.*?)-(\w+)-Out2\.bin", ("name","prec") );
for i,f in files.items():
fOut = filesOut[i]; fOut2 = filesOut2[i]; print(i,f,fOut,fOut2)
fig = plt.figure("DiameterTest"+str(i),(10,10))
points = readPointsMatrixBinary(f["path"]); points=np.reshape(points,(-1,3))
OOBB = np.atleast_2d(np.loadtxt(fOut["path"]));
sampled = readPointsMatrixBinary(fOut2["path"]); sampled=np.reshape(sampled,(-1,3))
K_min = OOBB[0,0:3]
K_max = OOBB[1,0:3]
A_IK = OOBB[2:,0:3]
center = np.zeros((3,));
#print(A_IK,K_min,K_max,center)
ax = Axes3D(fig)
if(i not in ["PointsRandom14M", "Lucy"] ):
ax.scatter(points.T[0],points.T[1],points.T[2],c='b')
else:
plt.scatter(points.T[0][0:2000],points.T[1][0:2000])
ax.scatter(sampled.T[0],sampled.T[1],sampled.T[2],c='r', marker='o', s=200)
plotCube(ax,K_min,K_max,center,A_IK) # A_IK = R_KI (rotation from I to K)
plotAxis(ax,center,A_IK,1)
plotAxis(ax,center,np.identity(3),0.5)
axisEqual3D(ax)
"""
Explanation: Diameter MVBB Tests
End of explanation
"""
plt.close("all")
files = loadFiles("./" , ".*MVBBTest-(.*?)-(\w+).bin", ("name","prec") );
filesOut = loadFiles("./" , ".*MVBBTest-(.*?)-(\w+)-Out\.txt", ("name","prec"));
for i,f in files.items():
fOut = filesOut[i];
print(i,f,fOut);
fig = plt.figure("MVBBTest" + str(i),(10,10))
points = readPointsMatrixBinary(f["path"]); points=np.reshape(points,(-1,3))
OOBB = np.atleast_2d(np.loadtxt(fOut["path"]));
K_min = OOBB[0,0:3]
K_max = OOBB[1,0:3]
A_IK = OOBB[2:5,0:3]
center = np.zeros((3,));
print(A_IK,K_min,K_max,center)
ax = Axes3D(fig)
if(len(points) < 10000):
ax.scatter(points.T[0],points.T[1],points.T[2],c='b')
else:
ax.scatter(points.T[0][0:10000],points.T[1][0:10000],points.T[2][0:10000],c='b')
plotCube(ax,K_min,K_max,center,A_IK) # A_IK = R_KI (rotation from I to K)
plotAxis(ax,center,A_IK,1)
plotAxis(ax,center,np.identity(3),0.5)
axisEqual3D(ax)
"""
Explanation: MVBB Tests
End of explanation
"""
|
jbocharov-mids/W207-Machine-Learning | John_Bocharov_p1.ipynb | apache-2.0 | # This tells matplotlib not to try opening a new window for each plot.
%matplotlib inline
# Import a bunch of libraries.
import time
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator
from sklearn.pipeline import Pipeline
from sklearn.datasets import fetch_mldata
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix
from sklearn.linear_model import LinearRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
# Set the randomizer seed so results are the same each time.
np.random.seed(0)
"""
Explanation: Project 1: Digit Classification with KNN and Naive Bayes
In this project, you'll implement your own image recognition system for classifying digits. Read through the code and the instructions carefully and add your own code where indicated. Each problem can be addressed succinctly with the included packages -- please don't add any more. Grading will be based on writing clean, commented code, along with a few short answers.
As always, you're welcome to work on the project in groups and discuss ideas on the course wall, but <b> please prepare your own write-up (with your own code). </b>
If you're interested, check out these links related to digit recognition:
Yann Lecun's MNIST benchmarks: http://yann.lecun.com/exdb/mnist/
Stanford Streetview research and data: http://ufldl.stanford.edu/housenumbers/
End of explanation
"""
# Load the digit data either from mldata.org, or once downloaded to data_home, from disk. The data is about 53MB so this cell
# should take a while the first time your run it.
mnist = fetch_mldata('MNIST original', data_home='~/datasets/mnist')
X, Y = mnist.data, mnist.target
# Rescale grayscale values to [0,1].
X = X / 255.0
# Shuffle the input: create a random permutation of the integers between 0 and the number of data points and apply this
# permutation to X and Y.
# NOTE: Each time you run this cell, you'll re-shuffle the data, resulting in a different ordering.
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, Y = X[shuffle], Y[shuffle]
print 'data shape: ', X.shape
print 'label shape:', Y.shape
# Set some variables to hold test, dev, and training data.
test_data, test_labels = X[61000:], Y[61000:]
dev_data, dev_labels = X[60000:61000], Y[60000:61000]
train_data, train_labels = X[:60000], Y[:60000]
mini_train_data, mini_train_labels = X[:1000], Y[:1000]
"""
Explanation: Load the data. Notice that we are splitting the data into training, development, and test. We also have a small subset of the training data called mini_train_data and mini_train_labels that you should use in all the experiments below, unless otherwise noted.
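The permutation-based shuffle keeps each label paired with its example; a tiny sketch on toy data:

```python
import numpy as np

np.random.seed(0)
X_toy = np.arange(12).reshape(6, 2)   # six toy "examples", two features each
Y_toy = np.arange(6)                  # label i belongs to row i of X_toy

shuffle = np.random.permutation(np.arange(X_toy.shape[0]))
X_s, Y_s = X_toy[shuffle], Y_toy[shuffle]

# the same permutation is applied to both arrays, so pairs stay aligned
print(all((X_s[i] == X_toy[Y_s[i]]).all() for i in range(6)))
```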
End of explanation
"""
#def P1(num_examples=10):
### STUDENT START ###
# Credit where due... some inspiration drawn from:
# https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/fig/mnist.py
# example_as_pixel_matrix():
# transforms a 784 element pixel into a 28 x 28 pixel matrix
def example_as_pixel_matrix(example):
return np.reshape(example, (-1, 28))
# add_example_to_figure():
# given an existing figure, number of rows, columns, and position,
# adds a subplot with the example to the figure
def add_example_to_figure(example,
figure,
subplot_rows,
subplot_cols,
subplot_number):
matrix = example_as_pixel_matrix(example)
subplot = figure.add_subplot(subplot_rows, subplot_cols, subplot_number)
subplot.imshow(matrix, cmap='Greys', interpolation='Nearest')
# disable tick marks
subplot.set_xticks(np.array([]))
subplot.set_yticks(np.array([]))
# plot_examples():
# given a matrix of examples (digit, example#) => example,
# plots it with digits as rows and examples as columns
def plot_examples(examples):
figure = plt.figure()
shape = np.shape(examples)
rows = shape[0]
columns = shape[1]
subplot_index = 1
for digit, examples_for_digit in enumerate(examples):
for example_index, example in enumerate(examples_for_digit):
add_example_to_figure(example,
figure,
rows,
columns,
subplot_index
)
subplot_index = subplot_index + 1
figure.tight_layout()
plt.show()
# plot_one_example():
# given an example, plots only that example, typically
# for debugging or diagnostics
def plot_one_example(example):
examples = [ [ example ] ]
plot_examples(examples)
# select_indices_of_digit():
# given an array of digit lables, selects the indices of
# labels that match a desired digit
def select_indices_of_digit(labels, digit):
return [i for i, label in enumerate(labels) if label == digit]
# take_n_from():
# code readability sugar for taking a number of elements from an array
def take_n_from(count, array):
return array[:count]
# take_n_examples_by_digit():
# given a data set of examples, a label set, and a parameter n,
# creates a matrix where the rows are the digits 0-9, and the
# columns are the first n examples of each digit
def take_n_examples_by_digit(data, labels, n):
examples = [
data[take_n_from(n, select_indices_of_digit(labels, digit))]
for digit in range(10)
]
return examples
def P1(num_examples=10):
examples = take_n_examples_by_digit(mini_train_data, mini_train_labels, num_examples)
plot_examples(examples)
P1(10)
### STUDENT END ###
#P1(10)
"""
Explanation: (1) Create a 10x10 grid to visualize 10 examples of each digit. Python hints:
plt.rc() for setting the colormap, for example to black and white
plt.subplot() for creating subplots
plt.imshow() for rendering a matrix
np.array.reshape() for reshaping a 1D feature vector into a 2D matrix (for rendering)
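For the reshape hint specifically, a synthetic 784-element vector becomes a renderable 28x28 matrix like this (a sketch, not part of the assignment solution):

```python
import numpy as np

flat = np.arange(784)                  # stand-in for one flattened MNIST image
matrix = np.reshape(flat, (-1, 28))    # -1 lets NumPy infer the row count

print(matrix.shape)   # now suitable for plt.imshow(matrix, ...)
```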
End of explanation
"""
#def P2(k_values):
### STUDENT START ###
from sklearn.metrics import accuracy_score
# apply_k_nearest_neighbors():
# given the parameter k, training data and labels, and development data and labels,
# fit a k nearest neighbors classifier using the training data,
# test using development data, and output a report
def apply_k_nearest_neighbors(k,
training_data,
training_labels,
development_data,
development_labels):
neigh = KNeighborsClassifier(n_neighbors = k)
neigh.fit(training_data, training_labels)
predicted_labels = neigh.predict(development_data)
target_names = [ str(i) for i in range(10) ]
print '============ Classification report for k = ' + str(k) + ' ============'
print ''
print(classification_report(
development_labels,
predicted_labels,
target_names = target_names))
return accuracy_score(development_labels, predicted_labels, normalize = True)
def P2(k_values):
return [
apply_k_nearest_neighbors(k,
mini_train_data,
mini_train_labels,
dev_data,
dev_labels)
for k in k_values
]
k_values = [1, 3, 5, 7, 9]
P2(k_values)
### STUDENT END ###
#k_values = [1, 3, 5, 7, 9]
#P2(k_values)
"""
Explanation: (2) Evaluate a K-Nearest-Neighbors model with k = [1,3,5,7,9] using the mini training set. Report accuracy on the dev set. For k=1, show precision, recall, and F1 for each label. Which is the most difficult digit?
KNeighborsClassifier() for fitting and predicting
classification_report() for producing precision, recall, F1 results
End of explanation
"""
#def P3(train_sizes, accuracies):
### STUDENT START ###
# k_nearest_neighbors_timed_accuracy():
# given the parameter k, training data and labels, and development data and labels,
# fit a k nearest neighbors classifier using the training data,
# test using development data, and return the number of examples, prediction time,
# and accuracy as a Python dictionary
def k_nearest_neighbors_timed_accuracy(k,
training_data,
training_labels,
development_data,
development_labels):
neigh = KNeighborsClassifier(n_neighbors = k)
neigh.fit(training_data, training_labels)
start = time.time()
predicted_labels = neigh.predict(development_data)
end = time.time()
examples, dimensions = np.shape(training_data)
accuracy = accuracy_score(development_labels, predicted_labels, normalize = True)
return { 'examples' : examples, 'time' : end-start, 'accuracy' : accuracy }
def P3(train_sizes, accuracies):
k = 1
for train_size in train_sizes:
# sample train_size examples from the training set
current_train_data, current_train_labels = X[:train_size], Y[:train_size]
results = k_nearest_neighbors_timed_accuracy(k,
current_train_data,
current_train_labels,
dev_data,
dev_labels)
print(results)
accuracies.append(results['accuracy'])
train_sizes = [100, 200, 400, 800, 1600, 3200, 6400, 12800, 25000]
accuracies = [ ]
P3(train_sizes, accuracies)
### STUDENT END ###
#train_sizes = [100, 200, 400, 800, 1600, 3200, 6400, 12800, 25000]
#accuracies = []
#P3(train_sizes, accuracies)
"""
Explanation: ANSWER: The most difficult digit is 9, as measured by f1-score
(3) Using k=1, report dev set accuracy for the training set sizes below. Also, measure the amount of time needed for prediction with each training size.
time.time() gives a wall clock value you can use for timing operations
End of explanation
"""
#def P4():
### STUDENT START ###
from sklearn.linear_model import LogisticRegression
# fit_linear_regression():
# given arrays of training data sizes and corresponding accuracies,
# train and return a linear regression model for predicting accuracies
def fit_linear_regression(train_sizes, accuracies):
train_sizes_matrix = [ [ train_size ] for train_size in train_sizes ]
linear = LinearRegression()
linear.fit(train_sizes_matrix, accuracies)
return linear
# fit_logistic_regression():
# given arrays of training data sizes and corresponding accuracies,
# train and return a logistic regression model for predicting accuracies
def fit_logistic_regression(train_sizes, accuracies):
train_sizes_matrix = [ [ train_size ] for train_size in train_sizes ]
logistic = LogisticRegression()
logistic.fit(train_sizes_matrix, accuracies)
return logistic
def P4():
full_training_size = 60000
linear = fit_linear_regression(train_sizes, accuracies)
linear_prediction = linear.predict(full_training_size)
print('Linear model prediction for '
+ str(full_training_size) + ' : ' + str(linear_prediction[0]))
logistic = fit_logistic_regression(train_sizes, accuracies)
logistic_prediction = logistic.predict(full_training_size)
print('Logistic model prediction for '
+ str(full_training_size) + ' : ' + str(logistic_prediction[0]))
P4()
### STUDENT END ###
#P4()
"""
Explanation: (4) Fit a regression model that predicts accuracy from training size. What does it predict for n=60000? What's wrong with using regression here? Can you apply a transformation that makes the predictions more reasonable?
Remember that the sklearn fit() functions take an input matrix X and output vector Y. So each input example in X is a vector, even if it contains only a single value.
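One reasonable transformation (and a hedge against the pitfalls of a raw linear fit; note also that sklearn's LogisticRegression is a classifier and will not accept continuous accuracy targets) is to regress the log-odds of accuracy against log training size, then map the prediction back through the sigmoid so it stays in (0, 1). A minimal sketch with made-up accuracies; the real values come from P3:

```python
import numpy as np

# hypothetical accuracies for illustration only; substitute the values measured in P3
sizes = np.array([100., 200., 400., 800., 1600., 3200.])
accs  = np.array([0.70, 0.76, 0.81, 0.85, 0.88, 0.90])

logit = np.log(accs / (1.0 - accs))              # map (0, 1) -> (-inf, inf)
slope, intercept = np.polyfit(np.log(sizes), logit, 1)

z = slope * np.log(60000.0) + intercept
pred = 1.0 / (1.0 + np.exp(-z))                  # back through the sigmoid
# pred is guaranteed to lie in (0, 1), unlike a raw linear extrapolation
```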
End of explanation
"""
#def P5():
### STUDENT START ###
# train_k_nearest_neighbors():
# given the parameter k, training data and labels, and development data and labels,
# fit a k nearest neighbors classifier using the training data
def train_k_nearest_neighbors(k,
training_data,
training_labels):
neigh = KNeighborsClassifier(n_neighbors = k)
neigh.fit(training_data, training_labels)
return neigh
# most_confused():
# given a confusion matrix
# returns a sequence that comprises the two most confused digits, and errors between them
def most_confused(confusion):
rows, columns = np.shape(confusion)
worst_row, worst_column, worst_errors = 0, 1, 0
# iterate through the upper triangle, ignoring the diagonals
# confused is the sum for each pair of indices
for row in range(rows):
for column in range(row + 1, columns):
errors = confusion[row][column] + confusion[column][row]
if errors > worst_errors:
worst_row, worst_column, worst_errors = row, column, errors
return ( worst_row, worst_column, worst_errors )
# select_pairwise_error_indices():
# given a predictions vector, actual label vector, and the digits of interest
# returns an array of indices where the digits were confused
def select_pairwise_error_indices(predictions, labels, confused_low, confused_high):
error_indices = [ ]
for i, prediction in enumerate(predictions):
label = labels[i]
if ((prediction == confused_low and label == confused_high) or
(prediction == confused_high and label == confused_low)):
error_indices.append(i)
return error_indices
def P5():
k = 1
neigh = train_k_nearest_neighbors(k, train_data, train_labels)
development_predicted = neigh.predict(dev_data)
confusion = confusion_matrix(dev_labels, development_predicted)
confused_low, confused_high, confusion_errors = most_confused(confusion)
print('Most confused digits are: ' + str(confused_low) + ' and ' + str(confused_high)
+ ', with ' + str(confusion_errors) + ' total confusion errors')
error_indices = select_pairwise_error_indices(
development_predicted, dev_labels, confused_low, confused_high)
error_examples = [ dev_data[error_indices] ]
plot_examples(error_examples)
return confusion
P5()
### STUDENT END ###
#P5()
"""
Explanation: ANSWER: OLS/linear models aren't designed to respect the probability range (0,1) and can produce probabilities > 1 or < 0 (e.g. 1.24). A logistic regression model is a straightforward fix, as it produces predictions in the valid probability range (0.0 - 1.0) by design.
Fit a 1-NN and output a confusion matrix for the dev data. Use the confusion matrix to identify the most confused pair of digits, and display a few example mistakes.
confusion_matrix() produces a confusion matrix
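The "most confused pair" logic boils down to summing the two off-diagonal cells for each pair of labels. A tiny self-contained sketch (toy labels, not the real MNIST data):

```python
import numpy as np

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 0, 2, 2]

# confusion matrix by hand: rows are true labels, columns are predictions
cm = np.zeros((3, 3), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t][p] += 1

# for each unordered pair (i, j), total errors = cm[i][j] + cm[j][i]
pairs = [(i, j, cm[i][j] + cm[j][i])
         for i in range(3) for j in range(i + 1, 3)]
worst = max(pairs, key=lambda triple: triple[2])
# worst == (0, 1, 2): labels 0 and 1 are confused with each other twice
```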
End of explanation
"""
import itertools
# blur():
# blurs an image by averaging adjacent pixels
def blur(image):
pixel_matrix = example_as_pixel_matrix(image)
blurred_image = []
rows, columns = np.shape(pixel_matrix)
for row in range(rows):
for column in range(columns):
# take the mean of the 9-pixel neighborhood (in clause)
# but guard against running off the edges of the matrix (if clause)
value = np.mean(list(
pixel_matrix[i][j]
for i, j
in itertools.product(
range(row - 1, row + 2),
range(column - 1, column + 2)
)
if (i >= 0) and (j >= 0) and (i < rows) and (j < columns)
))
blurred_image.append(value)
return blurred_image
# blur_images():
# blurs a collection of images
def blur_images(images):
blurred = [ blur(image) for image in images ]
return blurred
# Do this in batches since iPythonNB seems to hang on large batches
train_data_0k = train_data[:10000]
blurred_train_data_0k = blur_images(train_data_0k)
train_data_1k = train_data[10000:20000]
blurred_train_data_1k = blur_images(train_data_1k)
train_data_2k = train_data[20000:30000]
blurred_train_data_2k = blur_images(train_data_2k)
train_data_3k = train_data[30000:40000]
blurred_train_data_3k = blur_images(train_data_3k)
train_data_4k = train_data[40000:50000]
blurred_train_data_4k = blur_images(train_data_4k)
train_data_5k = train_data[50000:60000]
blurred_train_data_5k = blur_images(train_data_5k)
blurred_dev_data = blur_images(dev_data)
blurred_train_data = (
blurred_train_data_0k
+ blurred_train_data_1k
+ blurred_train_data_2k
+ blurred_train_data_3k
+ blurred_train_data_4k
+ blurred_train_data_5k
)
#def P6():
### STUDENT START ###
def P6():
k = 1
neigh_blurred_train = train_k_nearest_neighbors(k, blurred_train_data, train_labels)
neigh_unblurred_train = train_k_nearest_neighbors(k, train_data, train_labels)
predicted_blurred_train_unblurred_dev = (
neigh_blurred_train.predict(dev_data)
)
predicted_unblurred_train_blurred_dev = (
neigh_unblurred_train.predict(blurred_dev_data)
)
predicted_blurred_train_blurred_dev = (
neigh_blurred_train.predict(blurred_dev_data)
)
print('Accuracy for blurred training, unblurred dev:')
print(accuracy_score(
dev_labels, predicted_blurred_train_unblurred_dev, normalize = True))
print('Accuracy for unblurred training, blurred dev:')
print(accuracy_score(
dev_labels, predicted_unblurred_train_blurred_dev, normalize = True))
print('Accuracy for blurred training, blurred dev:')
print(accuracy_score(
dev_labels, predicted_blurred_train_blurred_dev, normalize = True))
P6()
### STUDENT END ###
#P6()
"""
Explanation: (6) A common image processing technique is to smooth an image by blurring. The idea is that the value of a particular pixel is estimated as the weighted combination of the original value and the values around it. Typically, the blurring is Gaussian -- that is, the weight of a pixel's influence is determined by a Gaussian function over the distance to the relevant pixel.
Implement a simplified Gaussian blur by just using the 8 neighboring pixels: the smoothed value of a pixel is a weighted combination of the original value and the 8 neighboring values. Try applying your blur filter in 3 ways:
- preprocess the training data but not the dev data
- preprocess the dev data but not the training data
- preprocess both training and dev data
Note that there are Gaussian blur filters available, for example in scipy.ndimage.filters. You're welcome to experiment with those, but you are likely to get the best results with the simplified version I described above.
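A weighted 3x3 version (the simplified Gaussian weighting the prompt describes, as opposed to a plain per-pixel mean) can be sketched like this; the binomial kernel is one common discrete approximation of a Gaussian:

```python
import numpy as np

# 3x3 binomial kernel, a common discrete approximation of a Gaussian
kernel = np.array([[1., 2., 1.],
                   [2., 4., 2.],
                   [1., 2., 1.]])
kernel /= kernel.sum()

def gaussian_blur(img):
    """Blur a 2-D image with the 3x3 kernel; edges handled by replication."""
    padded = np.pad(img, 1, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out
```

Because the weights sum to 1, a constant image is unchanged by the blur.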
End of explanation
"""
#def P7():
### STUDENT START ###
from sklearn.metrics import accuracy_score
# binarize_example():
# Turn all pixels below 0.5 (or threshold) -> 0, greater -> 1
def binarize_example(example, threshold = 0.5):
binarized = [ 1 if value > threshold else 0 for value in example ]
return binarized
# binarize_examples():
# Apply binarization to a set of example
def binarize_examples(examples, threshold = 0.5):
binarized = [ binarize_example(example, threshold) for example in examples ]
return binarized
# ternarize_example():
# Turn all pixels below 1/3 (or threshold) -> 0, 1/3 through 2/3 -> 1, greater -> 2
def ternarize_example(example, threshold_low = 0.33333333, threshold_high = 0.66666666):
ternarized = [
0 if value < threshold_low else 1 if value < threshold_high else 2
for value in example
]
return ternarized
# ternarize_examples():
# Apply ternarization to a set of example
def ternarize_examples(examples, threshold_low = 0.33333333, threshold_high = 0.66666666):
ternarized = [
ternarize_example(example, threshold_low, threshold_high)
for example in examples
]
return ternarized
def P7():
binarized_train_data = binarize_examples(train_data)
binary_naive_bayes = BernoulliNB()
binary_naive_bayes.fit(binarized_train_data, train_labels)
binarized_dev_data = binarize_examples(dev_data)
binary_naive_bayes_predicted = binary_naive_bayes.predict(binarized_dev_data)
target_names = [ str(i) for i in range(10) ]
print('============ Classification report for binarized ============')
print('')
print(classification_report(
dev_labels,
binary_naive_bayes_predicted,
target_names = target_names))
print(' Accuracy score: ')
print(accuracy_score(dev_labels, binary_naive_bayes_predicted, normalize = True))
ternarized_train_data = ternarize_examples(train_data)
ternary_naive_bayes = MultinomialNB()
ternary_naive_bayes.fit(ternarized_train_data, train_labels)
ternarized_dev_data = ternarize_examples(dev_data)
ternary_naive_bayes_predicted = ternary_naive_bayes.predict(ternarized_dev_data)
print('============ Classification report for ternarized ============')
print('')
print(classification_report(
dev_labels,
ternary_naive_bayes_predicted,
target_names = target_names))
print(' Accuracy score: ')
print(accuracy_score(dev_labels, ternary_naive_bayes_predicted, normalize = True))
P7()
### STUDENT END ###
#P7()
"""
Explanation: ANSWER: Blurring the training data but not the development data gave the best accuracy.
(7) Fit a Naive Bayes classifier and report accuracy on the dev data. Remember that Naive Bayes estimates P(feature|label). While sklearn can handle real-valued features, let's start by mapping the pixel values to either 0 or 1. You can do this as a preprocessing step, or with the binarize argument. With binary-valued features, you can use BernoulliNB. Next try mapping the pixel values to 0, 1, or 2, representing white, grey, or black. This mapping requires MultinomialNB. Does the multi-class version improve the results? Why or why not?
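Both quantizations can be done in one vectorized step each (a sketch; thresholds chosen to match the description above):

```python
import numpy as np

pixels = np.array([0.0, 0.2, 0.6, 1.0])

binary = (pixels > 0.5).astype(int)             # {0, 1} for BernoulliNB
ternary = np.digitize(pixels, [1/3., 2/3.])     # {0, 1, 2} for MultinomialNB
```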
End of explanation
"""
#def P8(alphas):
### STUDENT START ###
def P8(alphas):
binarized_train_data = binarize_examples(train_data)
bernoulli_naive_bayes = BernoulliNB()
grid_search = GridSearchCV(bernoulli_naive_bayes, alphas, verbose = 3)
grid_search.fit(binarized_train_data, train_labels)
return grid_search
alphas = {'alpha': [0.0, 0.0001, 0.001, 0.01, 0.1, 0.5, 1.0, 2.0, 10.0]}
nb = P8(alphas)
print(nb.best_params_)
### STUDENT END ###
#alphas = {'alpha': [0.0, 0.0001, 0.001, 0.01, 0.1, 0.5, 1.0, 2.0, 10.0]}
#nb = P8(alphas)
#print nb.best_params_
"""
Explanation: ANSWER:
(8) Use GridSearchCV to perform a search over values of alpha (the Laplace smoothing parameter) in a Bernoulli NB model. What is the best value for alpha? What is the accuracy when alpha=0? Is this what you'd expect?
Note that GridSearchCV partitions the training data so the results will be a bit different than if you used the dev data for evaluation.
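What alpha actually does in a Bernoulli model is add pseudo-counts to the per-class feature estimates, P(x=1 | class) = (count + alpha) / (n + 2*alpha). A sketch showing why alpha = 0 is pathological for a pixel that is never on in a class's training examples:

```python
def smoothed_p(ones, n, alpha):
    """Laplace-smoothed estimate of P(pixel = 1 | class)."""
    return (ones + alpha) / (n + 2.0 * alpha)

p_zero = smoothed_p(0, 5, 0.0)   # 0.0: a test image with that pixel on gets
                                 # probability 0 for the whole class
p_one  = smoothed_p(0, 5, 1.0)   # 1/7: small but non-zero
```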
End of explanation
"""
#def P9():
### STUDENT END ###
def train_and_score_gaussian(
training_data, training_labels, development_data, development_labels):
model = GaussianNB().fit(training_data, training_labels)
predictions = model.predict(development_data)
print(accuracy_score(development_labels, predictions, normalize = True))
return model
def P9():
print('Accuracy score of Gaussian Naive Bayes (uncorrected): ')
gaussian_naive_bayes = train_and_score_gaussian(
train_data, train_labels, dev_data, dev_labels)
theta = gaussian_naive_bayes.theta_
for digit in range(10):
theta_figure = plt.figure()
theta_hist = plt.hist(theta[digit], bins = 100)
theta_hist_title = plt.title('Theta distribution for the digit ' + str(digit))
plt.show()
sigma = gaussian_naive_bayes.sigma_
for digit in range(10):
sigma_figure = plt.figure()
sigma_hist = plt.hist(sigma[digit], bins = 100)
sigma_hist_title = plt.title('Sigma distribution for the digit ' + str(digit))
plt.show()
return gaussian_naive_bayes
gnb = P9()
# Attempts to improve were unsuccessful, see attempts below
print('Issue: Many features have variance 0, ')
print('which means they "contribute" but the contribution is noise')
examples, pixels = np.shape(train_data)
def select_signal_pixel_indices(data):
indices = [ ]
examples, pixels = np.shape(data)
for pixel in range(pixels):
has_signal = False
for example in range(examples):
if data[example][pixel] > 0.0:
has_signal = True
if has_signal:
indices.append(pixel)
return indices
pixels_with_signal = select_signal_pixel_indices(train_data)
def select_pixels_with_signal(data, pixels_with_signal):
examples, pixels = np.shape(data)
selected = [
data[example][pixels_with_signal]
for example in range(examples)
]
return selected
signal_train_data = select_pixels_with_signal(train_data, pixels_with_signal)
signal_dev_data = select_pixels_with_signal(dev_data, pixels_with_signal)
print('Attempt #0 : only select non-0 pixels ')
train_and_score_gaussian(signal_train_data, train_labels, signal_dev_data, dev_labels)
def transform_attempt1(pixel):
return np.log(0.1 + pixel)
vectorized_transform_attempt1 = np.vectorize(transform_attempt1)
mapped_train_data = vectorized_transform_attempt1(train_data)
mapped_dev_data = vectorized_transform_attempt1(dev_data)
print('Attempt #1 : transform each pixel with log(0.1 + pixel) ')
train_and_score_gaussian(mapped_train_data, train_labels, mapped_dev_data, dev_labels)
def transform_attempt2(pixel):
return 0.0 if pixel < 0.0001 else 1.0
vectorized_transform_attempt2 = np.vectorize(transform_attempt2)
mapped_train_data = vectorized_transform_attempt2(train_data)
mapped_dev_data = vectorized_transform_attempt2(dev_data)
print('Attempt #2 : binarize all pixels with a very low threshold ')
train_and_score_gaussian(mapped_train_data, train_labels, mapped_dev_data, dev_labels)
### STUDENT END ###
#gnb = P9()
"""
Explanation: ANSWER: The best value for alpha is 0.0001
When alpha is 0, the accuracy is about one tenth. The effect of alpha = 0 is to ignore the training data, so this leaves the model to just pick a class. Since there are 10 classes (0-9) with roughly equal distributions, always picking the same class is expected to be right about 1/10 of the time.
(9) Try training a model using GaussianNB, which is intended for real-valued features, and evaluate on the dev data. You'll notice that it doesn't work so well. Try to diagnose the problem. You should be able to find a simple fix that returns the accuracy to around the same rate as BernoulliNB. Explain your solution.
Hint: examine the parameters estimated by the fit() method, theta_ and sigma_.
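The root of the problem: many border pixels are identically 0 in every training example, so their estimated variance is 0 and the Gaussian likelihood degenerates. A minimal sketch of masking out constant features before fitting (apply the same mask to train and dev data):

```python
import numpy as np

X = np.array([[0.0, 0.3, 0.0],
              [0.0, 0.7, 0.0],
              [0.0, 0.5, 0.1]])

keep = X.var(axis=0) > 0.0   # True only for columns that actually vary
X_kept = X[:, keep]          # drop the zero-variance (constant) features
```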
End of explanation
"""
#def P10(num_examples):
### STUDENT START ###
def generate_example(log_probabilities):
pixels = [
1.0 if np.random.rand() <= np.exp( log_probability ) else 0.0
for log_probability in log_probabilities
]
return pixels
# more than 10 x 10 gets scaled too small
def plot_10_examples(binary_naive_bayes):
per_digit_log_probabilities = binary_naive_bayes.feature_log_prob_
examples = [
[
generate_example(per_digit_log_probabilities[digit])
for example in range(10)
]
for digit in range(10)
]
plot_examples(examples)
def P10(num_examples):
binarized_train_data = binarize_examples(train_data)
binary_naive_bayes = BernoulliNB().fit(binarized_train_data, train_labels)
page = 0
while page < num_examples:
plot_10_examples(binary_naive_bayes)
page = page + 10
P10(20)
### STUDENT END ###
#P10(20)
"""
Explanation: ANSWER:
(10) Because Naive Bayes is a generative model, we can use the trained model to generate digits. Train a BernoulliNB model and then generate a 10x20 grid with 20 examples of each digit. Because you're using a Bernoulli model, each pixel output will be either 0 or 1. How do the generated digits compare to the training digits?
You can use np.random.rand() to generate random numbers from a uniform distribution
The estimated probability of each pixel is stored in feature_log_prob_. You'll need to use np.exp() to convert a log probability back to a probability.
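Sampling one binary pixel per estimated probability is a one-liner once the log probabilities are exponentiated back. A sketch with made-up values standing in for one row of feature_log_prob_:

```python
import numpy as np

rng = np.random.RandomState(0)              # seeded for reproducibility
log_p = np.log(np.array([0.9, 0.1, 0.5]))   # stand-in for feature_log_prob_[digit]

p = np.exp(log_p)                           # back to plain probabilities
sample = (rng.rand(3) < p).astype(int)      # one Bernoulli draw per pixel
```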
End of explanation
"""
#def P11(buckets, correct, total):
### STUDENT START ###
buckets = [0.5, 0.9, 0.999, 0.99999, 0.9999999, 0.999999999, 0.99999999999, 0.9999999999999, 1.0]
correct = [0 for i in buckets]
total = [0 for i in buckets]
def train_binarized_bernoulli(training_data, training_labels, alpha = 0.0001):
binarized_train_data = binarize_examples(training_data)
binary_naive_bayes = BernoulliNB(alpha = alpha)
binary_naive_bayes.fit(binarized_train_data, training_labels)
return binary_naive_bayes
def find_bucket_index(buckets, posterior):
index = None
for i in range(len(buckets)):
if index == None and posterior <= buckets[i]:
index = i
return index
def score_by_posterior_buckets(
binary_naive_bayes, test_data, test_labels,
buckets, correct, total):
predictions = binary_naive_bayes.predict(test_data)
posteriors = binary_naive_bayes.predict_proba(test_data)
confidences = [
posteriors[index][predictions[index]]
for index in range(len(predictions))
]
for index, confidence in enumerate(confidences):
bucket_index = find_bucket_index(buckets, confidence)
total[bucket_index] = total[bucket_index] + 1
if predictions[index] == test_labels[index]:
correct[bucket_index] = correct[bucket_index] + 1
def P11(buckets, correct, total):
binary_naive_bayes = train_binarized_bernoulli(
train_data, train_labels)
binarized_dev_data = binarize_examples(dev_data)
score_by_posterior_buckets(binary_naive_bayes, binarized_dev_data, dev_labels,
buckets, correct, total)
P11(buckets, correct, total)
for i in range(len(buckets)):
accuracy = 0.0
if (total[i] > 0): accuracy = float(correct[i]) / float(total[i])
print('p(pred) <= %.13f total = %3d accuracy = %.3f' % (buckets[i], total[i], accuracy))
### STUDENT END ###
#buckets = [0.5, 0.9, 0.999, 0.99999, 0.9999999, 0.999999999, 0.99999999999, 0.9999999999999, 1.0]
#correct = [0 for i in buckets]
#total = [0 for i in buckets]
#P11(buckets, correct, total)
#for i in range(len(buckets)):
# accuracy = 0.0
# if (total[i] > 0): accuracy = correct[i] / total[i]
# print 'p(pred) <= %.13f total = %3d accuracy = %.3f' %(buckets[i], total[i], accuracy)
"""
Explanation: ANSWER: Many of the generated digits are recognizable. However, they lack the connected strokes of hand-drawn digits because each pixel is sampled independently.
(11) Remember that a strongly calibrated classifier is rougly 90% accurate when the posterior probability of the predicted class is 0.9. A weakly calibrated classifier is more accurate when the posterior is 90% than when it is 80%. A poorly calibrated classifier has no positive correlation between posterior and accuracy.
Train a BernoulliNB model with a reasonable alpha value. For each posterior bucket (think of a bin in a histogram), you want to estimate the classifier's accuracy. So for each prediction, find the bucket the maximum posterior belongs to and update the "correct" and "total" counters.
How would you characterize the calibration for the Naive Bayes model?
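Mapping a posterior to its bucket is just a first-boundary search over the sorted bucket edges; a small sketch with a shortened bucket list:

```python
buckets = [0.5, 0.9, 0.999, 1.0]

def bucket_index(posterior):
    """Index of the first bucket boundary the posterior does not exceed."""
    for i, edge in enumerate(buckets):
        if posterior <= edge:
            return i
    return len(buckets) - 1   # guard against floating-point overshoot
```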
End of explanation
"""
#def P12():
### STUDENT START ###
### STUDENT END ###
#P12()
"""
Explanation: ANSWER: The model is poorly calibrated - all probability buckets are over-confident, many drastically so.
(12) EXTRA CREDIT
Try designing extra features to see if you can improve the performance of Naive Bayes on the dev set. Here are a few ideas to get you started:
- Try summing the pixel values in each row and each column.
- Try counting the number of enclosed regions; 8 usually has 2 enclosed regions, 9 usually has 1, and 7 usually has 0.
Make sure you comment your code well!
End of explanation
"""
|
georgetown-analytics/machine-learning | archive/notebook/nist_clustering.ipynb | mit | from lxml import html
import requests
from __future__ import print_function
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans, MiniBatchKMeans
from time import time
"""
Explanation: Clustering NIST headlines and descriptions
adapted from https://github.com/star-is-here/open_data_day_dc
Introduction:
In this workshop we show you an example of a workflow in data science from initial data ingestion, cleaning, modeling, and ultimately clustering. In this example we scrape the news feed of the National Institute of Standards and Technology (NIST). For those not in the know, NIST is comprised of multiple research centers which include:
* Center for Nanoscale Science and Technology (CNST)
* Engineering Laboratory (EL)
* Information Technology Laboratory (ITL)
* NIST Center for Neutron Research (NCNR)
* Material Measurement Laboratory (MML)
* Physical Measurement Laboratory (PML)
This makes it an easy target for topic modeling, a way of identifying patterns in a corpus that uses clustering.
End of explanation
"""
print("Retrieving data from NIST...")
# Retrieve the data from the web page.
page = requests.get('https://www.nist.gov/news-events/news/search?combine=&field_campus_tid=All&term_node_tid_depth_1=All&date_filter%5Bmin%5D%5Bdate%5D=January+01%2C+2016&date_filter%5Bmax%5D%5Bdate%5D=June+30%2C+2016&items_per_page=200')
# Use html module to parse it out and store in tree.
tree = html.fromstring(page.content)
# Create list of news headlines and descriptions.
# This required obtaining the xpath of the elements by examining the web page.
list_of_headlines = tree.xpath('//h3[@class="nist-teaser__title"]/a/text()')
list_of_descriptions = tree.xpath('//div[@class="field-body field--body nist-body nist-teaser__content"]/text()')
#Combine each headline and description into one value in a list
news=[]
for each_headline, each_description in zip(list_of_headlines, list_of_descriptions):
news.append(each_headline + ' ' + each_description)
print("Last item in list retrieved: %s" % news[-1])
"""
Explanation: Get the Data
Building the list of headlines and descriptions
We request NIST news through the search URL shown in the code, which returns up to 200 articles posted between January 1 and June 30, 2016.
We then pass the retrieved content to our HTML parser and use XPath to pull out the headline elements (class "nist-teaser__title") and the description elements (class "field-body field--body nist-body nist-teaser__content").
We then merge each headline and its description into one entry in the list because we don't need to differentiate between title and description.
End of explanation
"""
print("Extracting features from the training dataset using a sparse vectorizer")
t0 = time()
# Create a sparse word occurrence frequency matrix of the most frequent words
# with the following parameters:
# Maximum document frequency = half the total documents
# Minimum document frequency = two documents
# Toss out common English stop words.
# Note: the documents themselves are passed to fit_transform() below;
# TfidfVectorizer's `input` argument only selects 'content'/'file'/'filename' mode.
vectorizer = TfidfVectorizer(max_df=0.5, min_df=2, stop_words='english')
# This calculates the counts
X = vectorizer.fit_transform(news)
print("done in %fs" % (time() - t0))
print("n_samples: %d, n_features: %d" % X.shape)
print()
"""
Explanation: Term Frequency-Inverse Document Frequency
The weight of a term that occurs in a document is proportional to the term frequency.
Term frequency is the number of times a term occurs in a document.
Inverse document frequency diminishes the weight of terms that occur very frequently in the document set and increases the weight of terms that occur rarely.
Convert collection of documents to TF-IDF matrix
We now call a TF-IDF vectorizer to create a sparse matrix with term frequency-inverse document frequency weights:
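The weights can be computed by hand on a toy corpus to see the reweighting in action (plain idf shown; sklearn's TfidfVectorizer uses a smoothed variant and L2-normalizes each row):

```python
import numpy as np

docs = [["nist", "lab"], ["nist", "neutron"]]
vocab = ["nist", "lab", "neutron"]

tf = np.array([[d.count(w) for w in vocab] for d in docs], dtype=float)
df = (tf > 0).sum(axis=0)                  # document frequency per term
idf = np.log(float(len(docs)) / df)        # rare terms get boosted
tfidf = tf * idf
# "nist" appears in every document, so its idf is log(1) = 0 and its weight vanishes
```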
End of explanation
"""
# Set the number of clusters to 15
k = 15
# Initialize the kMeans cluster model.
km = KMeans(n_clusters=k, init='k-means++', max_iter=100)
print("Clustering sparse data with %s" % km)
t0 = time()
# Pass the model our sparse matrix with the TF-IDF counts.
km.fit(X)
print("done in %0.3fs" % (time() - t0))
print()
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(k):
print("Cluster %d:" % (i+1), end='')
for ind in order_centroids[i, :10]:
print(' %s' % terms[ind], end='')
print()
"""
Explanation: Let's do some clustering!
I happen to know there are 15 subject areas at NIST:
- Bioscience & Health
- Building and Fire Research
- Chemistry
- Electronics & Telecommunications
- Energy
- Environment/Climate
- Information Technology
- Manufacturing
- Materials Science
- Math
- Nanotechnology
- Physics
- Public Safety & Security
- Quality
- Transportation
So, why don't we cheat and set the number of clusters to 15?
Then we call the KMeans clustering model from sklearn and set an upper bound to the number of iterations for fitting the data to the model.
Finally we list out each centroid and the top 10 terms associated with each centroid.
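The top-terms listing relies on argsort over each centroid row; in miniature (toy centroid weights and a toy vocabulary, not the real NIST terms):

```python
import numpy as np

terms = ["energy", "physics", "quality"]
centers = np.array([[0.1, 0.9, 0.3],
                    [0.8, 0.2, 0.5]])

order = centers.argsort()[:, ::-1]              # per-row indices, largest weight first
top2 = [[terms[i] for i in row[:2]] for row in order]
# top2 == [["physics", "quality"], ["energy", "quality"]]
```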
End of explanation
"""
|
skaae/Recipes | examples/ImageNet Pretrained Network (VGG_S).ipynb | mit | !wget https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg_cnn_s.pkl
"""
Explanation: Introduction
This example demonstrates using a network pretrained on ImageNet for classification. The model used was converted from the VGG_CNN_S model (http://arxiv.org/abs/1405.3531) in Caffe's Model Zoo.
For details of the conversion process, see the example notebook "Using a Caffe Pretrained Network - CIFAR10".
License
The model is licensed for non-commercial use only
Download the model (393 MB)
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import lasagne
from lasagne.layers import InputLayer, DenseLayer, DropoutLayer
from lasagne.layers.dnn import Conv2DDNNLayer as ConvLayer
from lasagne.layers import MaxPool2DLayer as PoolLayer
from lasagne.layers import LocalResponseNormalization2DLayer as NormLayer
from lasagne.utils import floatX
"""
Explanation: Setup
End of explanation
"""
net = {}
net['input'] = InputLayer((None, 3, 224, 224))
net['conv1'] = ConvLayer(net['input'], num_filters=96, filter_size=7, stride=2)
net['norm1'] = NormLayer(net['conv1'], alpha=0.0001) # caffe has alpha = alpha * pool_size
net['pool1'] = PoolLayer(net['norm1'], pool_size=3, stride=3)
net['conv2'] = ConvLayer(net['pool1'], num_filters=256, filter_size=5)
net['pool2'] = PoolLayer(net['conv2'], pool_size=2, stride=2)
net['conv3'] = ConvLayer(net['pool2'], num_filters=512, filter_size=3, pad=1)
net['conv4'] = ConvLayer(net['conv3'], num_filters=512, filter_size=3, pad=1)
net['conv5'] = ConvLayer(net['conv4'], num_filters=512, filter_size=3, pad=1)
net['pool5'] = PoolLayer(net['conv5'], pool_size=3, stride=3)
net['fc6'] = DenseLayer(net['pool5'], num_units=4096)
net['drop6'] = DropoutLayer(net['fc6'], p=0.5)
net['fc7'] = DenseLayer(net['drop6'], num_units=4096)
net['drop7'] = DropoutLayer(net['fc7'], p=0.5)
net['fc8'] = DenseLayer(net['drop7'], num_units=1000, nonlinearity=lasagne.nonlinearities.softmax)
output_layer = net['fc8']
"""
Explanation: Define the network
End of explanation
"""
import pickle
model = pickle.load(open('vgg_cnn_s.pkl', 'rb'))
CLASSES = model['synset words']
MEAN_IMAGE = model['mean image']
lasagne.layers.set_all_param_values(output_layer, model['values'])
"""
Explanation: Load the model parameters and metadata
End of explanation
"""
import urllib
index = urllib.urlopen('http://www.image-net.org/challenges/LSVRC/2012/ori_urls/indexval.html').read()
image_urls = index.split('<br>')
np.random.shuffle(image_urls)
image_urls = image_urls[:5]
"""
Explanation: Trying it out
Get some test images
We'll download the ILSVRC2012 validation URLs and pick a few at random
End of explanation
"""
import skimage.transform
def prep_image(url):
ext = url.split('.')[-1]
im = plt.imread(urllib.urlopen(url), ext)
# Resize so smallest dim = 256, preserving aspect ratio
h, w, _ = im.shape
if h < w:
im = skimage.transform.resize(im, (256, w*256/h), preserve_range=True)
else:
im = skimage.transform.resize(im, (h*256/w, 256), preserve_range=True)
# Central crop to 224x224
h, w, _ = im.shape
im = im[h//2-112:h//2+112, w//2-112:w//2+112]
rawim = np.copy(im).astype('uint8')
# Shuffle axes to c01
im = np.swapaxes(np.swapaxes(im, 1, 2), 0, 1)
# Convert to BGR
im = im[::-1, :, :]
im = im - MEAN_IMAGE
return rawim, floatX(im[np.newaxis])
"""
Explanation: Helper to fetch and preprocess images
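The central-crop arithmetic in prep_image (a 224x224 window around the image centre) works out like this:

```python
h, w = 300, 256   # after the resize, the smaller dimension is 256

top, left = h // 2 - 112, w // 2 - 112
crop_box = (top, top + 224, left, left + 224)
# crop_box == (38, 262, 16, 240): 224 rows and 224 columns, centred
```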
End of explanation
"""
for url in image_urls:
try:
rawim, im = prep_image(url)
prob = np.array(lasagne.layers.get_output(output_layer, im, deterministic=True).eval())
top5 = np.argsort(prob[0])[-1:-6:-1]
plt.figure()
plt.imshow(rawim.astype('uint8'))
plt.axis('off')
for n, label in enumerate(top5):
plt.text(250, 70 + n * 20, '{}. {}'.format(n+1, CLASSES[label]), fontsize=14)
except IOError:
print('bad url: ' + url)
"""
Explanation: Process test images and print top 5 predicted labels
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.1/examples/single_spots.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.1,<2.2"
"""
Explanation: Single Star with Spots
Setup
IMPORTANT NOTE: if using spots on contact systems or single stars, make sure to use 2.1.15 or later as the 2.1.15 release fixed a bug affecting spots in these systems.
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_star()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
b.add_spot(radius=30, colat=80, long=0, relteff=0.9)
"""
Explanation: Adding Spots
Let's add one spot to our star. Since there is only one star, the spot will automatically attach without needing to provide component (as is needed in the binary with spots example
End of explanation
"""
print(b['spot'])
"""
Explanation: Spot Parameters
A spot is defined by the colatitude and longitude of its center, its angular radius, and the ratio of temperature of the spot to the local intrinsic value.
End of explanation
"""
times = np.linspace(0, 10, 11)
b.set_value('period', 10)
b.add_dataset('mesh', times=times, columns=['teffs'])
b.run_compute(distortion_method='rotstar', irrad_method='none')
afig, mplfig = b.plot(x='us', y='vs', fc='teffs',
animate=True, save='single_spots_1.gif', save_kwargs={'writer': 'imagemagick'})
"""
Explanation: The 'colat' parameter defines the colatitude on the star measured from its North (spin) Pole. The 'long' parameter measures the longitude of the spot - with longitude = 0 being defined as pointing towards the observer at t0 for a single star. See the spots tutorial for more details.
End of explanation
"""
b.set_value('t0', 5)
b.run_compute(distortion_method='rotstar', irrad_method='none')
afig, mplfig = b.plot(x='us', y='vs', fc='teffs',
animate=True, save='single_spots_2.gif', save_kwargs={'writer': 'imagemagick'})
"""
Explanation: If we set t0 to 5 instead of zero, then the spot will cross the line-of-sight at t=5 (since the spot's longitude is 0).
End of explanation
"""
b.set_value('incl', 0)
b.run_compute(distortion_method='rotstar', irrad_method='none')
afig, mplfig = b.plot(x='us', y='vs', fc='teffs',
animate=True, save='single_spots_3.gif', save_kwargs={'writer': 'imagemagick'})
"""
Explanation: And if we change the inclination to 0, we'll be looking at the north pole of the star. This clearly illustrates the right-handed rotation of the star. At time=t0=5 the spot will now be pointing in the negative y-direction.
End of explanation
"""
|
Upward-Spiral-Science/claritycontrol | code/Advanced Texture Based Clarity Visualization.ipynb | apache-2.0 | import os
PATH="/Users/albertlee/claritycontrol/code/scripts"
os.chdir(PATH)
import clarity as cl # I wrote this module for easier operations on data
import matplotlib.pyplot as plt
import jgraph as ig
import clarity.resources as rs
import csv,gc # garbage memory collection :)
import matplotlib
#import matplotlib.pyplot as plt
import numpy as np
from skimage import data, img_as_float
from skimage import exposure
BINS=32 # histogram bins
RANGE=(10.0,300.0)
matplotlib.rcParams['font.size'] = 8
def plot_img_and_hist(img, axes, bins=256):
"""Plot an image along with its histogram and cumulative histogram.
"""
img = img_as_float(img)
ax_img, ax_hist = axes
ax_cdf = ax_hist.twinx()
# Display image
ax_img.imshow(img, cmap=plt.cm.gray)
ax_img.set_axis_off()
ax_img.set_adjustable('box-forced')
# Display histogram
ax_hist.hist(img.ravel(), bins=bins, histtype='step', color='black')
ax_hist.ticklabel_format(axis='y', style='scientific', scilimits=(0, 0))
ax_hist.set_xlabel('Pixel intensity')
ax_hist.set_xlim(0, 1)
ax_hist.set_yticks([])
# Display cumulative distribution
img_cdf, bins = exposure.cumulative_distribution(img, bins)
ax_cdf.plot(bins, img_cdf, 'r')
ax_cdf.set_yticks([])
return ax_img, ax_hist, ax_cdf
import nibabel as nb
im = nb.load('../data/raw/Fear187.nii')
im = im.get_data()
img = im[:,:,:,0]
print(img)
# Equalization
img_eq = exposure.equalize_hist(im)
img_eqfinal = nb.Nifti1Image(img_eq, np.eye(4))
nb.save(img_eqfinal, "../data/raw/Fear187.nii")
"""
Explanation: Albert Lee
Step 1: Convert your data to the .nii format
This can be done easily using Jimmy Shen's MATLAB package for NIfTI analysis (http://www.mathworks.com/matlabcentral/fileexchange/8797-tools-for-nifti-and-analyze-image).
After downloading the zip file, extract it and add its files to the folder containing the data you want to convert. After adding that folder to your MATLAB path, simply type in the following two lines (modifying the file names):
nii = load_nii('filename_with_hdr_extension');
save_nii(nii, 'desired_filename_for_niftifile.nii');
Now you have converted to .nii successfully! NOTE: THIS PROCESS WILL TAKE A HUGE AMOUNT OF TIME SINCE THE FILES ARE VERY LARGE.
Step 2: Histogram Equalization
IMPORTANT: This code was largely adapted from scikit-image's histogram equalization example. All credit should go to them.
End of explanation
"""
c = cl.Clarity("Fear187")
c.loadImg().imgToPoints(threshold=0.02,sample=0.3).showHistogram(bins=256)
"""
Explanation: Step 3: Visualization. IMPORTANT: DO NOT RUN UNLESS YOU HAVE A SUPERCOMPUTER; it is unrealistic to run at this time.
Now that the data has been transformed we can visualize the new data.
End of explanation
"""
import clarity as cl
import clarity.resources as rs
c = cl.Clarity("Fear187")
c.loadImg().imgToPoints(threshold=0.02,sample=0.3).showHistogram(bins=256)
c.loadImg().imgToPoints(threshold=0.04,sample=0.5).savePoints()
c.loadPoints().show()
"""
Explanation: Let's compare the results to the pre-equalized histogram image data.
End of explanation
"""
|
sujitpal/polydlot | src/pytorch/06-echo-sequence-prediction.ipynb | apache-2.0 | from __future__ import division, print_function
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
import torch
import torch.nn as nn
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt
import os
import shutil
%matplotlib inline
DATA_DIR = "../../data"
NUM_CLASSES = 10
SEQUENCE_LENGTH = 5
BATCH_SIZE = 32
NUM_EPOCHS = 20
NUM_SAMPLES = 5000
# train_to_test, train_to_val
DATA_SPLITS = [0.7, 0.9]
EMBED_SIZE = NUM_CLASSES
# will vary from run to run but constant across the run
PREDICT_COL = np.random.randint(SEQUENCE_LENGTH)
MODEL_FILE = os.path.join(DATA_DIR, "torch-06-seq-pred-{:d}.model")
LEARNING_RATE = 1e-2
"""
Explanation: Echo Sequence Prediction
This is the first toy model from Jason Brownlee's Long Short Term Memory Networks with Python book. His book has implementations using Keras. This notebook contains an implementation using Pytorch. From section 6.2 of the book:
The echo sequence prediction problem is a contrived problem for demonstrating the memory
capability of the Vanilla LSTM. The task is that, given a sequence of random integers as input, to output the value of a random integer at a specific time input step that is not specified to the model.
For example, given the input sequence of random integers [5, 3, 2] and the chosen time
step was the second value, then the expected output is 3. Technically, this is a sequence
classification problem; it is formulated as a many-to-one prediction problem, where there are
multiple input time steps and one output time step at the end of the sequence.
End of explanation
"""
def generate_data(seq_len, num_classes, predict_col, num_samples):
ohe = OneHotEncoder(n_values=num_classes)
xs, ys = [], []
for i in range(num_samples):
random_seq = np.random.randint(0, num_classes, seq_len)
xs.append(ohe.fit_transform(random_seq.reshape(-1, 1)).todense())
ys.append(random_seq[predict_col])
X = np.array(xs)
y = np.array(ys)
return X, y
X, y = generate_data(SEQUENCE_LENGTH, NUM_CLASSES, PREDICT_COL, NUM_SAMPLES)
print(X.shape, y.shape)
def split_dataset(X, y, data_splits):
Xtv, Xtest, ytv, ytest = train_test_split(X, y, train_size=data_splits[0],
random_state=42)
Xtrain, Xval, ytrain, yval = train_test_split(Xtv, ytv, train_size=data_splits[1],
random_state=42)
return Xtrain, ytrain, Xval, yval, Xtest, ytest
Xtrain, ytrain, Xval, yval, Xtest, ytest = split_dataset(X, y, DATA_SPLITS)
print(Xtrain.shape, ytrain.shape, Xval.shape, yval.shape, Xtest.shape, ytest.shape)
"""
Explanation: Prepare Data
By default, Torch LSTMs expect their data as 3D tensors of shape (SEQUENCE_LENGTH, BATCH_SIZE, EMBEDDING_SIZE), according to this page. Since we construct the LSTM with batch_first=True, we instead set up our data as an array of shape (BATCH_SIZE, SEQUENCE_LENGTH, NUM_CLASSES). Embedding is 1-hot encoding.
End of explanation
"""
class EchoClassifier(nn.Module):
"""
    Input: one-hot encoded sequences (?, 5, 10)
    LSTM: hidden dimension 25; the output at the last time step is the context vector
    FC: output dimension 10 (one raw logit per class)
    """
    def __init__(self, seq_len, input_dim, hidden_dim, output_dim):
        super(EchoClassifier, self).__init__()
        self.hidden_dim = hidden_dim
        # define layers
        self.lstm = nn.LSTM(input_dim, hidden_dim, 1,
                            batch_first=True,
                            dropout=0.2)
        self.fc1 = nn.Linear(hidden_dim, output_dim)
        # no explicit softmax here: nn.CrossEntropyLoss applies log-softmax
        # internally, so the network should return raw logits

    def forward(self, x):
        if torch.cuda.is_available():
            hidden = (Variable(torch.randn(1, x.size(0), self.hidden_dim).cuda()),
                      Variable(torch.randn(1, x.size(0), self.hidden_dim).cuda()))
        else:
            hidden = (Variable(torch.randn(1, x.size(0), self.hidden_dim)),
                      Variable(torch.randn(1, x.size(0), self.hidden_dim)))
        out, hidden = self.lstm(x, hidden)
        out = self.fc1(out[:, -1, :])
        return out
model = EchoClassifier(SEQUENCE_LENGTH, EMBED_SIZE, 25, NUM_CLASSES)
if torch.cuda.is_available():
model.cuda()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
"""
Explanation: Define Network
End of explanation
"""
def compute_accuracy(pred_var, true_var):
if torch.cuda.is_available():
ypred = pred_var.cpu().data.numpy()
ytrue = true_var.cpu().data.numpy()
else:
ypred = pred_var.data.numpy()
ytrue = true_var.data.numpy()
return accuracy_score(ypred, ytrue)
history = []
for epoch in range(NUM_EPOCHS):
num_batches = Xtrain.shape[0] // BATCH_SIZE
shuffled_indices = np.random.permutation(np.arange(Xtrain.shape[0]))
train_loss, train_acc = 0., 0.
for bid in range(num_batches):
Xbatch_data = Xtrain[shuffled_indices[bid * BATCH_SIZE : (bid+1) * BATCH_SIZE]]
ybatch_data = ytrain[shuffled_indices[bid * BATCH_SIZE : (bid+1) * BATCH_SIZE]]
Xbatch = Variable(torch.from_numpy(Xbatch_data).float())
ybatch = Variable(torch.from_numpy(ybatch_data).long())
if torch.cuda.is_available():
Xbatch = Xbatch.cuda()
ybatch = ybatch.cuda()
# initialize gradients
optimizer.zero_grad()
# forward
Ybatch_ = model(Xbatch)
loss = loss_fn(Ybatch_, ybatch)
# backward
loss.backward()
train_loss += loss.data[0]
_, ybatch_ = Ybatch_.max(1)
train_acc += compute_accuracy(ybatch_, ybatch)
optimizer.step()
# compute training loss and accuracy
train_loss /= num_batches
train_acc /= num_batches
# compute validation loss and accuracy
val_loss, val_acc = 0., 0.
num_val_batches = Xval.shape[0] // BATCH_SIZE
for bid in range(num_val_batches):
# data
Xbatch_data = Xval[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]
ybatch_data = yval[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]
Xbatch = Variable(torch.from_numpy(Xbatch_data).float())
ybatch = Variable(torch.from_numpy(ybatch_data).long())
if torch.cuda.is_available():
Xbatch = Xbatch.cuda()
ybatch = ybatch.cuda()
Ybatch_ = model(Xbatch)
loss = loss_fn(Ybatch_, ybatch)
val_loss += loss.data[0]
_, ybatch_ = Ybatch_.max(1)
val_acc += compute_accuracy(ybatch_, ybatch)
val_loss /= num_val_batches
val_acc /= num_val_batches
torch.save(model.state_dict(), MODEL_FILE.format(epoch+1))
print("Epoch {:2d}/{:d}: loss={:.3f}, acc={:.3f}, val_loss={:.3f}, val_acc={:.3f}"
.format((epoch+1), NUM_EPOCHS, train_loss, train_acc, val_loss, val_acc))
history.append((train_loss, val_loss, train_acc, val_acc))
losses = [x[0] for x in history]
val_losses = [x[1] for x in history]
accs = [x[2] for x in history]
val_accs = [x[3] for x in history]
plt.subplot(211)
plt.title("Accuracy")
plt.plot(accs, color="r", label="train")
plt.plot(val_accs, color="b", label="valid")
plt.legend(loc="best")
plt.subplot(212)
plt.title("Loss")
plt.plot(losses, color="r", label="train")
plt.plot(val_losses, color="b", label="valid")
plt.legend(loc="best")
plt.tight_layout()
plt.show()
"""
Explanation: Train Network
End of explanation
"""
saved_model = EchoClassifier(SEQUENCE_LENGTH, EMBED_SIZE, 25, NUM_CLASSES)
saved_model.load_state_dict(torch.load(MODEL_FILE.format(10)))
if torch.cuda.is_available():
saved_model.cuda()
ylabels, ypreds = [], []
num_test_batches = Xtest.shape[0] // BATCH_SIZE
for bid in range(num_test_batches):
Xbatch_data = Xtest[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]
ybatch_data = ytest[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]
Xbatch = Variable(torch.from_numpy(Xbatch_data).float())
ybatch = Variable(torch.from_numpy(ybatch_data).long())
if torch.cuda.is_available():
Xbatch = Xbatch.cuda()
ybatch = ybatch.cuda()
Ybatch_ = saved_model(Xbatch)
_, ybatch_ = Ybatch_.max(1)
if torch.cuda.is_available():
ylabels.extend(ybatch.cpu().data.numpy())
ypreds.extend(ybatch_.cpu().data.numpy())
else:
ylabels.extend(ybatch.data.numpy())
ypreds.extend(ybatch_.data.numpy())
print("Test accuracy: {:.3f}".format(accuracy_score(ylabels, ypreds)))
print("Confusion matrix")
print(confusion_matrix(ylabels, ypreds))
"""
Explanation: Test Network
End of explanation
"""
from IPython.core.display import display, HTML
def maybe_highlight(x, j):
if j == PREDICT_COL:
return """<span style="background-color:#FFFF00">""" + str(x) + "</span>"
else:
return str(x)
start = np.random.randint(Xtest.shape[0] - 10)
rand_seqs = np.argmax(Xtest[start:start + 10], axis=2)
rand_labels = ylabels[start:start + 10]
rand_preds = ypreds[start:start + 10]
html_str = ""
for i in range(10):
seq_str = "".join([maybe_highlight(x, j) for j, x in enumerate(rand_seqs[i])])
html_str += "{:s} {:d} {:d}<br/>".format(seq_str, rand_labels[i], rand_preds[i])
display(HTML(html_str))
for i in range(10):
os.remove(MODEL_FILE.format(i + 1))
"""
Explanation: Print Random block of test data with labels and predictions
End of explanation
"""
|
jochym/abinitio-workshop | notebooks/FononyDFT.ipynb | cc0-1.0 | # przygotowanie rysunku
t=ase.io.read('FIGs/MnAs_phM06.traj',index=':')
v=view(t,viewer='nglview')
v.custom_colors({'Mn':'green','As':'blue'})
v.view._remote_call("setSize", target="Widget", args=["500px", "500px"])
#v.view.center_view()
v.view.background='#ffc'
v.view.parameters=dict(clipDist=-200)
# display the figure
v
"""
Explanation: Workshop on Modelling in Nanophysics
Vibrations of the crystal structure - phonons
Jan Łażewski
Department of Computational Materials Research,
Institute of Nuclear Physics PAN, Kraków
<br />
<img src="FIGs/tyt.png" style="height:180px; margin-top:100px" />
<img src="FIGs/tyt0.png" style="height:650px" />
<img src="FIGs/tyt0.gif" style="height:650px" />
<img src="FIGs/tyt1.gif" style="height:650px" />
<img src="FIGs/tyt2.gif" style="height:650px" />
<img src="FIGs/tyt3.gif" style="height:650px" />
End of explanation
"""
# prepare the figure
t=ase.io.read('FIGs/MnAs_softM48_20.traj',index=':')
v=view(t,viewer='nglview')
v.custom_colors({'Mn':'green','As':'blue'})
v.view._remote_call("setSize", target="Widget", args=["500px", "500px"])
#v.view.center_view()
v.view.background='#ffc'
v.view.parameters=dict(clipDist=-200)
# display the figure
v
"""
Explanation: An aside
<br />
This lecture was prepared as a so-called live presentation in the IPython Notebook environment
<br />
<font color="blue">
special thanks to Prof. Paweł Jochym for his encouragement and the help he provided
</font>
End of explanation
"""
import ase.io
from ase.data import covalent_radii
from ase.data.colors import cpk_colors
from ipywidgets import IntSlider, FloatSlider, interactive, fixed, HBox
from glob import glob
#scene.caption= """Right button drag or Ctrl-drag to rotate "camera" to view scene.
#To zoom, drag with middle button or Alt/Option depressed, or use scroll wheel.
# On a two-button mouse, middle is left + right.
#Touch screen: pinch/extend to zoom, swipe or two-finger rotate."""
#clr={'Mn':color.green, 'As':color.blue}
clr={25:(0,0.51,0), 33:(0,0,0.95)}
def broom():
for o in scene.objects:
o.visible=False
class crystal:
def __init__( self, trj, fscale=1, rscale=1 ):
self.trj=trj
self.rscale=rscale
self.fscale=10**fscale
self.n=0
scene.fov=0.05
scene.background=vec(1,1,0.8)
scene.center=vec(*tuple(trj[0].get_center_of_mass()))
self.atoms=[sphere(pos=vec(*tuple(r)),
radius=self.rscale*covalent_radii[Z],
color=vec(*clr[Z]))
for Z, r in zip(trj[0].get_atomic_numbers(), trj[0].get_positions())]
self.forces=[arrow(pos=vec(*tuple(r)),
axis=vec(*tuple(f)),
color=color.red)
for r, f in zip(self.trj[0].get_positions(), self.fscale*trj[0].get_forces())]
def update_frame( self, n=None, fscale=None, rscale=None ):
if n is not None:
self.n=n
if fscale is not None:
self.fscale=10**fscale
if rscale is not None:
self.rscale=rscale
for a, fv, r, f, Z in zip(self.atoms, self.forces,
self.trj[self.n].get_positions(),
self.fscale*self.trj[self.n].get_forces(),
self.trj[self.n].get_atomic_numbers()):
a.pos=vec(*tuple(r))
a.radius=self.rscale*covalent_radii[Z]
fv.pos=vec(*tuple(r))
fv.axis=vec(*tuple(f))
def show(self, maxr=5, maxf=2.5):
w1=interactive(self.update_frame,
n=IntSlider(min=0,max=len(self.trj)-1,step=1,value=0,
description='n:'),
fscale=fixed(None),
rscale=fixed(None));
w2=interactive(self.update_frame,
n=fixed(None),
fscale=FloatSlider(min=1,max=maxf,value=1.0,
description='siła:'),
rscale=fixed(None));
w3=interactive(self.update_frame, n=fixed(None),
fscale=fixed(None),
rscale=FloatSlider(min=0.1,max=maxr,value=1.0,
description='promień:'));
return HBox([w1, w2, w3])
from vpython import scene, vec, sphere, arrow, color
broom()
#trj=ase.io.Trajectory('data/md_T_1000.traj')
#trj=ase.io.read('FIGs/OUTCAR1', index=':')
trj=[ase.io.read(fn) for fn in sorted(glob('FIGs/OUTCAR_??'))]
c=crystal(trj)
c.show(1,2.5)
"""
Explanation: Description of crystal vibrations in the harmonic approximation - the direct method
<br />
K. Parlinski, Z.Q. Li, and Y. Kawazoe, Phys. Rev. Lett. 78, 4063 (1997).
K. Parlinski, PHONON software (Cracow, Poland, 1997-2017), <br />
http://wolf.ifj.edu.pl/phonon/, http://www.computingformaterials.com/
Expansion of the potential energy in the harmonic approximation
$$
\newcommand{\mb}{\boldsymbol}
\newcommand{\h}{\hspace{-0.05cm}}
$$
In the harmonic approximation, the potential energy of the crystal lattice
$E\bigl(\bigl\{ {\mb R}({\mb n},\mu)\bigr\}\bigr)$ can be expanded in a series with respect to the displacements ${\mb U}({\mb n},\mu)$ of the atoms <br />
from their equilibrium positions:
$$
\begin{eqnarray}
E\bigl(\{ {\mb R}\}\bigr)\; =\; E_0\;\; + & & \h
\sum_{{\mb n},\mu}\frac{\partial
E\bigl(\bigl\{{\mb R}({\mb n},\mu)\bigr\}\bigr) }
{\partial {\mb U}({\mb n},\mu)}\,
{\mb U}({\mb n},\mu) \nonumber \\
+ & & \h
\frac{1}{2}\sum_{\substack{{\mb n},\mu \\ {\mb m},\nu}}
\frac{\partial^2E\bigl(\bigl\{ {\mb R}({\mb n},\mu)\bigr\}\bigr) }
{\partial {\mb U}({\mb n},\mu)\,\partial {\mb U}({\mb m},\nu)}
\,{\mb U}({\mb n},\mu)\,{\mb U}({\mb m},\nu)\, \text{,}
%\label{eq:harmE}
\end{eqnarray}
$$
where ${\mb R}({\mb n},\mu)$ denotes the position of atom $\mu$ in primitive
cell ${\mb n}$, and the summation runs over the whole crystal.
The force-constant matrix
<br />
In the expansion of $\,E\bigl(\bigl\{ {\mb R}({\mb n},\mu)\bigr\}\bigr)$ the first derivatives vanish due to the energy-minimum conditions at equilibrium, while the second derivatives define the force-constant matrix:
$$
\begin{eqnarray}
\Phi_{ij}({\mb n},\mu;{\mb m},\nu ) =
\frac{\partial^2 E\bigl(\bigl\{ {\mb R}({\mb n},\mu)\bigr\}\bigr) }
{\partial U_i({\mb n},\mu)\,\partial U_j({\mb m},\nu)}\, \text{,}
\end{eqnarray}
$$
which represents the interactions between atoms in the crystal.
<br />
<img src="FIGs/springModel1.png" style="height:300px; margin-top:-45px; margin-left:650px" />
Hellmann-Feynman (HF) forces
<br />
Every displacement ${\mb U}({\mb n},\mu)$, even of a single atom, gives rise to HF forces on all atoms in the system:
\begin{eqnarray}
{\mb F}({\mb n},\mu) = - \frac{\partial E}{\partial {\mb R}({\mb n},\mu)}\, .
\end{eqnarray}
End of explanation
"""
!jupyter nbconvert FononyDFT.ipynb --to slides
"""
Explanation: Dependence of the HF forces on the force constants
<br />
In the harmonic approximation, the energy expansion yields the relation:
$$
\begin{eqnarray}
F_i({\mb n},\mu) = -\sum_{{\mb m},\nu,j}
\Phi_{ij}({\mb n},\mu;{\mb m},\nu )
\,U_j({\mb m},\nu) .
\end{eqnarray}
$$
The dynamical matrix
<br />
We define the dynamical matrix as the Fourier transform of the
force-constant matrix:
$$
\begin{eqnarray}
{\mb D}({\mb k};\mu,\nu) = \frac{1}{\sqrt{M_\mu M_\nu}}
\sum_{\mb m} {\mb \Phi}(0,\mu;{\mb m},\nu)\;
\text{e}^{-2\pi i{\mb k}\cdot[ {\mb R}(0,\mu)-{\mb R}({\mb m},\nu)]} \text{,}
\end{eqnarray}
$$
where ${\mb k}$ is the wave vector, $M_\mu$ and $M_\nu$ are the masses of
atoms $\mu$ and $\nu$, and the summation runs over all atoms of the crystal.
The eigenvalue equation
<br />
Solving the eigenvalue equation:
$$
\begin{eqnarray}
{\mb D}({\mb k})\,{\mb e}({\mb k},j) = \omega^2({\mb k},j)\,{\mb e}({\mb k},j)
\end{eqnarray}
$$
by diagonalizing the dynamical matrix yields the vibrational frequencies
$\,\omega({\mb k},j)$
<br />
and the polarization vectors $\,{\mb e}({\mb k},j)$.
<img src="FIGs/DC2.png" style="width:1600px" />
Modelled systems
<br />
<img src="FIGs/str1.png" style="width:1600px" />
Modelled systems
<br />
<img src="FIGs/str2.png" style="width:1600px" />
Modelled systems
<br />
<img src="FIGs/str3.png" style="width:1600px" />
Modelled systems
<br />
<img src="FIGs/str4.png" style="width:1600px" />
Modelled systems
<br />
<img src="FIGs/str5.png" style="width:1600px" />
Modelled systems
<br />
<img src="FIGs/str6.png" style="width:1600px" />
Modelled systems
<br />
<img src="FIGs/str7.png" style="width:1600px" />
Modelled systems
<br />
<img src="FIGs/str8.png" style="width:1600px" />
Modelled systems
<br />
<img src="FIGs/str9.png" style="width:1600px" />
Computed dynamical properties
<br />
With the direct method one can compute, among others, the following material properties:
phonon dispersion relations
phonon peak intensities
Grüneisen parameters
polarization vectors + animation
total and partial DOS spectra
thermodynamic functions: E, S, F, G, c$_\text{v}$
Debye-Waller factors
thermal expansion
neutron and X-ray scattering
phase diagrams, stability limits of structures
EXAMPLE: phase transition in MnAs
End of explanation
"""
|
Kaggle/learntools | notebooks/game_ai/raw/tut3.ipynb | apache-2.0 | #$HIDE_INPUT$
import random
import numpy as np
# Gets board at next step if agent drops piece in selected column
def drop_piece(grid, col, mark, config):
next_grid = grid.copy()
for row in range(config.rows-1, -1, -1):
if next_grid[row][col] == 0:
break
next_grid[row][col] = mark
return next_grid
# Helper function for get_heuristic: checks if window satisfies heuristic conditions
def check_window(window, num_discs, piece, config):
return (window.count(piece) == num_discs and window.count(0) == config.inarow-num_discs)
# Helper function for get_heuristic: counts number of windows satisfying specified heuristic conditions
def count_windows(grid, num_discs, piece, config):
num_windows = 0
# horizontal
for row in range(config.rows):
for col in range(config.columns-(config.inarow-1)):
window = list(grid[row, col:col+config.inarow])
if check_window(window, num_discs, piece, config):
num_windows += 1
# vertical
for row in range(config.rows-(config.inarow-1)):
for col in range(config.columns):
window = list(grid[row:row+config.inarow, col])
if check_window(window, num_discs, piece, config):
num_windows += 1
# positive diagonal
for row in range(config.rows-(config.inarow-1)):
for col in range(config.columns-(config.inarow-1)):
window = list(grid[range(row, row+config.inarow), range(col, col+config.inarow)])
if check_window(window, num_discs, piece, config):
num_windows += 1
# negative diagonal
for row in range(config.inarow-1, config.rows):
for col in range(config.columns-(config.inarow-1)):
window = list(grid[range(row, row-config.inarow, -1), range(col, col+config.inarow)])
if check_window(window, num_discs, piece, config):
num_windows += 1
return num_windows
"""
Explanation: Introduction
In the previous tutorial, you learned how to build an agent with one-step lookahead. This agent performs reasonably well, but definitely still has room for improvement! For instance, consider the potential moves in the figure below. (Note that we use zero-based numbering for the columns, so the leftmost column corresponds to col=0, the next column corresponds to col=1, and so on.)
<center>
<img src="https://i.imgur.com/aAYyy2I.png" width=90%><br/>
</center>
With one-step lookahead, the red player picks one of column 5 or 6, each with 50% probability. But, column 5 is clearly a bad move, as it lets the opponent win the game in only one more turn. Unfortunately, the agent doesn't know this, because it can only look one move into the future.
In this tutorial, you'll use the minimax algorithm to help the agent look farther into the future and make better-informed decisions.
Minimax
We'd like to leverage information from deeper in the game tree. For now, assume we work with a depth of 3. This way, when deciding its move, the agent considers all possible game boards that can result from
1. the agent's move,
2. the opponent's move, and
3. the agent's next move.
We'll work with a visual example. For simplicity, we assume that at each turn, both the agent and opponent have only two possible moves. Each of the blue rectangles in the figure below corresponds to a different game board.
<center>
<img src="https://i.imgur.com/BrRe7Bu.png" width=90%><br/>
</center>
We have labeled each of the "leaf nodes" at the bottom of the tree with the score from the heuristic. (We use made-up scores in the figure. In the code, we'll use the same heuristic from the previous tutorial.) As before, the current game board is at the top of the figure, and the agent's goal is to end up with a score that's as high as possible.
But notice that the agent no longer has complete control over its score -- after the agent makes its move, the opponent selects its own move. And, the opponent's selection can prove disastrous for the agent! In particular,
- If the agent chooses the left branch, the opponent can force a score of -1.
- If the agent chooses the right branch, the opponent can force a score of +10.
Take the time now to check this in the figure, to make sure it makes sense to you!
With this in mind, you might argue that the right branch is the better choice for the agent, since it is the less risky option. Sure, it gives up the possibility of getting the large score (+40) that can only be accessed on the left branch, but it also guarantees that the agent gets at least +10 points.
This is the main idea behind the minimax algorithm: the agent chooses moves to get a score that is as high as possible, and it assumes the opponent will counteract this by choosing moves to force the score to be as low as possible. That is, the agent and opponent have opposing goals, and we assume the opponent plays optimally.
So, in practice, how does the agent use this assumption to select a move? We illustrate the agent's thought process in the figure below.
<center>
<img src="https://i.imgur.com/bWezUC3.png" width=90%><br/>
</center>
In the example, minimax assigns the move on the left a score of -1, and the move on the right is assigned a score of +10. So, the agent will select the move on the right.
Code
We'll use several functions from the previous tutorial. These are defined in the hidden code cell below. (Click on the "Code" button below if you'd like to view them.)
End of explanation
"""
# Helper function for minimax: calculates value of heuristic for grid
def get_heuristic(grid, mark, config):
num_threes = count_windows(grid, 3, mark, config)
num_fours = count_windows(grid, 4, mark, config)
num_threes_opp = count_windows(grid, 3, mark%2+1, config)
num_fours_opp = count_windows(grid, 4, mark%2+1, config)
score = num_threes - 1e2*num_threes_opp - 1e4*num_fours_opp + 1e6*num_fours
return score
"""
Explanation: We'll also need to slightly modify the heuristic from the previous tutorial, since the opponent is now able to modify the game board.
<center>
<img src="https://i.imgur.com/vQ8b1aX.png" width=70%><br/>
</center>
In particular, we need to check if the opponent has won the game by playing a disc. The new heuristic looks at each group of four adjacent locations in a (horizontal, vertical, or diagonal) line and assigns:
- 1000000 (1e6) points if the agent has four discs in a row (the agent won),
- 1 point if the agent filled three spots, and the remaining spot is empty (the agent wins if it fills in the empty spot),
- -100 points if the opponent filled three spots, and the remaining spot is empty (the opponent wins by filling in the empty spot), and
- -10000 (-1e4) points if the opponent has four discs in a row (the opponent won).
This is defined in the code cell below.
End of explanation
"""
# Uses minimax to calculate value of dropping piece in selected column
def score_move(grid, col, mark, config, nsteps):
next_grid = drop_piece(grid, col, mark, config)
score = minimax(next_grid, nsteps-1, False, mark, config)
return score
# Helper function for minimax: checks if agent or opponent has four in a row in the window
def is_terminal_window(window, config):
return window.count(1) == config.inarow or window.count(2) == config.inarow
# Helper function for minimax: checks if game has ended
def is_terminal_node(grid, config):
# Check for draw
if list(grid[0, :]).count(0) == 0:
return True
# Check for win: horizontal, vertical, or diagonal
# horizontal
for row in range(config.rows):
for col in range(config.columns-(config.inarow-1)):
window = list(grid[row, col:col+config.inarow])
if is_terminal_window(window, config):
return True
# vertical
for row in range(config.rows-(config.inarow-1)):
for col in range(config.columns):
window = list(grid[row:row+config.inarow, col])
if is_terminal_window(window, config):
return True
# positive diagonal
for row in range(config.rows-(config.inarow-1)):
for col in range(config.columns-(config.inarow-1)):
window = list(grid[range(row, row+config.inarow), range(col, col+config.inarow)])
if is_terminal_window(window, config):
return True
# negative diagonal
for row in range(config.inarow-1, config.rows):
for col in range(config.columns-(config.inarow-1)):
window = list(grid[range(row, row-config.inarow, -1), range(col, col+config.inarow)])
if is_terminal_window(window, config):
return True
return False
# Minimax implementation
def minimax(node, depth, maximizingPlayer, mark, config):
is_terminal = is_terminal_node(node, config)
valid_moves = [c for c in range(config.columns) if node[0][c] == 0]
if depth == 0 or is_terminal:
return get_heuristic(node, mark, config)
if maximizingPlayer:
value = -np.Inf
for col in valid_moves:
child = drop_piece(node, col, mark, config)
value = max(value, minimax(child, depth-1, False, mark, config))
return value
else:
value = np.Inf
for col in valid_moves:
child = drop_piece(node, col, mark%2+1, config)
value = min(value, minimax(child, depth-1, True, mark, config))
return value
"""
Explanation: In the next code cell, we define a few additional functions that we'll need for the minimax agent.
End of explanation
"""
# How deep to make the game tree: higher values take longer to run!
N_STEPS = 3
def agent(obs, config):
# Get list of valid moves
valid_moves = [c for c in range(config.columns) if obs.board[c] == 0]
# Convert the board to a 2D grid
grid = np.asarray(obs.board).reshape(config.rows, config.columns)
# Use the heuristic to assign a score to each possible board in the next step
scores = dict(zip(valid_moves, [score_move(grid, col, obs.mark, config, N_STEPS) for col in valid_moves]))
# Get a list of columns (moves) that maximize the heuristic
max_cols = [key for key in scores.keys() if scores[key] == max(scores.values())]
# Select at random from the maximizing columns
return random.choice(max_cols)
"""
Explanation: We won't describe the minimax implementation in detail, but if you want to read more technical pseudocode, here's the description from Wikipedia. (Note that the pseudocode can be safely skipped!)
<center>
<img src="https://i.imgur.com/BwP9tMD.png" width=60%>
</center>
Finally, we implement the minimax agent in the competition format. The N_STEPS variable is used to set the depth of the tree.
End of explanation
"""
from kaggle_environments import make, evaluate
# Create the game environment
env = make("connectx")
# The minimax agent plays one game round against a random agent
env.run([agent, "random"])
# Show the game
env.render(mode="ipython")
"""
Explanation: In the next code cell, we see the outcome of one game round against a random agent.
End of explanation
"""
#$HIDE_INPUT$
def get_win_percentages(agent1, agent2, n_rounds=100):
# Use default Connect Four setup
config = {'rows': 6, 'columns': 7, 'inarow': 4}
# Agent 1 goes first (roughly) half the time
outcomes = evaluate("connectx", [agent1, agent2], config, [], n_rounds//2)
# Agent 2 goes first (roughly) half the time
outcomes += [[b,a] for [a,b] in evaluate("connectx", [agent2, agent1], config, [], n_rounds-n_rounds//2)]
print("Agent 1 Win Percentage:", np.round(outcomes.count([1,-1])/len(outcomes), 2))
print("Agent 2 Win Percentage:", np.round(outcomes.count([-1,1])/len(outcomes), 2))
print("Number of Invalid Plays by Agent 1:", outcomes.count([None, 0]))
print("Number of Invalid Plays by Agent 2:", outcomes.count([0, None]))
get_win_percentages(agent1=agent, agent2="random", n_rounds=50)
"""
Explanation: And we check how we can expect it to perform on average.
End of explanation
"""
|
ogoann/StatisticalMethods | notes/InferenceSandbox.ipynb | gpl-2.0 | import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 5.0)
# the model parameters
a = np.pi
b = 1.6818
# my arbitrary constants
mu_x = np.exp(1.0) # see definitions above
tau_x = 1.0
s = 1.0
N = 50 # number of data points
# get some x's and y's
x = mu_x + tau_x*np.random.randn(N)
y = a + b*x + s*np.random.randn(N)
plt.plot(x, y, 'o');
"""
Explanation: Inference Sandbox
In this notebook, we'll mock up some data from the linear model, as reviewed here. Then it's your job to implement a Metropolis sampler and constrain the posterior distriubtion. The goal is to play with various strategies for accelerating the convergence and acceptance rate of the chain. Remember to check the convergence and stationarity of your chains, and compare them to the known analytic posterior for this problem!
Generate a data set:
End of explanation
"""
def lnPost(params, x, y):
# This is written for clarity rather than numerical efficiency. Feel free to tweak it.
a = params[0]
b = params[1]
lnp = 0.0
# Using informative priors to achieve faster convergence is cheating in this exercise!
# But this is where you would add them.
lnp += -0.5*np.sum((a+b*x - y)**2)
return lnp
"""
Explanation: Package up a log-posterior function.
End of explanation
"""
class ExactPosterior:
def __init__(self, x, y, a0, b0):
X = np.matrix(np.vstack([np.ones(len(x)), x]).T)
Y = np.matrix(y).T
self.invcov = X.T * X
self.covariance = np.linalg.inv(self.invcov)
self.mean = self.covariance * X.T * Y
self.a_array = np.arange(0.0, 6.0, 0.02)
self.b_array = np.arange(0.0, 3.25, 0.02)
self.P_of_a = np.array([self.marg_a(a) for a in self.a_array])
self.P_of_b = np.array([self.marg_b(b) for b in self.b_array])
self.P_of_ab = np.array([[self.lnpost(a,b) for a in self.a_array] for b in self.b_array])
self.P_of_ab = np.exp(self.P_of_ab)
self.renorm = 1.0/np.sum(self.P_of_ab)
self.P_of_ab = self.P_of_ab * self.renorm
self.levels = scipy.stats.chi2.cdf(np.arange(1,4)**2, 1) # confidence levels corresponding to contours below
self.contourLevels = self.renorm*np.exp(self.lnpost(a0,b0)-0.5*scipy.stats.chi2.ppf(self.levels, 2))
def lnpost(self, a, b): # the 2D posterior
z = self.mean - np.matrix([[a],[b]])
return -0.5 * (z.T * self.invcov * z)[0,0]
def marg_a(self, a): # marginal posterior of a
return scipy.stats.norm.pdf(a, self.mean[0,0], np.sqrt(self.covariance[0,0]))
def marg_b(self, b): # marginal posterior of b
return scipy.stats.norm.pdf(b, self.mean[1,0], np.sqrt(self.covariance[1,1]))
exact = ExactPosterior(x, y, a, b)
"""
Explanation: Convenience functions encoding the exact posterior:
End of explanation
"""
plt.plot(exact.a_array, exact.P_of_a);
plt.plot(exact.b_array, exact.P_of_b);
plt.contour(exact.a_array, exact.b_array, exact.P_of_ab, colors='blue', levels=exact.contourLevels);
plt.plot(a, b, 'o', color='red');
"""
Explanation: Demo some plots of the exact posterior distribution
End of explanation
"""
Nsamples = # fill in a number
samples = np.zeros((Nsamples, 2))
# put any more global definitions here
for i in range(Nsamples):
a_try, b_try = proposal() # propose new parameter value(s)
lnp_try = lnPost([a_try,b_try], x, y) # calculate posterior density for the proposal
if we_accept_this_proposal(lnp_try, lnp_current):
# do something
else:
# do something else
plt.rcParams['figure.figsize'] = (12.0, 3.0)
plt.plot(samples[:,0]);
plt.plot(samples[:,1]);
plt.rcParams['figure.figsize'] = (5.0, 5.0)
plt.plot(samples[:,0], samples[:,1]);
plt.rcParams['figure.figsize'] = (5.0, 5.0)
plt.hist(samples[:,0], 20, density=True, color='cyan');
plt.plot(exact.a_array, exact.P_of_a, color='red');
plt.rcParams['figure.figsize'] = (5.0, 5.0)
plt.hist(samples[:,1], 20, density=True, color='cyan');
plt.plot(exact.b_array, exact.P_of_b, color='red');
# If you know how to easily overlay the 2D sample and theoretical confidence regions, by all means do so.
"""
Explanation: Ok, you're almost ready to go! A decidedly minimal stub of a Metropolis loop appears below; of course, you don't need to stick exactly with this layout. Once again, after running a chain, be sure to
visually inspect traces of each parameter to see whether they appear converged
compare the marginal and joint posterior distributions to the exact solution to check whether they've converged to the correct distribution
(see the snippets farther down)
If you think you have a sampler that works well, use it to run some more chains from different starting points and compare them both visually and using the numerical convergence criteria covered in class.
Once you have a working sampler, the question is: how can we make it converge faster? Experiment! We'll compare notes in a bit.
End of explanation
"""
|
glouppe/scikit-optimize | examples/bayesian-optimization.ipynb | bsd-3-clause
import numpy as np
np.random.seed(777)
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (10, 6)
"""
Explanation: Bayesian optimization with skopt
Gilles Louppe, Manoj Kumar July 2016.
End of explanation
"""
noise_level = 0.1
def f(x, noise_level=noise_level):
return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2)) + np.random.randn() * noise_level
"""
Explanation: Problem statement
We are interested in solving $$x^* = \arg \min_x f(x)$$ under the constraints that
$f$ is a black box for which no closed form is known (nor its gradients);
$f$ is expensive to evaluate;
evaluations $y = f(x)$ of $f$ may be noisy.
Disclaimer. If you do not have these constraints, then there is certainly a better optimization algorithm than Bayesian optimization.
Bayesian optimization loop
For $t=1:T$:
Given observations $(x_i, y_i=f(x_i))$ for $i=1:t$, build a probabilistic model for the objective $f$. Integrate out all possible true functions, using Gaussian process regression.
optimize a cheap acquisition/utility function $u$ based on the posterior distribution for sampling the next point.
$$x_{t+1} = \arg \min_x u(x)$$
Exploit uncertainty to balance exploration against exploitation.
Sample the next observation $y_{t+1}$ at $x_{t+1}$.
Acquisition functions
Acquisition functions $\text{u}(x)$ specify which sample $x$ should be tried next:
Lower confidence bound: $\text{LCB}(x) = \mu_{GP}(x) - \kappa \sigma_{GP}(x)$;
Probability of improvement: $-\text{PI}(x) = -P(f(x) \leq f(x_t^+) - \kappa) $;
Expected improvement: $-\text{EI}(x) = -\mathbb{E} [\max(f(x_t^+) - f(x), 0)] $;
where $x_t^+$ is the best point observed so far.
In most cases, acquisition functions provide knobs (e.g., $\kappa$) for
controlling the exploration-exploitation trade-off.
- Search in regions where $\mu_{GP}(x)$ is low (exploitation)
- Probe regions where uncertainty $\sigma_{GP}(x)$ is high (exploration)
Toy example
Let us assume the following noisy function $f$:
End of explanation
"""
# Plot f(x) + contours
x = np.linspace(-2, 2, 400).reshape(-1, 1)
fx = [f(x_i, noise_level=0.0) for x_i in x]
plt.plot(x, fx, "r--", label="True (unknown)")
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate(([fx_i - 1.9600 * noise_level for fx_i in fx],
[fx_i + 1.9600 * noise_level for fx_i in fx[::-1]])),
alpha=.2, fc="r", ec="None")
plt.legend()
plt.grid()
plt.show()
"""
Explanation: Note. In skopt, functions $f$ are assumed to take as input a 1D vector $x$ represented as an array-like and to return a scalar $f(x)$.
End of explanation
"""
from skopt import gp_minimize
from skopt.acquisition import gaussian_lcb
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
# Note that we have fixed the hyperparameters of the kernel, because it is
# sufficient for this easy problem.
gp = GaussianProcessRegressor(kernel=Matern(length_scale_bounds="fixed"),
alpha=noise_level**2, random_state=0)
res = gp_minimize(f, # the function to minimize
[(-2.0, 2.0)], # the bounds on each dimension of x
x0=[0.], # the starting point
acq="LCB", # the acquisition function (optional)
base_estimator=gp, # a GP estimator (optional)
n_calls=15, # the number of evaluations of f including at x0
n_random_starts=0, # the number of random initialization points
random_state=777)
"""
Explanation: Bayesian optimization based on gaussian process regression is implemented in skopt.gp_minimize and can be carried out as follows:
End of explanation
"""
"x^*=%.4f, f(x^*)=%.4f" % (res.x[0], res.fun)
"""
Explanation: Accordingly, the approximated minimum is found to be:
End of explanation
"""
for key, value in sorted(res.items()):
print(key, "=", value)
print()
"""
Explanation: For further inspection of the results, attributes of the res named tuple provide the following information:
x [float]: location of the minimum.
fun [float]: function value at the minimum.
models: surrogate models used for each iteration.
x_iters [array]: location of function evaluation for each
iteration.
func_vals [array]: function value for each iteration.
space [Space]: the optimization space.
specs [dict]: parameters passed to the function.
End of explanation
"""
from skopt.plots import plot_convergence
plot_convergence(res)
"""
Explanation: Together these attributes can be used to visually inspect the results of the minimization, such as the convergence trace or the acquisition function at the last iteration:
End of explanation
"""
plt.rcParams["figure.figsize"] = (20, 20)
x = np.linspace(-2, 2, 400).reshape(-1, 1)
fx = np.array([f(x_i, noise_level=0.0) for x_i in x])
# Plot first five iterations.
for n_iter in range(5):
gp = res.models[n_iter]
curr_x_iters = res.x_iters[: n_iter+1]
curr_func_vals = res.func_vals[: n_iter+1]
# Plot true function.
plt.subplot(5, 2, 2*n_iter+1)
plt.plot(x, fx, "r--", label="True (unknown)")
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate([fx - 1.9600 * noise_level, fx[::-1] + 1.9600 * noise_level]),
alpha=.2, fc="r", ec="None")
# Plot GP(x) + contours
y_pred, sigma = gp.predict(x, return_std=True)
plt.plot(x, y_pred, "g--", label=r"$\mu_{GP}(x)$")
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate([y_pred - 1.9600 * sigma,
(y_pred + 1.9600 * sigma)[::-1]]),
alpha=.2, fc="g", ec="None")
# Plot sampled points
plt.plot(curr_x_iters, curr_func_vals,
"r.", markersize=15, label="Observations")
plt.title(r"$x_{%d} = %.4f, f(x_{%d}) = %.4f$" % (
n_iter, res.x_iters[n_iter][0], n_iter, res.func_vals[n_iter]))
plt.grid()
if n_iter == 0:
plt.legend(loc="best", prop={'size': 8}, numpoints=1)
plt.subplot(5, 2, 2*n_iter+2)
acq = gaussian_lcb(x, gp)
plt.plot(x, acq, "b", label="LCB(x)")
plt.fill_between(x.ravel(), -2.0, acq.ravel(), alpha=0.3, color='blue')
next_x = np.asarray(res.x_iters[n_iter + 1])
next_acq = gaussian_lcb(next_x.reshape(-1, 1), gp)
plt.plot(next_x[0], next_acq, "bo", markersize=10, label="Next query point")
plt.grid()
if n_iter == 0:
plt.legend(loc="best", prop={'size': 12}, numpoints=1)
plt.suptitle("Sequential model-based minimization using gp_minimize.", fontsize=20)
plt.show()
"""
Explanation: Let us visually examine
The approximation of the fitted GP model to the original function.
The acquisition values (the lower confidence bound) that determine the next point to be queried.
At points close to previously evaluated points, the variance dips to zero.
The first column shows the following:
1. The true function.
2. The approximation to the original function by the gaussian process model
3. How sure the GP is about the function.
The second column shows the acquisition function values after every surrogate model is fit. It is possible that we do not choose the global minimum but a local minimum depending on the minimizer used to minimize the acquisition function.
End of explanation
"""
# Plot f(x) + contours
plt.rcParams["figure.figsize"] = (10, 6)
x = np.linspace(-2, 2, 400).reshape(-1, 1)
fx = [f(x_i, noise_level=0.0) for x_i in x]
plt.plot(x, fx, "r--", label="True (unknown)")
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate(([fx_i - 1.9600 * noise_level for fx_i in fx],
[fx_i + 1.9600 * noise_level for fx_i in fx[::-1]])),
alpha=.2, fc="r", ec="None")
# Plot GP(x) + contours
gp = res.models[-1]
y_pred, sigma = gp.predict(x, return_std=True)
plt.plot(x, y_pred, "g--", label=r"$\mu_{GP}(x)$")
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate([y_pred - 1.9600 * sigma,
(y_pred + 1.9600 * sigma)[::-1]]),
alpha=.2, fc="g", ec="None")
# Plot sampled points
plt.plot(res.x_iters,
res.func_vals,
"r.", markersize=15, label="Observations")
# Plot LCB(x) + next query point
acq = gaussian_lcb(x, gp)
plt.plot(x, gaussian_lcb(x, gp), "b", label="LCB(x)")
next_x = np.argmin(acq)
plt.plot([x[next_x]], [acq[next_x]], "b.", markersize=15, label="Next query point")
plt.title(r"$x^* = %.4f, f(x^*) = %.4f$" % (res.x[0], res.fun))
plt.legend(loc="best")
plt.grid()
plt.show()
"""
Explanation: Finally, as we increase the number of points, the GP model approaches the actual function. The final few points are clustered around the minimum because the GP does not gain anything more by further exploration.
End of explanation
"""
|
jmschrei/pomegranate | tutorials/old/Tutorial_6_Markov_Chain.ipynb | mit
from pomegranate import *
%pylab inline
d1 = DiscreteDistribution({'A': 0.10, 'C': 0.40, 'G': 0.40, 'T': 0.10})
d2 = ConditionalProbabilityTable([['A', 'A', 0.10],
['A', 'C', 0.50],
['A', 'G', 0.30],
['A', 'T', 0.10],
['C', 'A', 0.10],
['C', 'C', 0.40],
['C', 'T', 0.40],
['C', 'G', 0.10],
['G', 'A', 0.05],
['G', 'C', 0.45],
['G', 'G', 0.45],
['G', 'T', 0.05],
['T', 'A', 0.20],
['T', 'C', 0.30],
['T', 'G', 0.30],
['T', 'T', 0.20]], [d1])
clf = MarkovChain([d1, d2])
"""
Explanation: Markov Chains
author: Jacob Schreiber <br>
contact: jmschreiber91@gmail.com
Markov Chains are a simple model based on conditional probability, where a sequence is modelled as the product of conditional probabilities. A n-th order Markov chain looks back n emissions to base its conditional probability on. For example, a 3rd order Markov chain models $P(X_{t} | X_{t-1}, X_{t-2}, X_{t-3})$.
However, a full Markov model needs to model the first observations, and the first n-1 observations. The first observation can't really be modelled well using $P(X_{t} | X_{t-1}, X_{t-2}, X_{t-3})$, but can be modelled by $P(X_{t})$. The second observation has to be modelled by $P(X_{t} | X_{t-1} )$. This means that these distributions have to be passed into the Markov chain as well.
We can initialize a Markov chain easily enough by passing in a list of the distributions.
End of explanation
"""
clf.log_probability( list('CAGCATCAGT') )
clf.log_probability( list('C') )
clf.log_probability( list('CACATCACGACTAATGATAAT') )
"""
Explanation: Markov chains have log probability, fit, summarize, and from summaries methods implemented. They do not have classification capabilities by themselves, but when combined with a Naive Bayes classifier can be used to do discrimination between multiple models (see the Naive Bayes tutorial notebook).
Let's see the log probability of some data.
End of explanation
"""
clf.fit(list(map(list, ('CAGCATCAGT', 'C', 'ATATAGAGATAAGCT', 'GCGCAAGT', 'GCATTGC', 'CACATCACGACTAATGATAAT'))))
print(clf.log_probability( list('CAGCATCAGT') ))
print(clf.log_probability( list('C') ))
print(clf.log_probability( list('CACATCACGACTAATGATAAT') ))
print(clf.distributions[0])
print(clf.distributions[1])
"""
Explanation: We can fit the model to sequences which we pass in, and as expected, get better performance on sequences which we train on.
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/deepdive2/feature_engineering/solutions/4_keras_adv_feat_eng.ipynb | apache-2.0
import datetime
import logging
import os
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras import layers
from tensorflow.keras import models
# set TF error log verbosity
logging.getLogger("tensorflow").setLevel(logging.ERROR)
print(tf.version.VERSION)
"""
Explanation: LAB04: Advanced Feature Engineering in Keras
Learning Objectives
Process temporal feature columns in Keras
Use Lambda layers to perform feature engineering on geolocation features
Create bucketized and crossed feature columns
Introduction
In this notebook, we use Keras to build a taxifare price prediction model and utilize feature engineering to improve the fare amount prediction for NYC taxi cab rides.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Set up environment variables and load necessary libraries
We will start by importing the necessary libraries for this lab.
End of explanation
"""
if not os.path.isdir("../data"):
os.makedirs("../data")
!gsutil cp gs://cloud-training-demos/feat_eng/data/*.csv ../data
"""
Explanation: Load taxifare dataset
The Taxi Fare dataset for this lab is 106,545 rows and has been pre-processed and split for use in this lab. Note that the dataset is the same as used in the Big Query feature engineering labs. The fare_amount is the target, the continuous value we’ll train a model to predict.
First, let's download the .csv data by copying the data from a cloud storage bucket.
End of explanation
"""
!ls -l ../data/*.csv
!head ../data/*.csv
"""
Explanation: Let's check that the files were copied correctly and look like we expect them to.
End of explanation
"""
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key',
]
LABEL_COLUMN = 'fare_amount'
STRING_COLS = ['pickup_datetime']
NUMERIC_COLS = ['pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count']
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
DAYS = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']  # datetime.weekday() numbers Monday as 0
# A function to define features and labels
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
# A utility method to create a tf.data dataset from a Pandas Dataframe
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = tf.data.experimental.make_csv_dataset(pattern,
batch_size,
CSV_COLUMNS,
DEFAULTS)
dataset = dataset.map(features_and_labels) # features, label
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
# take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(1)
return dataset
"""
Explanation: Create an input pipeline
Typically, you will use a two-step process to build the pipeline. Step one is to define the columns of data, i.e., which column we're predicting for, and the default values. Step two is to define two functions: a function to select the features and label you want to use, and a function to load the training data. Also, note that pickup_datetime is a string, and we will need to handle this in our feature-engineered model.
End of explanation
"""
# Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred): # Root mean square error
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
# input layer
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
# feature_columns
feature_columns = {
colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS
}
# Constructor for DenseFeatures takes a list of numeric columns
dnn_inputs = layers.DenseFeatures(feature_columns.values())(inputs)
    # two hidden layers of [32, 8], just like in the BQML DNN
h1 = layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = layers.Dense(8, activation='relu', name='h2')(h1)
# final output is a linear activation because this is regression
output = layers.Dense(1, activation='linear', name='fare')(h2)
model = models.Model(inputs, output)
# compile model
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
"""
Explanation: Create a Baseline DNN Model in Keras
Now let's build the Deep Neural Network (DNN) model in Keras using the functional API. Unlike the sequential API, we need to specify the input layer explicitly as well as the hidden layers. Note that we are creating a baseline regression model with no feature engineering; the baseline gives us a point of comparison for the feature-engineered model we build later.
End of explanation
"""
model = build_dnn_model()
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')
"""
Explanation: We'll build our DNN model and inspect the model architecture.
End of explanation
"""
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 59621 * 5
NUM_EVALS = 5
NUM_EVAL_EXAMPLES = 14906
trainds = load_dataset('../data/taxi-train*',
TRAIN_BATCH_SIZE,
tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('../data/taxi-valid*',
1000,
tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
"""
Explanation: Train the model
To train the model, simply call model.fit(). Note that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data.
We start by setting the training parameters, creating the input pipeline datasets, and then training our baseline DNN model.
End of explanation
"""
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
plot_curves(history, ['loss', 'mse'])
"""
Explanation: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the mean squared error loss over the training epochs for both the train (blue) and test (orange) sets.
End of explanation
"""
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00 UTC'], dtype=tf.string),
}, steps=1)
"""
Explanation: Predict with the model locally
To predict with Keras, you simply call model.predict() and pass in the cab ride you want to predict the fare amount for. Next we note the fare price at this geolocation and pickup_datetime.
End of explanation
"""
# TODO 1a
def parse_datetime(s):
if type(s) is not str:
s = s.numpy().decode('utf-8')
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
# TODO 1b
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
# TODO 1c
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),
ts_in)
"""
Explanation: Improve Model Performance Using Feature Engineering
We now improve our model's performance by creating the following feature engineering types: Temporal, Categorical, and Geolocation.
Temporal Feature Columns
We incorporate the temporal feature pickup_datetime. As noted earlier, pickup_datetime is a string and we will need to handle this within the model. First, you will include the pickup_datetime as a feature and then you will need to modify the model to handle our string feature.
End of explanation
"""
# TODO 2
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
"""
Explanation: Geolocation/Coordinate Feature Columns
The pick-up/drop-off longitude and latitude data are crucial to predicting the fare amount as fare amounts in NYC taxis are largely determined by the distance traveled. As such, we need to teach the model the Euclidean distance between the pick-up and drop-off points.
Recall that latitude and longitude allow us to specify any location on Earth using a set of coordinates. In our training data set, we restricted our data points to only pickups and drop offs within NYC. New York City has an approximate longitude range of -74.05 to -73.75 and a latitude range of 40.63 to 40.85.
Computing Euclidean distance
The dataset contains information regarding the pickup and drop off coordinates. However, there is no information regarding the distance between the pickup and drop off points. Therefore, we create a new feature that calculates the distance between each pair of pickup and drop off points. We can do this using the Euclidean Distance, which is the straight-line distance between any two coordinate points.
End of explanation
"""
def scale_longitude(lon_column):
return (lon_column + 78)/8.
"""
Explanation: Scaling latitude and longitude
It is very important for numerical variables to get scaled before they are "fed" into the neural network. Here we use min-max scaling (also called normalization) on the geolocation features. Later in our model, you will see that these values are shifted and rescaled so that they end up ranging from 0 to 1.
First, we create a function named 'scale_longitude', where we pass in all the longitudinal values and add 78 to each value. Note that our longitudes range from about -78 to -70, so -78 is the minimum (most negative) longitudinal value and the width of the range is 8. Adding 78 shifts each longitudinal value into the range [0, 8], and dividing by 8 then returns a value scaled to [0, 1].
End of explanation
"""
def scale_latitude(lat_column):
return (lat_column - 37)/8.
"""
Explanation: Next, we create a function named 'scale_latitude', where we pass in all the latitudinal values and subtract 37 from each value. Note that our latitudes range from about 37 to 45, so 37 is the minimum latitudinal value and the width of the range is 8. Subtracting 37 shifts each latitudinal value into the range [0, 8], and dividing by 8 then returns a value scaled to [0, 1].
End of explanation
"""
def transform(inputs, numeric_cols, string_cols, nbuckets):
print("Inputs before features transformation: {}".format(inputs.keys()))
# Pass-through columns
transformed = inputs.copy()
del transformed['pickup_datetime']
feature_columns = {
colname: tf.feature_column.numeric_column(colname)
for colname in numeric_cols
}
# Scaling longitude from range [-70, -78] to [0, 1]
for lon_col in ['pickup_longitude', 'dropoff_longitude']:
transformed[lon_col] = layers.Lambda(
scale_longitude,
name="scale_{}".format(lon_col))(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ['pickup_latitude', 'dropoff_latitude']:
transformed[lat_col] = layers.Lambda(
scale_latitude,
name='scale_{}'.format(lat_col))(inputs[lat_col])
# TODO 2
# add Euclidean distance
transformed['euclidean'] = layers.Lambda(
euclidean,
name='euclidean')([inputs['pickup_longitude'],
inputs['pickup_latitude'],
inputs['dropoff_longitude'],
inputs['dropoff_latitude']])
feature_columns['euclidean'] = fc.numeric_column('euclidean')
# TODO 3
# create bucketized features
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns['pickup_latitude'], latbuckets)
b_dlat = fc.bucketized_column(
feature_columns['dropoff_latitude'], latbuckets)
b_plon = fc.bucketized_column(
feature_columns['pickup_longitude'], lonbuckets)
b_dlon = fc.bucketized_column(
feature_columns['dropoff_longitude'], lonbuckets)
# TODO 3
# create crossed columns
ploc = fc.crossed_column([b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column([b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
# create embedding columns
feature_columns['pickup_and_dropoff'] = fc.embedding_column(pd_pair, 100)
print("Transformed features: {}".format(transformed.keys()))
print("Feature columns: {}".format(feature_columns.keys()))
return transformed, feature_columns
"""
Explanation: Putting it all together
We now create two new "geo" functions for our model. We create a function called "euclidean" to initialize our geolocation parameters. We then create a function called transform. The transform function passes our numerical and string column features as inputs to the model, scales geolocation features, then creates the Euclian distance as a transformed variable with the geolocation features. Lastly, we bucketize the latitude and longitude features.
End of explanation
"""
NBUCKETS = 10
# DNN MODEL
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
# input layer is all float except for pickup_datetime which is a string
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
inputs.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='string')
for colname in STRING_COLS
})
# transforms
transformed, feature_columns = transform(inputs,
numeric_cols=NUMERIC_COLS,
string_cols=STRING_COLS,
nbuckets=NBUCKETS)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
    # two hidden layers of [32, 8], just like in the BQML DNN
h1 = layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = layers.Dense(8, activation='relu', name='h2')(h1)
# final output is a linear activation because this is regression
output = layers.Dense(1, activation='linear', name='fare')(h2)
model = models.Model(inputs, output)
# Compile model
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
model = build_dnn_model()
"""
Explanation: Next, we'll create our DNN model with the engineered features. We'll set NBUCKETS = 10 to specify 10 buckets when bucketizing the latitude and longitude.
End of explanation
"""
tf.keras.utils.plot_model(model, 'dnn_model_engineered.png', show_shapes=False, rankdir='LR')
trainds = load_dataset('../data/taxi-train*',
TRAIN_BATCH_SIZE,
tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('../data/taxi-valid*',
1000,
tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS+3,
steps_per_epoch=steps_per_epoch)
"""
Explanation: Let's see how our model architecture has changed now.
End of explanation
"""
plot_curves(history, ['loss', 'mse'])
"""
Explanation: As before, let's visualize the DNN model layers.
End of explanation
"""
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00 UTC'], dtype=tf.string),
}, steps=1)
"""
Explanation: Let's make a prediction with this new model, with engineered features, on the example we used above.
End of explanation
"""
|
rasbt/python-machine-learning-book | code/ch05/ch05.ipynb | mit
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -p numpy,scipy,matplotlib,sklearn
"""
Explanation: Copyright (c) 2015-2017 Sebastian Raschka
https://github.com/rasbt/python-machine-learning-book
MIT License
Python Machine Learning - Code Examples
Chapter 5 - Compressing Data via Dimensionality Reduction
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
"""
from IPython.display import Image
%matplotlib inline
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
"""
Explanation: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see: https://github.com/rasbt/watermark.
<br>
<br>
Overview
Unsupervised dimensionality reduction via principal component analysis
Total and explained variance
Feature transformation
Principal component analysis in scikit-learn
Supervised data compression via linear discriminant analysis
Computing the scatter matrices
Selecting linear discriminants for the new feature subspace
Projecting samples onto the new feature space
LDA via scikit-learn
Using kernel principal component analysis for nonlinear mappings
Kernel functions and the kernel trick
Implementing a kernel principal component analysis in Python
Example 1 – separating half-moon shapes
Example 2 – separating concentric circles
Projecting new data points
Kernel principal component analysis in scikit-learn
Summary
<br>
<br>
End of explanation
"""
Image(filename='./images/05_01.png', width=400)
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
"""
Explanation: Unsupervised dimensionality reduction via principal component analysis
End of explanation
"""
df_wine = pd.read_csv('https://raw.githubusercontent.com/rasbt/python-machine-learning-book/master/code/datasets/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
"""
Explanation: <hr>
Note:
If the link to the Wine dataset provided above does not work for you, you can find a local copy in this repository at ./../datasets/wine/wine.data.
Or you could fetch it via
End of explanation
"""
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=0)
"""
Explanation: <hr>
Splitting the data into 70% training and 30% test subsets.
End of explanation
"""
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
"""
Explanation: Standardizing the data.
End of explanation
"""
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
"""
Explanation: Note
Accidentally, I wrote X_test_std = sc.fit_transform(X_test) instead of X_test_std = sc.transform(X_test). In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to those of the training set. However, as you may remember from Chapter 3, the correct way is to re-use parameters from the training set if we are doing any kind of transformation -- the test set should basically stand for "new, unseen" data.
My initial typo reflects a common mistake: some people do not re-use these parameters from the model training/building step and standardize the new data "from scratch." Here's a simple example to explain why this is a problem.
Let's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature "length"):
train_1: 10 cm -> class_2
train_2: 20 cm -> class_2
train_3: 30 cm -> class_1
mean: 20, std.: 8.2
After standardization, the transformed feature values are
train_std_1: -1.21 -> class_2
train_std_2: 0 -> class_2
train_std_3: 1.21 -> class_1
Next, let's assume our model has learned to classify samples with a standardized length value < 0.6 as class_2 (class_1 otherwise). So far so good. Now, let's say we have 3 unlabeled data points that we want to classify:
new_4: 5 cm -> class ?
new_5: 6 cm -> class ?
new_6: 7 cm -> class ?
If we look at the unstandardized "length" values in our training dataset, it is intuitive to say that all of these samples likely belong to class_2. However, if we standardize them by re-computing the standard deviation and mean from the new data, we would get values similar to those in the training set, and our classifier would (probably incorrectly) classify sample 6 as class 1:
new_std_4: -1.21 -> class 2
new_std_5: 0 -> class 2
new_std_6: 1.21 -> class 1
However, if we use the parameters from our "training set standardization," we would get the values:
new_std_4: -18.37 -> class 2
new_std_5: -17.15 -> class 2
new_std_6: -15.92 -> class 2
The values 5 cm, 6 cm, and 7 cm are much lower than anything we have seen in the training set previously. Thus, it only makes sense that the standardized features of the "new samples" are much lower than every standardized feature in the training set.
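This difference can be reproduced in a few lines (a sketch of the toy example above, using the population standard deviation; the exact figures in the listing were computed with a slightly different scaling):

```python
import numpy as np

# Toy training set from the example above (lengths in cm)
train = np.array([10., 20., 30.])
mu, sigma = train.mean(), train.std()   # parameters estimated on training data only

new = np.array([5., 6., 7.])            # "new, unseen" samples

# Correct: reuse the training parameters
z_correct = (new - mu) / sigma

# Incorrect: re-estimate mean and std on the new data "from scratch"
z_wrong = (new - new.mean()) / new.std()

print(z_correct)  # all far below anything seen during training
print(z_wrong)    # looks just like the training distribution
```

The correct transformation maps all three new samples far below every standardized training value, while the "from scratch" version makes them indistinguishable from the training distribution.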
Eigendecomposition of the covariance matrix.
End of explanation
"""
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/pca1.png', dpi=300)
plt.show()
"""
Explanation: Note:
Above, I used the numpy.linalg.eig function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors.
<pre>>>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)</pre>
This is not really a "mistake," but probably suboptimal. It would be better to use numpy.linalg.eigh in such cases, which has been designed for Hermitian matrices. The latter always returns real eigenvalues; the numerically less stable np.linalg.eig, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.)
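A minimal sketch of the difference on a small symmetric matrix:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])               # symmetric (Hermitian) matrix

vals_h, vecs_h = np.linalg.eigh(A)     # guaranteed real, ascending order
vals_g, vecs_g = np.linalg.eig(A)      # general solver, order not guaranteed

print(vals_h)  # eigenvalues 1 and 3, sorted ascending
```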
<br>
<br>
Total and explained variance
End of explanation
"""
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
# Note: I added the `key=lambda k: k[0]` in the sort call above
# just like I used it further below in the LDA section.
# This is to avoid problems if there are ties in the eigenvalue
# arrays (i.e., the sorting algorithm will only regard the
# first element of the tuples, now).
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
"""
Explanation: <br>
<br>
Feature transformation
End of explanation
"""
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca2.png', dpi=300)
plt.show()
X_train_std[0].dot(w)
"""
Explanation: Note
Depending on which version of NumPy and LAPACK you are using, you may obtain the matrix W with its signs flipped. E.g., the matrix shown in the book was printed as:
[[ 0.14669811 0.50417079]
[-0.24224554 0.24216889]
[-0.02993442 0.28698484]
[-0.25519002 -0.06468718]
[ 0.12079772 0.22995385]
[ 0.38934455 0.09363991]
[ 0.42326486 0.01088622]
[-0.30634956 0.01870216]
[ 0.30572219 0.03040352]
[-0.09869191 0.54527081]
Please note that this is not an issue: If $v$ is an eigenvector of a matrix $\Sigma$, we have
$$\Sigma v = \lambda v,$$
where $\lambda$ is our eigenvalue,
then $-v$ is also an eigenvector that has the same eigenvalue, since
$$\Sigma(-v) = -\Sigma v = -\lambda v = \lambda(-v).$$
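A quick numerical check of this identity (a sketch with an arbitrary symmetric matrix):

```python
import numpy as np

S = np.array([[4., 2.],
              [2., 3.]])               # a symmetric "covariance-like" matrix
vals, vecs = np.linalg.eigh(S)
lam, v = vals[0], vecs[:, 0]

# Both v and -v satisfy the eigenvalue equation with the same lambda
assert np.allclose(S @ v, lam * v)
assert np.allclose(S @ (-v), lam * (-v))
```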
End of explanation
"""
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.6,
c=cmap(idx),
edgecolor='black',
marker=markers[idx],
label=cl)
"""
Explanation: <br>
<br>
Principal component analysis in scikit-learn
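As a sanity check (a sketch on random data; scikit-learn's PCA internally uses an SVD rather than an explicit eigendecomposition), the two routes agree up to the sign ambiguity discussed earlier:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
Xc = X - X.mean(axis=0)                       # center the data

# Route 1: eigendecomposition of the covariance matrix, as done manually above
vals, vecs = np.linalg.eigh(np.cov(Xc.T))
order = np.argsort(vals)[::-1]                # sort eigenpairs descending
manual = Xc @ vecs[:, order[:2]]

# Route 2: SVD of the centered data (the approach scikit-learn's PCA uses)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
svd_proj = Xc @ Vt[:2].T

# The two projections agree up to a per-component sign flip
for j in range(2):
    assert (np.allclose(manual[:, j], svd_proj[:, j])
            or np.allclose(manual[:, j], -svd_proj[:, j]))
```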
End of explanation
"""
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca3.png', dpi=300)
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca4.png', dpi=300)
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
"""
Explanation: Training logistic regression classifier using the first 2 principal components.
End of explanation
"""
Image(filename='./images/05_06.png', width=400)
"""
Explanation: <br>
<br>
Supervised data compression via linear discriminant analysis
End of explanation
"""
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
"""
Explanation: <br>
<br>
Computing the scatter matrices
Calculate the mean vectors for each class:
End of explanation
"""
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
"""
Explanation: Compute the within-class scatter matrix:
End of explanation
"""
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
"""
Explanation: Better: covariance matrix since classes are not equally distributed:
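The relationship between the explicit scatter matrix and the covariance matrix is easy to verify numerically (a small sketch on random data):

```python
import numpy as np

rng = np.random.RandomState(1)
X = rng.randn(20, 3)                 # toy class with 20 samples, 3 features
mv = X.mean(axis=0)

# Scatter matrix computed explicitly, as in the loop above ...
S = (X - mv).T @ (X - mv)

# ... equals (n - 1) times the sample covariance matrix
assert np.allclose(S, (X.shape[0] - 1) * np.cov(X.T))
```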
End of explanation
"""
mean_overall = np.mean(X_train_std, axis=0)
d = 13 # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # make column vector
mean_overall = mean_overall.reshape(d, 1) # make column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
"""
Explanation: Compute the between-class scatter matrix:
End of explanation
"""
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
"""
Explanation: <br>
<br>
Selecting linear discriminants for the new feature subspace
Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
End of explanation
"""
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in decreasing order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/lda1.png', dpi=300)
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
"""
Explanation: Note:
Above, I used the numpy.linalg.eig function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors.
<pre>>>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)</pre>
This is not really a "mistake," but probably suboptimal. It would be better to use numpy.linalg.eigh in such cases, which has been designed for Hermitian matrices. The latter always returns real eigenvalues; the numerically less stable np.linalg.eig, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.)
Sort eigenvectors in decreasing order of the eigenvalues:
End of explanation
"""
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0] * (-1),
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/lda2.png', dpi=300)
plt.show()
"""
Explanation: <br>
<br>
Projecting samples onto the new feature space
End of explanation
"""
if Version(sklearn_version) < '0.18':
from sklearn.lda import LDA
else:
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda3.png', dpi=300)
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda4.png', dpi=300)
plt.show()
"""
Explanation: <br>
<br>
LDA via scikit-learn
End of explanation
"""
Image(filename='./images/05_11.png', width=500)
"""
Explanation: <br>
<br>
Using kernel principal component analysis for nonlinear mappings
End of explanation
"""
from scipy.spatial.distance import pdist, squareform
from numpy import exp  # scipy.exp is deprecated; NumPy provides the same ufunc
from numpy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# numpy.linalg.eigh returns them in sorted order
eigvals, eigvecs = eigh(K)
    # Collect the top k eigenvectors (projected samples);
    # pass a list, since newer NumPy versions reject generators here
    X_pc = np.column_stack([eigvecs[:, -i]
                            for i in range(1, n_components + 1)])
return X_pc
"""
Explanation: <br>
<br>
Implementing a kernel principal component analysis in Python
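One property of the centering step above can serve as a sanity check: after centering, every row and column of the kernel matrix sums to zero (a quick sketch):

```python
import numpy as np

rng = np.random.RandomState(0)
K = rng.rand(5, 5)
K = (K + K.T) / 2                    # symmetric toy kernel matrix

N = K.shape[0]
one_n = np.ones((N, N)) / N
K_c = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)

# Centering in feature space zeroes out all row and column sums
assert np.allclose(K_c.sum(axis=0), 0)
assert np.allclose(K_c.sum(axis=1), 0)
```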
End of explanation
"""
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/half_moon_1.png', dpi=300)
plt.show()
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/half_moon_2.png', dpi=300)
plt.show()
from matplotlib.ticker import FormatStrFormatter
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
ax[0].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
plt.tight_layout()
# plt.savefig('./figures/half_moon_3.png', dpi=300)
plt.show()
"""
Explanation: <br>
Example 1: Separating half-moon shapes
End of explanation
"""
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/circles_1.png', dpi=300)
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_2.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_3.png', dpi=300)
plt.show()
"""
Explanation: <br>
Example 2: Separating concentric circles
End of explanation
"""
from scipy.spatial.distance import pdist, squareform
from numpy import exp  # scipy.exp is deprecated; NumPy provides the same ufunc
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
lambdas: list
Eigenvalues
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
    # scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
    # Collect the top k eigenvectors (projected samples);
    # pass a list, since newer NumPy versions reject generators here
    alphas = np.column_stack([eigvecs[:, -i]
                              for i in range(1, n_components + 1)])
# Collect the corresponding eigenvalues
lambdas = [eigvals[-i] for i in range(1, n_components + 1)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[-1]
x_new
x_proj = alphas[-1] # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='original projection of point X[25]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='remapped point X[25]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X[:-1, :], gamma=15, n_components=1)
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_new = X[-1]
x_reproj = project_x(x_new, X[:-1], gamma=15, alphas=alphas, lambdas=lambdas)
plt.scatter(alphas[y[:-1] == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y[:-1] == 1, 0], np.zeros((49)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_reproj, 0, color='green',
label='new point [ 100.0, 100.0]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.scatter(alphas[y[:-1] == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y[:-1] == 1, 0], np.zeros((49)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='some point [1.8713, 0.0093]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='new point [ 100.0, 100.0]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
"""
Explanation: <br>
<br>
Projecting new data points
End of explanation
"""
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
# plt.savefig('./figures/scikit_kpca.png', dpi=300)
plt.show()
"""
Explanation: <br>
<br>
Kernel principal component analysis in scikit-learn
End of explanation
"""
import os
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
from scipy import sparse
# Own modules
import DataProcessing as proc
import DataCleaning as clean
import NetworkAnalysis as analysis
import Classification as classification
import NetworkEvolution as evol
"""
Explanation: Stack Overflow Network Analysis
Claas Brüß, Simon Romanski and Maximilian Rünz
NOTE: Originally we claimed that the data visualization used in this project would be credited for the Data Visualization course. As we finally chose a very different approach in Data Visualization, this claim no longer holds. Apart from using the same data set, these are two clearly separated projects.
1 Introduction
There are comprehensive studies on how groups form and work together in face-to-face team work scenarios. However, even though the development of the internet has enabled open platforms for collaboration on a massive scale, less research has been conducted on patterns of collaboration in that domain.
This project analyzes the Stack Overflow community by representing it as a graph. We apply network analysis methods to subcommunities for libraries like Numpy in order to understand the structure of the community.
In a next stage we compare the communities to theoretical network models as well as to real networks to obtain insights about work patterns in the community. Finally, we compare those insights with proven psychological models of group work theory to gain intuition about knowledge transfer and work in these communities. We will show that the shape of group work and knowledge transfer changes through online communities.
End of explanation
"""
# Paths that will be used
posts_path = os.path.join("Posts.xml")
questions_path = os.path.join("Questions.json")
answers_path = os.path.join("Answers.json")
edge_list_path = os.path.join("Edges.json")
edge_list_tag_path = os.path.join("Tags")
"""
Explanation: 2 Data processing
The data behind this project was provided by Stack Overflow itself. They release frequent data dumps on archive.org. We have analyzed all posts from Stack Overflow via the stackoverflow.com-Posts.7z file. The compressed file contains a list of all questions and answers formatted as XML.
Note: As the analyzed XML file is more than 50 GB in size, the data processing takes several hours. The data processing part can be skipped; the uploaded zip contains the constructed edge lists.
End of explanation
"""
%%time
# Create JSON for questions and answers
proc.split_qa_json_all(questions_path, answers_path, posts_path)
"""
Explanation: 2.1 Extract meaningful features
Before we started our analysis we extracted the features of the posts which are interesting for us. The provided features can be looked up in this text file. We selected the following features for our analysis:
* PostTypeId (Question/Answer)
* Id
* ParentId
* AcceptedAnswerId
* CreationDate
* Score
* OwnerUserId
* Tags
Based on the PostType, the posts are then stored in the questions or answers JSON file.
End of explanation
"""
%%time
# create edge list
proc.create_edge_list_all(questions_path, answers_path, edge_list_path)
"""
Explanation: 2.2 Create edge list
Having selected the meaningful features, we started to create the graph. Each answer is matched with its corresponding question. The result is two nodes and one edge: the nodes represent Stack Overflow users, and the edge connects those users from the inquirer to the respondent.
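Conceptually, the matching step can be sketched as follows (hypothetical minimal records; the actual implementation lives in our DataProcessing module):

```python
# Hypothetical minimal records mirroring the extracted features
questions = {'q1': {'OwnerUserId': 10},
             'q2': {'OwnerUserId': 11}}
answers = [{'ParentId': 'q1', 'OwnerUserId': 20},
           {'ParentId': 'q2', 'OwnerUserId': 10}]

# One edge per answer, directed from the inquirer to the respondent
edges = [(questions[a['ParentId']]['OwnerUserId'], a['OwnerUserId'])
         for a in answers if a['ParentId'] in questions]

print(edges)  # [(10, 20), (11, 10)]
```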
End of explanation
"""
%%time
# split in file for each tag
proc.split_edge_list_tags(edge_list_tag_path, edge_list_path)
"""
Explanation: 2.3 Split networks by tags
After we tried to analyze the whole Stack Overflow network and came to the conclusion that it is simply too big, we decided to split the network by tags. Therefore we created one edge list per tag (e.g. numpy).
End of explanation
"""
%%time
# order by time
proc.order_edge_lists_tags_time(edge_list_tag_path)
"""
Explanation: 2.4 Order edges by time
In order to simplify the analysis of the network evolution, we ordered the edges based on the creation date of the answer.
End of explanation
"""
%%time
proc.edge_lists_to_txt(edge_list_tag_path)
"""
Explanation: 2.5 Format edge list to txt files
Last but not least, the JSON files are converted into txt edge lists, so that they can be read easily by networkx.
End of explanation
"""
network_path = os.path.join("Tags", "numpy_complete_ordered_list.txt")
network = nx.read_edgelist(network_path,nodetype=int, data=(('time',int),('votes_q', int),('votes_a', int),('accepted', bool)))
network_directed = nx.read_edgelist(network_path, create_using=nx.DiGraph(), nodetype=int, data=(('time',int),('votes_q', int),('votes_a', int),('accepted', bool)))
"""
Explanation: 3 Data Cleaning
During the network analysis we noticed that it makes sense to clean the created network. Therefore, we implemented several filters:
* Filter by attribute
* Filter by degree
* Filter by component
* Remove self loops
The most frequently used filter is the filter for attributes. Using this filter we remove questions and answers with negative votes, as they are not helpful for the community.
Furthermore, this filter will be used to analyze the evolution of the network.
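A minimal sketch of such an attribute filter (plain Python for illustration; the actual implementation lives in our DataCleaning module, and the attribute names mirror the edge list):

```python
# Edges as (asker, answerer, attributes), as read from the edge list
edges = [
    (1, 2, {'votes_q': 3, 'votes_a': -1}),   # answer with negative votes
    (2, 3, {'votes_q': 0, 'votes_a': 5}),
    (3, 3, {'votes_q': 1, 'votes_a': 1}),    # self-loop
]

def filter_edges(edges, min_q_votes=0, min_a_votes=0, drop_self_loops=True):
    """Keep edges meeting the vote thresholds; optionally drop self-loops."""
    return [(u, v, d) for u, v, d in edges
            if d['votes_q'] >= min_q_votes
            and d['votes_a'] >= min_a_votes
            and not (drop_self_loops and u == v)]

print(filter_edges(edges))   # only the (2, 3) edge survives
```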
End of explanation
"""
# in epoche
min_time = -1
max_time = -1
min_q_votes = 0
max_q_votes = -1
min_a_votes = 0
max_a_votes = -1
accepted = -1
min_degree = -1
max_degree = -1
only_gc = False
no_self_loops = True
"""
Explanation: The whole Stack Overflow community has more than 8 million users, 15 million questions and 23 million answers on different aspects of libraries, programming languages and operating systems.
Hence, we decided to focus on specific widely-used libraries for our investigation. In our case we perform data analysis for the commonly used Python library Numpy and compare it to another Python library, Matplotlib, as well as to Eigen, a heavily used C++ library. Ultimately we will compare the entire Python community with the Numpy community.
For numpy we then create two networks: one directed and one undirected for different analysis purposes.
End of explanation
"""
network_cleaned = clean.filter_network_attributes(network, min_time, max_time,\
min_q_votes, max_q_votes, min_a_votes, max_a_votes, accepted)
network_direted_cleaned = clean.filter_network_attributes(network_directed, min_time, max_time,\
min_q_votes, max_q_votes, min_a_votes, max_a_votes, accepted, directed=True)
"""
Explanation: Filter by attributes
End of explanation
"""
network_cleaned = clean.filter_network_node_degree(network_cleaned, min_degree, max_degree)
network_direted_cleaned = clean.filter_network_node_degree(network_direted_cleaned, min_degree, max_degree)
"""
Explanation: Filter by node degree
End of explanation
"""
if only_gc:
network_cleaned = clean.filter_network_gc(network_cleaned)
network_direted_cleaned = clean.filter_network_gc(network_direted_cleaned)
"""
Explanation: Only use giant component
End of explanation
"""
if no_self_loops:
network_cleaned = clean.filter_selfloops(network_cleaned)
network_direted_cleaned = clean.filter_selfloops(network_direted_cleaned)
"""
Explanation: Remove self loops
End of explanation
"""
analysis.get_number_nodes(network_cleaned)
"""
Explanation: 3 Data Exploration: Network properties
We are starting with some basic properties of the subcommunity. Each node in our graph represents one user.
End of explanation
"""
analysis.get_number_edges(network_cleaned)
"""
Explanation: We can see that we have roughly 20,000 users.
End of explanation
"""
analysis.get_number_connected_components(network_cleaned)
"""
Explanation: And we have 37,231 edges, each corresponding to one answer.
End of explanation
"""
analysis.get_size_giant_component(network_cleaned)
analysis.plot_ranking_component_size(network_cleaned)
analysis.get_number_self_loops(network_cleaned)
"""
Explanation: The number of connected components describes the number of completely separated groups in the community that do not interact with each other. A deeper analysis shows that there is in fact one big group and many very small groups.
End of explanation
"""
analysis.get_avg_degree(network_cleaned)
"""
Explanation: The self-loops represent answers that users have given to their own questions. As this seems to be counterintuitive, we have removed those in our data cleaning.
End of explanation
"""
analysis.get_cluster_coefficient(network_cleaned)
"""
Explanation: The average degree of the Numpy network is 3.5; we will evaluate this in the data exploitation part of this report.
End of explanation
"""
analysis.get_max_degree(network_cleaned)
"""
Explanation: The clustering coefficient is relatively low due to the model of our network. More details follow in the data exploitation chapter.
End of explanation
"""
analysis.plot_degree_hist(network_cleaned)
analysis.plot_degree_scatter(network_cleaned)
"""
Explanation: The maximal degree of a node is tremendously higher than the average degree. This is a little suspicious. Therefore, we will have a look at the distribution of the node degrees.
Degree distribution
To get insights into user behaviour, i.e., how many questions users ask and answer, we plot the degree distribution in the following:
End of explanation
"""
analysis.plot_in_degree_hist(network_direted_cleaned)
analysis.plot_in_degree_scatter(network_direted_cleaned)
analysis.plot_out_degree_hist(network_direted_cleaned)
analysis.plot_out_degree_scatter(network_direted_cleaned)
"""
Explanation: These plots show a degree distribution with a high number of highly connected nodes. This superlinear distribution implies a hub-and-spoke topology for this network. Note the double-logarithmic scaling of the scatter plot.
End of explanation
"""
analysis.analyze_attribute_q_votes(network_cleaned)
analysis.analyze_attribute_a_votes(network_cleaned)
"""
Explanation: As before, we can observe that the incoming and outgoing degree distributions both demonstrate behavior associated with hub-and-spoke topologies. The outgoing distribution exhibits this even more strongly than the incoming one, indicating that certain members of the community carry a disproportionate workload in answering questions posed on the platform.
Attribute Distribution
End of explanation
"""
analysis.analyze_attribute_time(network_cleaned)
"""
Explanation: Another interesting property is the number of votes per edge, as its distribution is a valuable metric for understanding where user activity is concentrated.
End of explanation
"""
for file in ["python_complete_ordered_list.txt",\
"matplotlib_complete_ordered_list.txt",\
"eigen_complete_ordered_list.txt"]:
print(file)
analysis.analyze_basic_file(file)
print()
"""
Explanation: In general it is interesting to see the evolution of a network over time. We can see that in this case the number of edges is continuously increasing.
Comparison
It is instructive to compare the characteristics of the different subcommunities:
End of explanation
"""
analysis.plot_degree_scatter(network_cleaned)
"""
Explanation: At first one can notice that the ratio between the number of nodes and the size of the giant component is quite large for all networks: only a few inactive users are not connected to the core community. Furthermore, the number of connected components is correlated with the number of nodes of the given network.
For the average degree one can notice that the libraries have an average degree around 2-3, but Python as a programming language has an average degree of more than 6!
The clustering coefficient does not seem to be related to the network type.
All in all we can say that libraries behave similarly even if the programming language is different (Eigen is written in C++), while the networks built for programming languages themselves have a higher degree.
4 Data Exploitation
It is possible to create different graph models based on the actual graph to understand how our real world model fits into these theoretical concepts. This would be a first helpful step to understand the underlying network model. We were planning to build an Erdős–Rényi and a Barabási-Albert graph based on the following assumptions:
If $N$ denotes the number of nodes and $L$ the number of edges,
we can calculate the Erdős–Rényi graph parameter $p$ as follows:
$p = \frac{2L}{N(N-1)}$
Correspondingly, we can calculate the parameter $m$ for the Barabási-Albert graph:
$m = \frac{L}{N} + 1$
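Plugging in the Numpy subcommunity figures reported above gives a feel for the scale of these parameters. The sketch below follows the two formulas directly; `er_p` and `ba_m` are hypothetical helper names, not part of our `analysis` module:

```python
def er_p(n_nodes, n_edges):
    # Erdos-Renyi connection probability that reproduces the observed edge count
    return 2 * n_edges / (n_nodes * (n_nodes - 1))

def ba_m(n_nodes, n_edges):
    # Barabasi-Albert attachment parameter, following the formula above
    return n_edges / n_nodes + 1

# Rough figures of the Numpy subcommunity network
p = er_p(20000, 37231)
m = ba_m(20000, 37231)
```

For the Numpy network this gives p on the order of 2e-4 (a very sparse random graph) and m close to 3, i.e. roughly two to three attachments per new node.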
We noticed that building the graphs for all the subcommunities takes a lot of time as our graphs are relatively big. Hence, we are aiming for a more efficient approach to understand the network model.
The key difference between the two models is that the Barabási-Albert model describes a scale-free network and the Erdős–Rényi model describes a random network. Hence, the degree distribution is more likely to show how similar each model is to the original graph.
This is the abstraction that we would like to draw after the comparison whatsoever. As a result we can also compare the degree distribution with the two distributions representing the random network model and the scale free model. That is the Poisson distribution and the power-law distribution respectively.
<img src="files/imgs/RandomNetworkDegreeDistribution.png"/>
<img src="files/imgs/ScaleFreeNetworkDegreeDistribution.png"/>
They can be a bit hard to distinguish when we are handling real data in a linearly scaled graph. However, if we take the logarithm of both axes, they are clearly distinguishable.
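A small numeric illustration of this point (a sketch using exact pmf values rather than sampled data): on log-log axes a power law has a constant slope, while the Poisson distribution's local slope changes rapidly with k.

```python
import numpy as np
from math import lgamma, log

k = np.arange(10, 101).astype(float)

# Power-law pmf (up to normalization) with exponent gamma = 2.5
log_p_power = -2.5 * np.log(k)

# Poisson pmf with mean 8, computed in log space for numerical stability
lam = 8.0
log_p_poisson = -lam + k * log(lam) - np.array([lgamma(ki + 1) for ki in k])

# Slope of log p(k) vs log k: constant for the power law...
slope_power = np.polyfit(np.log(k), log_p_power, 1)[0]

# ...but strongly varying for the Poisson (compare small-k and large-k fits)
slope_poisson_lo = np.polyfit(np.log(k[:10]), log_p_poisson[:10], 1)[0]
slope_poisson_hi = np.polyfit(np.log(k[-10:]), log_p_poisson[-10:], 1)[0]
```

The power-law slope is exactly -γ = -2.5, while the local slope of the Poisson curve changes by more than an order of magnitude over the same range of k.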
End of explanation
"""
analysis.get_gamma_power_law(network_cleaned)
analysis.get_in_gamma_power_law(network_direted_cleaned)
analysis.get_out_gamma_power_law(network_direted_cleaned)
"""
Explanation: The plots show that the underlying graph of the Numpy subcommunity is neither a scale-free network nor a random network: its degree distribution is superlinear on log-log axes.
We can still calculate the γ of the distribution to compare it to other networks.
Scale Free Networks
In scale free networks there exists a linear dependency between the logarithm of the probability and the logarithm of the degree:
$\log(p(k)) \sim -\gamma \log(k)$
Gamma can then be calculated by fitting a linear regression between $\log(p(k))$ and $\log(k)$. The negated slope of the regression line is the gamma of the scale-free network.
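A minimal sketch of this estimate is shown below. The name `estimate_gamma` is hypothetical (the actual implementation in our analysis module may differ), and note that a plain least-squares fit on log-log binned data is a deliberately naive estimator:

```python
import numpy as np

def estimate_gamma(degree_sequence):
    # Empirical p(k) from the degree sequence
    k, counts = np.unique(np.asarray(degree_sequence), return_counts=True)
    p_k = counts / counts.sum()
    # The slope of the regression of log p(k) on log k is -gamma
    slope = np.polyfit(np.log(k), np.log(p_k), 1)[0]
    return -slope
```

On a synthetic degree sequence whose counts follow $k^{-2}$ exactly, this recovers γ ≈ 2.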
End of explanation
"""
networks = evol.split_network(network)
evol.plot_t_n(networks)
"""
Explanation: For now we will keep these numbers in mind. We will come back to them later.
Network evolution
In most cases the growth of a network is clearly correlated with time. Most models simply regard the time until a new node joins the network as a time step. In real networks this time can widely differ and therefore we decided to plot over time to look at the changes in network in a certain timeframe rather than a certain magnitude of change. Nonetheless it is implied that more and more nodes join the network over the weeks.
End of explanation
"""
evol.plot_t_k_avg(networks)
"""
Explanation: <img src="files/imgs/WWWYearNodes.png"/>
Number of Nodes over Time
In order to gain a better understanding of how these networks evolve over time, we observed various network attributes over time and compared them to network models such as the Barabási-Albert model.
These two curves depict the number of nodes present in the network plotted over time. It is clear that the growth of the network is accelerating in both cases.
End of explanation
"""
evol.plot_t_k_max(networks)
"""
Explanation: ⟨ k⟩ over time
The average degree of the nodes within the network closely follows a logistic curve converging to an average degree of about 3.6.
End of explanation
"""
evol.plot_n_c(networks)
"""
Explanation: k<sub>max</sub> over Time
The growing number of nodes accelerates the rise in maximum degree. This stronger-than-linear growth of the maximum degree suggests that new nodes tend to connect to already highly connected nodes. Such superlinear preferential attachment indicates that we will see high values of α in the following plots.
End of explanation
"""
evol.plot_k_avg_k_max(networks)
"""
Explanation: <img src="files/imgs/EvolClustering.png"/>
End of explanation
"""
evol.DegreeDynamics(network_cleaned, 20)
# n = 100
analysis.plot_degree_scatter(networks[1246320000000])
# n = 1000
analysis.plot_degree_scatter(networks[1302566400000])
# n = 10000
analysis.plot_degree_scatter(networks[1430179200000])
# all
analysis.plot_degree_scatter(network_cleaned)
"""
Explanation: <img src="files/imgs/EvolHubs.png"/>
k<sub>max</sub> over ⟨ k⟩
As indicated by the superlinear growth of maximum degree over time, we also see superlinear behavior in the plot of maximum degree over average degree, with values of α > 2.5.
End of explanation
"""
classification.classify_users(network_directed)
"""
Explanation: <img src="files/imgs/EvolDegree.png"/>
Degree Dynamic and Degree Distribution in different Stages
In this section we compare the degree dynamics of the network nodes between the network implied from the StackOverflow data and network closely following the bevaiour of the Barabási-Albert Model. For this we selected degree distribution with similiar node counts N. While the Barabási-Albert network shows linearity in the degree distributions plots and the rise in the degree of the node plot lines, the and distributon plots of the StackOverflow community network show clear superlinear behaviour. The degree dynamic plot is prefiltered and only shows the plot lines for nodes that eventually reach a degree higher than 100. In these highly connected nodes we see a very quick development towards them becoming hubs in the network instagating the a topology that leans towards hub-and-spoke. In contrast to the Barabási-Albert network with a scale-free topology and power law dominated distributions.
Comparison to other real world networks
<img src="files/imgs/OtherNetworks.png"/>
Regarding the ratio between edges and nodes, the Stack Overflow Numpy network behaves similarly to smaller networks such as the Power Grid and E. Coli Metabolism networks, and its gamma is also similar to theirs.
However, the parameters of the Python network are much closer to those of communication networks such as the science collaboration and citation networks. This is most probably due to our tag selection. It is likely that the complete Stack Overflow network behaves very similarly to the communication networks, with a gamma of up to 5.
Classifier
In order to automatically detect super users we will train an unsupervised k-means clustering. This clustering is based on the attributes of nodes like:
* in degree
* out degree
* average question votes
* average answer votes
K-means follows an iterative approach:
In the first step, each data point is assigned the label of the nearest cluster center:
$S_i^{(t)} = \big\{ x_p : \big\| x_p - \mu^{(t)}_i \big\|^2 \le \big\| x_p - \mu^{(t)}_j \big\|^2 \ \forall j, 1 \le j \le k \big\}$
After that a new center for each cluster is computed:
$\mu^{(t+1)}_i = \frac{1}{|S^{(t)}_i|} \sum_{x_j \in S^{(t)}_i} x_j$
This procedure is repeated until the cluster centers no longer change.
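These two steps translate almost directly into numpy. The sketch below is didactic, with a simple deterministic initialization; it is not the implementation behind `classification.classify_users`:

```python
import numpy as np

def kmeans(X, k, n_iter=100):
    centers = X[:k].copy()  # simple deterministic init: first k points
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assignment step: each point gets the label of its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Update step: each center moves to the mean of its assigned points
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break  # cluster centers stopped moving
        centers = new_centers
    return labels, centers
```

On well-separated user groups (e.g. super users far from the origin versus one-time users near it), this converges in a handful of iterations.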
End of explanation
"""
print("Gamma total: {}".format(analysis.get_gamma_power_law(network_cleaned)))
print("Gamma in: {}".format(analysis.get_in_gamma_power_law(network_direted_cleaned)))
print("Gamma out: {}".format(analysis.get_out_gamma_power_law(network_direted_cleaned)))
"""
Explanation: We can see that there are very active users with label 0, who have answered ten questions and asked fourteen questions on average.
The users with label 4 are also asking and answering several times, but they ask very good questions, which score 85 votes on average.
Users with label 1 are very inactive. They were only active once. These users are colored in green, located very close to the origin.
The super users are further away from the origin.
Group work models
In the beginning of this report we were aiming for an intuitive understanding of how collaboration networks like Stackoverflow work. Consequently, we will try to compare our extracted information with two common models for group work theory.
First of all we have to define what exactly “work” is in Stackoverflow. As it is impossible to infer information about the actual project people are working on, we cannot measure the actual project work outcomes of individuals. However, we can define the knowledge transfer, i.e. answering of questions, as actual work.
The Belbin Team Inventory describes different roles that people take on as a group forms; it was presented in Management Teams: Why They Succeed or Fail (1981). The extended Belbin Team Inventory consists of the following types:
Plants are creative generators of ideas.
Resource Investigators provides enthusiasm at the start of a project and seizes contacts and opportunities.
Coordinators have a talent for seeing the big picture and are therefore like to become the leader of the team.
Shaper are driven by a lot of energy and the urge to perform. Therefore they usually make sure that all possibilities are considered and shake things up if necessary.
Monitor Evaluators are unemotional observers of the project and team.
Teamworkers ensure that the team is running effectively and without friction.
Implementers take suggestions and ideas and turns them into action.
Completers are perfectionists and double-check the final outcome of the work.
Specialists are experts in their own particular field and typically transfer this knowledge to others. Usually the stick to their domain of expertise.
While it is hard to identify some of the types (e.g., identifying a Completer would require analysis of the whole answer text), it is possible to find some similarities among our users.
We calculated separate γ values for incoming and outgoing degrees, i.e., how many hubs we have among users asking questions and how many among users answering questions.
End of explanation
"""
|
XinyiGong/pymks | notebooks/filter.ipynb | mit | %matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Filter Example
This example demonstrates the connection between MKS and signal
processing for a 1D filter. It shows that the filter is in fact the
same as the influence coefficients and, thus, applying the predict
method provided by the MKSLocalizationModel is in essence just applying a filter.
End of explanation
"""
x0 = -10.
x1 = 10.
x = np.linspace(x0, x1, 1000)
def F(x):
return np.exp(-abs(x)) * np.cos(2 * np.pi * x)
p = plt.plot(x, F(x), color='#1a9850')
"""
Explanation: Here we construct a filter, $F$, such that
$$F\left(x\right) = e^{-|x|} \cos{\left(2\pi x\right)} $$
We want to show that if $F$ is used to generate sample calibration
data for the MKS, then the calculated influence coefficients are in
fact just $F$.
End of explanation
"""
import scipy.ndimage
n_space = 101
n_sample = 50
np.random.seed(201)
x = np.linspace(x0, x1, n_space)
X = np.random.random((n_sample, n_space))
y = np.array([scipy.ndimage.convolve(xx, F(x), mode='wrap') for xx in X])
"""
Explanation: Next we generate the sample data (X, y) using
scipy.ndimage.convolve. This performs the convolution
$$ p\left[ s \right] = \sum_r F\left[r\right] X\left[r - s\right] $$
for each sample.
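As a sanity check, the sum can be written out directly. This is a didactic, deliberately slow sketch of the periodic convolution (`scipy.ndimage.convolve` with mode='wrap' is the efficient equivalent used in the cell below):

```python
import numpy as np

def periodic_convolve(F_vals, X_vals):
    """Direct evaluation of p[s] = sum_r F[r] * X[(r - s) % n]."""
    n = len(X_vals)
    return np.array([sum(F_vals[r] * X_vals[(r - s) % n] for r in range(n))
                     for s in range(n)])
```

Because the boundary is periodic, circularly shifting the input simply shifts the output by the same amount.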
End of explanation
"""
from pymks import MKSLocalizationModel
from pymks import PrimitiveBasis
prim_basis = PrimitiveBasis(n_states=2, domain=[0, 1])
model = MKSLocalizationModel(basis=prim_basis)
"""
Explanation: For this problem, a basis is unnecessary as no discretization is
required in order to reproduce the convolution with the MKS localization. Using
the PrimitiveBasis with n_states=2 is the equivalent of a
non-discretized convolution in space.
End of explanation
"""
model.fit(X, y)
"""
Explanation: Fit the model using the data generated by $F$.
End of explanation
"""
y_pred = model.predict(X)
print(y[0, :4])
print(y_pred[0, :4])
"""
Explanation: To check for internal consistency, we can compare the predicted
output with the original for a few values
End of explanation
"""
plt.plot(x, F(x), label=r'$F$', color='#1a9850')
plt.plot(x, -model.coeff[:,0] + model.coeff[:, 1],
'k--', label=r'$\alpha$')
l = plt.legend()
"""
Explanation: With a slight linear manipulation of the coefficients, they agree perfectly with the shape of the filter, $F$.
End of explanation
"""
|
planetlabs/notebooks | jupyter-notebooks/ship-detector/01_ship_detector.ipynb | apache-2.0 | sample_data_file_name = 'data/1056417_2017-03-08_RE3_3A_Visual_clip.tif'
"""
Explanation: Detect ships in Planet data
This notebook demonstrates how to detect and count objects in satellite imagery using algorithms from Python's scikit-image library. In this example, we'll look for ships in a small area in the San Francisco Bay and generate a PNG of each ship with an outline around it.
Input Parameters
This is a sample image that was generated using the Clip and Ship API. To test this with your own image, replace the parameters below.
End of explanation
"""
import skimage.io
from IPython.display import Image
# Read image into scimage package
img = skimage.io.imread(sample_data_file_name)
skimage.io.imsave('output/original.png', img)
# Display original image
display(Image(filename='output/original.png'))
"""
Explanation: Original image
Below is a predefined image that has been clipped from the Planet API using Clip and Ship. This is the image that we are going to detect ships in.
End of explanation
"""
import json
from osgeo import gdal, osr
import numpy
from skimage.segmentation import felzenszwalb
from skimage.segmentation import mark_boundaries
from skimage.measure import regionprops
# Prepare result structure
result = {
"ship_count": 0,
"ships": []
}
# Open image with gdal
ds = gdal.Open(sample_data_file_name)
xoff, a, b, yoff, d, e = ds.GetGeoTransform()
# Get projection information from source image
ds_proj = ds.GetProjectionRef()
ds_srs = osr.SpatialReference(ds_proj)
# Get the source image's geographic coordinate system (the 'GEOGCS' node of ds_srs)
geogcs = ds_srs.CloneGeogCS()
# Set up a transformation between projected coordinates (x, y) & geographic coordinates (lat, lon)
transform = osr.CoordinateTransformation(ds_srs, geogcs)
# Split the multi-channel image into red, green, blue, and alpha channels
red, green, blue, alpha = numpy.rollaxis(numpy.array(img), axis=-1)
# Mask: threshold + stops canny detecting image boundary edges
mask = red > 75
# Create mask for edge detection
skimage.io.imsave('output/mask.png', mask * 255)
# Use Felzenszwalb algo to find segments
segments_fz = felzenszwalb(numpy.dstack((mask, mask, mask)),
scale=5000,
sigma=3.1,
min_size=25)
# Build labeled mask to show where ships were detected
segmented_img = mark_boundaries(mask, segments_fz)
skimage.io.imsave('output/mask_labeled.png', segmented_img)
# Count ships and save image of each boat clipped from masked image
for idx, ship in enumerate(regionprops(segments_fz)):
    # If area matches that of a standard ship, count it
if (ship.area >= 300 and ship.area <= 10000):
        # Increment count
result['ship_count'] += 1
# Create ship thumbnail
x, y = (int(numpy.average([ship.bbox[0],
ship.bbox[2]])),
int(numpy.average([ship.bbox[1],
ship.bbox[3]])))
sx, ex = max(x - 35, 0), min(x + 35, img.shape[0] - 1)
sy, ey = max(y - 35, 0), min(y + 35, img.shape[1] - 1)
img_ship = img[sx:ex, sy:ey]
skimage.io.imsave('output/ship-%s.png' % str(idx + 1), img_ship)
# Get global coordinates from pixel x, y coords
projected_x = a * y + b * x + xoff
projected_y = d * y + e * x + yoff
# Transform from projected x, y to geographic lat, lng
(lat, lng, elev) = transform.TransformPoint(projected_x, projected_y)
# Add ship to results cluster
result["ships"].append({
"id": idx + 1,
"lat": lat,
"lng": lng
})
# Display results
print(json.dumps(result, indent=2))
#Display mask used for ship detection.
display(Image(filename='output/mask.png'))
# Display labled mask where we detected ships
display(Image(filename='output/mask_labeled.png'))
# Display each individual ship cropped out of the original image
for idx,ship in enumerate(result['ships']):
print("Ship "+ str(idx + 1))
display(Image(filename='output/ship-' + str(idx + 1) + '.png'))
"""
Explanation: Run the ship detection algorithm
End of explanation
"""
|
nipy/brainx | brainx/notebooks/detect_partition_degeneracy.ipynb | bsd-3-clause | %matplotlib inline
import os
import numpy as np
import networkx as nx
from glob import glob
from matplotlib import pyplot as plt
from brainx.util import threshold_adjacency_matrix
"""
Explanation: Evaluating Partitions for Degeneracy
<br>
This notebook is designed to allow BrainX users to evaluate degeneracies in the partitions produced by the algorithms:
- modularity.simulated_annealing
- modularity.newman_partition
<br>
Specific concerns that this notebook will allow users to evalue are:
1. How these algorithms handle disconnected components (i.e. individual nodes or sets of nodes that are disconnected from the rest of the graph.)
Future goals of this notebook will be to:
- demonstrate concrete examples of algorithm degeneracy / unintuitive behavior
<br>
Imports
End of explanation
"""
# Note: Users may need to adjust how the partition is loaded.
# Current version assumes that it is saved in json format.
def detect_degeneracy(subjs, results_dir, partfiles, matfiles, cost_val, output=True):
'''
Checks to see if modules contain disconnected components...
Loads partition and matrix, finds subgraphs composed of nodes in each partition,
then checks if partition subgraph is connected. If not, prints connected components
within each partition subgraph.
VARIABLES:
subjs: list of strs
list of subjects to analyze
partfiles : list of strs
list of partition files for each subject
matfiles : list of strs
list of adjacency matrices for each subject
cost_val : str
connection density at which the adjacency matrix will be thresholded
output : boolean
True to print output, False otherwise
RETURNS:
num_mods : list
lists # of modules for each subject
num_comps : list
lists # of disconnected components for each subject
'''
num_mods = []
num_comps = []
for subj in np.arange(len(subjs)):
# load partition
partf = '%s/subpart_%s_%s.json' %(results_dir,subjs[subj],cost_val)
part = json.load(open(partf,'r'))
# load matrix, graph
mat = np.loadtxt(matfiles[subj])
thrmat, cost = threshold_adjacency_matrix(adj_matrix=mat, cost=float(cost_val), uptri=True) #threshold
G = nx.from_numpy_matrix(thrmat)
num_mods.append(len(part.keys()))
        num_comps.append(len(list(nx.connected_components(G))))  # list() for newer networkx, where this is a generator
if output:
print(subjs[subj], 'cost:', cost_val)
# make subgraph for each part
for nodes in part.values():
subg = G.subgraph(nbunch=nodes)
# check to see if there are disconnected components in each module
if not nx.is_connected(subg):
                subg_comps = list(nx.connected_components(subg))
if output:
print(subg_comps)
return num_mods, num_comps
"""
Explanation: Functions
End of explanation
"""
# User-specific: Load data
results_dir = '/home/jagust/kbegany/data/Rest.gbsm/Results/SA' ## EDIT ME! ##
corr_dir = '/home/despo/enhance/MRIdata_subjects/TRSE_Rest_GT/Data/corr_gbsm_aal/TXTfiles' ## EDIT ME! ##
costs = ['0.03','0.05','0.07','0.10','0.12','0.15','0.17','0.20','0.23','0.25']
partfiles = []
# get partition files
for cost in costs:
globstr = '%s/subpart_*_%s.json'%(results_dir, cost)
parts = sorted(glob(globstr))
partfiles += parts
# get mat files
globstr = '%s/*_*_Block01.txt'%(corr_dir)
matfiles = sorted(glob(globstr))
subjs = []
for mat in matfiles:
ind1 = len('/home/despo/enhance/MRIdata_subjects/TRSE_Rest_GT/Data/corr_gbsm_aal/TXTfiles/')
ind2 = len('_Block01.txt')
subjs.append(mat[ind1:-ind2])
_,_ = detect_degeneracy(subjs, results_dir, partfiles, matfiles, cost_val='0.25', output=True)
num_mods_all = []
num_comps_all = []
for cost in costs:
num_mods, num_comps = detect_degeneracy(subjs, results_dir, partfiles, matfiles, cost_val=cost, output=False)
num_mods_all.append(num_mods)
num_comps_all.append(num_comps)
# format into workable arrays
num_comps_all = np.array(num_comps_all)
num_comps_all = num_comps_all.reshape((len(subjs), len(costs)))
num_mods_all = np.array(num_mods_all)
num_mods_all = num_mods_all.reshape((len(subjs), len(costs)))
# plot module sizes
plt.figure(num=None, figsize=(8,6), dpi=600, facecolor=None)
for i in np.arange(len(costs)):
y = num_mods_all[:,i]
x = np.ones(len(y))*(i-1)
plt.scatter(x, y, facecolor='none', edgecolor='k', marker='D', s=100)
plt.xlim(-0.5,8.5)
plt.ylim(0,20)
plt.xlabel(('Cost'), fontsize=12)
plt.ylabel(('# of Modules'), fontsize=12)
_ = plt.xticks((0,1,2,3,4,5,6,7,8,9),('0.025','0.05','0.075','0.10','0.125','0.15','0.175','0.20','0.225','0.25'),
fontsize=10, rotation='vertical')
#plt.savefig('num_mods_bycost_8x6.png')
# plot number of components
plt.figure(num=None, figsize=(8,6), dpi=600, facecolor=None)
for i in np.arange(len(costs)):
y = num_comps_all[:,i]
x = np.ones(len(y))*(i-1)
plt.scatter(x, y, facecolor='none', edgecolor='k', marker='D', s=100)
plt.xlim(-0.5,8.5)
plt.ylim(0,50)
plt.xlabel(('Cost'), fontsize=12)
plt.ylabel(('# of Components'), fontsize=12)
_ = plt.xticks((0,1,2,3,4,5,6,7,8),('0.05','0.075','0.10','0.125','0.15','0.175','0.20','0.225','0.25'), fontsize=10, rotation='vertical')
#plt.savefig('num_mods_bycost_8x6.png')
plt.figure(num=None, figsize=(7,5), dpi=600, facecolor=None)
x = num_comps_all
y = num_mods_all
plt.scatter(x.flatten(), y.flatten())
plt.xlabel('# Components')
plt.ylabel('# Modules')
"""
Explanation: Workbook
The space below is a workbook where users can explore their data.
<br>
Note: Users will need to adjust the variables to accomodate the directory and file structure of their data.
<br>
End of explanation
"""
|
google/trax | trax/models/research/examples/hourglass_downsampled_imagenet.ipynb | apache-2.0 | # Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 Google LLC.
End of explanation
"""
!pip install -q --upgrade jaxlib==0.1.71+cuda111 -f https://storage.googleapis.com/jax-releases/jax_releases.html
!pip install -q --upgrade jax==0.2.21
!pip install -q git+https://github.com/google/trax.git
!pip install -q pickle5
!pip install -q gin
# Execute this for a proper TPU setup!
# Make sure the Colab Runtime is set to Accelerator: TPU.
import jax
import requests
import os
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20200416'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
jax.devices()
"""
Explanation: Hourglass: ImageNet32/64 evaluation
Install dependencies
End of explanation
"""
# Download ImageNet32 data (the url in tfds is down)
!gdown https://drive.google.com/uc?id=1OV4lBnuIcbqeuoiK83jWtlnQ9Afl6Tsr
!tar -zxf /content/im32.tar.gz
# tfds hack for imagenet32
import json
json_path = '/content/content/drive/MyDrive/imagenet/downsampled_imagenet/32x32/2.0.0/dataset_info.json'
with open(json_path, mode='r') as f:
ds_info = json.load(f)
if 'moduleName' in ds_info:
del ds_info['moduleName']
with open(json_path, mode='w') as f:
json.dump(ds_info, f)
!mkdir -p /root/tensorflow_datasets/downsampled_imagenet/32x32
!cp -r /content/content/drive/MyDrive/imagenet/downsampled_imagenet/32x32/2.0.0 /root/tensorflow_datasets/downsampled_imagenet/32x32
# Download and set up ImageNet64 (validation only) data
!gdown https://drive.google.com/uc?id=1ZoI3ZKMUXfrIlqPfIBCcegoe0aJHchpo
!tar -zxf im64_valid.tar.gz
!mkdir -p /root/tensorflow_datasets/downsampled_imagenet/64x64/2.0.0
!cp im64_valid/* /root/tensorflow_datasets/downsampled_imagenet/64x64/2.0.0
# Download gin configs
!wget -q https://raw.githubusercontent.com/google/trax/master/trax/supervised/configs/hourglass_imagenet32.gin
!wget -q https://raw.githubusercontent.com/google/trax/master/trax/supervised/configs/hourglass_imagenet64.gin
"""
Explanation: Download ImageNet32/64 data
Downloading the datasets for evaluation requires some hacks because URLs from tensorflow_datasets are invalid. Two cells below download data for ImageNet32 and ImageNet64, respectively. Choose the one appropriate for the checkpoint you want to evaluate.
End of explanation
"""
import gin
gin.parse_config_file('hourglass_imagenet32.gin')
model = trax.models.HourglassLM(mode='eval')
model.init_from_file(
'gs://trax-ml/hourglass/imagenet32/model_470000.pkl.gz',
weights_only=True,
)
loss_fn = trax.layers.WeightedCategoryCrossEntropy()
model_eval = trax.layers.Accelerate(trax.layers.Serial(
model,
loss_fn
))
"""
Explanation: Load the ImageNet32 model
This colab can be used to evaluate both imagenet32 and imagenet64 models. We start with our ImageNet32 checkpoint.
End of explanation
"""
import gin
import trax
# Here is the hacky part to remove shuffling of the dataset
def get_eval_dataset():
dataset_name = gin.query_parameter('data_streams.dataset_name')
data_dir = trax.data.tf_inputs.download_and_prepare(dataset_name, None)
train_data, eval_data, keys = trax.data.tf_inputs._train_and_eval_dataset(
dataset_name, data_dir, eval_holdout_size=0)
bare_preprocess_fn = gin.query_parameter('data_streams.bare_preprocess_fn')
eval_data = bare_preprocess_fn.scoped_configurable_fn(eval_data, training=False)
return trax.fastmath.dataset_as_numpy(eval_data)
from trax import fastmath
from trax.fastmath import numpy as jnp
from tqdm import tqdm
def batched_inputs(data_gen, batch_size):
inp_stack, mask_stack = [], []
for input_example, mask in data_gen:
inp_stack.append(input_example)
mask_stack.append(mask)
if len(inp_stack) % batch_size == 0:
if len(set(len(example) for example in inp_stack)) > 1:
for x, m in zip(inp_stack, mask_stack):
yield x, m
else:
input_batch = jnp.stack(inp_stack)
mask_batch = jnp.stack(mask_stack)
yield input_batch, mask_batch
inp_stack, mask_stack = [], []
if len(inp_stack) > 0:
for inp, mask in zip(inp_stack, mask_stack):
yield inp, mask
def run_full_evaluation(accelerated_model_with_loss, examples_data_gen,
batch_size, pad_to_len=None):
# Important: we assume batch size per device = 1
assert batch_size % fastmath.local_device_count() == 0
assert fastmath.local_device_count() == 1 or \
batch_size == fastmath.local_device_count()
loss_sum, n_tokens = 0.0, 0
def pad_right(inp_tensor):
if pad_to_len:
return jnp.pad(inp_tensor,
[[0, 0], [0, max(0, pad_to_len - inp_tensor.shape[1])]])
else:
return inp_tensor
batch_gen = batched_inputs(examples_data_gen, batch_size)
def batch_leftover_example(input_example, example_mask):
def extend_shape_to_batch_size(tensor):
return jnp.repeat(tensor, repeats=batch_size, axis=0)
return map(extend_shape_to_batch_size,
(input_example[None, ...], example_mask[None, ...]))
for i, (inp, mask) in tqdm(enumerate(batch_gen)):
leftover_batch = False
if len(inp.shape) == 1:
inp, mask = batch_leftover_example(inp, mask)
leftover_batch = True
inp, mask = map(pad_right, [inp, mask])
example_losses = accelerated_model_with_loss((inp, inp, mask))
if leftover_batch:
example_losses = example_losses[:1]
mask = mask[:1]
example_lengths = mask.sum(axis=-1)
loss_sum += (example_lengths * example_losses).sum()
n_tokens += mask.sum()
if i % 200 == 0:
print(f'Batches: {i}, current loss: {loss_sum / float(n_tokens)}')
return loss_sum / float(n_tokens)
"""
Explanation: Evaluate on the validation set
End of explanation
"""
def data_gen(dataset):
for example in dataset:
example = example['image']
mask = jnp.ones_like(example)
yield example, mask
BATCH_SIZE = 8
eval_data_gen = data_gen(get_eval_dataset())
loss = run_full_evaluation(model_eval, eval_data_gen, BATCH_SIZE)
print(f'Final perplexity: {loss}, final bpd: {loss / jnp.log(2)}')
"""
Explanation: ImageNet32 evaluation
End of explanation
"""
gin.parse_config_file('hourglass_imagenet64.gin')
model = trax.models.HourglassLM(mode='eval')
model.init_from_file(
'gs://trax-ml/hourglass/imagenet64/model_300000.pkl.gz',
weights_only=True,
)
loss_fn = trax.layers.WeightedCategoryCrossEntropy()
model_eval = trax.layers.Accelerate(trax.layers.Serial(
model,
loss_fn
))
BATCH_SIZE = 8
eval_data_gen = data_gen(get_eval_dataset())
loss = run_full_evaluation(model_eval, eval_data_gen, BATCH_SIZE)
print(f'Final perplexity: {loss}, final bpd: {loss / jnp.log(2)}')
"""
Explanation: ImageNet64 evaluation
End of explanation
"""
|
edosedgar/xs-pkg | deep_learning/hw1/homework_modules.ipynb | gpl-2.0 | class Module(object):
"""
Basically, you can think of a module as a black box
which can process `input` data and produce `output` data.
This is like applying a function which is called `forward`:
output = module.forward(input)
The module should be able to perform a backward pass: to differentiate the `forward` function.
Moreover, it should be able to differentiate it if it is part of a chain (chain rule).
The latter implies there is a gradient from the previous step of the chain rule:
gradInput = module.backward(input, gradOutput)
"""
def __init__ (self):
self.output = None
self.gradInput = None
self.training = True
def forward(self, input):
"""
Takes an input object, and computes the corresponding output of the module.
"""
return self.updateOutput(input)
def backward(self,input, gradOutput):
"""
Performs a backpropagation step through the module, with respect to the given input.
This includes
- computing a gradient w.r.t. `input` (is needed for further backprop),
- computing a gradient w.r.t. parameters (to update parameters while optimizing).
"""
self.updateGradInput(input, gradOutput)
self.accGradParameters(input, gradOutput)
return self.gradInput
def updateOutput(self, input):
"""
Computes the output using the current parameter set of the class and input.
This function returns the result which is stored in the `output` field.
Make sure to both store the data in `output` field and return it.
"""
# The easiest case:
# self.output = input
# return self.output
pass
def updateGradInput(self, input, gradOutput):
"""
Computing the gradient of the module with respect to its own input.
This is returned in `gradInput`. Also, the `gradInput` state variable is updated accordingly.
The shape of `gradInput` is always the same as the shape of `input`.
Make sure to both store the gradients in `gradInput` field and return it.
"""
# The easiest case:
# self.gradInput = gradOutput
# return self.gradInput
pass
def accGradParameters(self, input, gradOutput):
"""
Computing the gradient of the module with respect to its own parameters.
No need to override if module has no parameters (e.g. ReLU).
"""
pass
def zeroGradParameters(self):
"""
Zeroes `gradParams` variable if the module has params.
"""
pass
def getParameters(self):
"""
Returns a list with its parameters.
If the module does not have parameters return empty list.
"""
return []
def getGradParameters(self):
"""
Returns a list with gradients with respect to its parameters.
If the module does not have parameters return empty list.
"""
return []
def train(self):
"""
Sets training mode for the module.
Training and testing behaviour differs for Dropout, BatchNorm.
"""
self.training = True
def evaluate(self):
"""
Sets evaluation mode for the module.
Training and testing behaviour differs for Dropout, BatchNorm.
"""
self.training = False
def __repr__(self):
"""
Pretty printing. Should be overridden in every module if you want
to have a readable description.
"""
return "Module"
"""
Explanation: Module is an abstract class which defines fundamental methods necessary for training a neural network. You do not need to change anything here, just read the comments.
End of explanation
"""
class Sequential(Module):
"""
This class implements a container, which processes `input` data sequentially.
`input` is processed by each module (layer) in self.modules consecutively.
The resulting array is called `output`.
"""
def __init__ (self):
super(Sequential, self).__init__()
self.modules = []
self.inputs = []
def add(self, module):
"""
Adds a module to the container.
"""
self.modules.append(module)
def updateOutput(self, input):
"""
Basic workflow of FORWARD PASS:
y_0 = module[0].forward(input)
y_1 = module[1].forward(y_0)
...
output = module[n-1].forward(y_{n-2})
Just write a little loop.
"""
# Your code goes here. ################################################
self.inputs = []
for module in self.modules:
output = module.forward(input)
input = output
self.inputs.append(input)
self.output = output
del self.inputs[-1]
return self.output
def backward(self, input, gradOutput):
"""
Workflow of BACKWARD PASS:
g_{n-1} = module[n-1].backward(y_{n-2}, gradOutput)
g_{n-2} = module[n-2].backward(y_{n-3}, g_{n-1})
...
g_1 = module[1].backward(y_0, g_2)
gradInput = module[0].backward(input, g_1)
!!!
To each module you need to provide the input that the module saw during the forward pass;
it is used while computing gradients.
Make sure that the input for the `i`-th layer is the output of `module[i-1]` (the same input as in the forward pass)
and NOT the `input` to this Sequential module.
!!!
"""
# Your code goes here. ################################################
inputs = [input] + self.inputs.copy()
for module, forward_input in zip(reversed(self.modules), reversed(inputs)):
gradInput = module.backward(forward_input, gradOutput)
gradOutput = gradInput
self.gradInput = gradInput
return self.gradInput
def zeroGradParameters(self):
for module in self.modules:
module.zeroGradParameters()
def getParameters(self):
"""
Should gather all parameters in a list.
"""
return [x.getParameters() for x in self.modules]
def getGradParameters(self):
"""
Should gather all gradients w.r.t parameters in a list.
"""
return [x.getGradParameters() for x in self.modules]
def __repr__(self):
string = "".join([str(x) + '\n' for x in self.modules])
return string
def __getitem__(self,x):
return self.modules.__getitem__(x)
def train(self):
"""
Propagates training parameter through all modules
"""
self.training = True
for module in self.modules:
module.train()
def evaluate(self):
"""
Propagates training parameter through all modules
"""
self.training = False
for module in self.modules:
module.evaluate()
"""
Explanation: Sequential container
Define the forward and backward pass procedures.
End of explanation
"""
class Linear(Module):
"""
A module which applies a linear transformation
A common name is fully-connected layer, InnerProductLayer in caffe.
The module should work with 2D input of shape (n_samples, n_feature).
"""
def __init__(self, n_in, n_out):
super(Linear, self).__init__()
# This is a nice initialization
stdv = 1./np.sqrt(n_in)
self.W = np.random.uniform(-stdv, stdv, size = (n_out, n_in))
self.b = np.random.uniform(-stdv, stdv, size = n_out)
self.gradW = np.zeros_like(self.W)
self.gradb = np.zeros_like(self.b)
def updateOutput(self, input):
# Your code goes here. ################################################
self.output = np.add(np.dot(input, self.W.T), self.b)
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
self.gradInput = np.dot(gradOutput, self.W)
return self.gradInput
def accGradParameters(self, input, gradOutput):
# Your code goes here. ################################################
self.gradW = np.dot( gradOutput.T, input)
self.gradb = np.sum(gradOutput, axis=0)
def zeroGradParameters(self):
self.gradW.fill(0)
self.gradb.fill(0)
def getParameters(self):
return [self.W, self.b]
def getGradParameters(self):
return [self.gradW, self.gradb]
def __repr__(self):
s = self.W.shape
q = 'Linear %d -> %d' %(s[1],s[0])
return q
"""
Explanation: Layers
1. Linear transform layer
Also known as dense layer, fully-connected layer, FC-layer, InnerProductLayer (in caffe), affine transform
- input: batch_size x n_feats1
- output: batch_size x n_feats2
End of explanation
"""
class SoftMax(Module):
def __init__(self):
super(SoftMax, self).__init__()
def updateOutput(self, input):
# start with normalization for numerical stability
self.output = np.subtract(input, input.max(axis=1, keepdims=True))
# Your code goes here. ################################################
exps = np.exp(self.output)
self.output = exps / np.sum(exps, axis=1, keepdims=True)
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
pred_grad = np.multiply(self.output, gradOutput)
self.gradInput = pred_grad - np.multiply(self.output, np.sum(pred_grad, axis=1, keepdims=True))
return self.gradInput
def __repr__(self):
return "SoftMax"
"""
Explanation: 2. SoftMax
input: batch_size x n_feats
output: batch_size x n_feats
$\text{softmax}(x)_i = \frac{\exp x_i} {\sum_j \exp x_j}$
Recall that $\text{softmax}(x) = \text{softmax}(x - \text{const})$. This makes it possible to avoid computing exp() of a large argument.
End of explanation
"""
class LogSoftMax(Module):
def __init__(self):
super(LogSoftMax, self).__init__()
def updateOutput(self, input):
# start with normalization for numerical stability
self.output = np.subtract(input, input.max(axis=1, keepdims=True))
self.output = self.output - np.log(np.sum(np.exp(self.output), axis=1, keepdims=True))
# Your code goes here. ################################################
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
input_new_exp = np.exp(np.subtract(input, input.max(axis=1, keepdims=True)))
self.gradInput = gradOutput
self.gradInput -= np.multiply(input_new_exp/np.sum(input_new_exp, axis=1, keepdims=True),
np.sum(gradOutput, axis=1, keepdims=True))
return self.gradInput
def __repr__(self):
return "LogSoftMax"
"""
Explanation: 3. LogSoftMax
input: batch_size x n_feats
output: batch_size x n_feats
$\text{logsoftmax}(x)_i = \log\text{softmax}(x)_i = x_i - \log {\sum_j \exp x_j}$
The main goal of this layer is to be used in computation of log-likelihood loss.
End of explanation
"""
class BatchNormalization(Module):
EPS = 1e-3
def __init__(self, alpha = 0.):
super(BatchNormalization, self).__init__()
self.alpha = alpha
self.moving_mean = 0
self.moving_variance = 0
def updateOutput(self, input):
# Your code goes here. ################################################
# use self.EPS please
if (self.training == True):
self.mean = np.mean(input, axis=0, keepdims=True)
self.var = np.var(input, axis=0, keepdims=True)
self.moving_mean = self.moving_mean * self.alpha + self.mean * (1 - self.alpha)
self.moving_variance = self.moving_variance * self.alpha + self.var * (1 - self.alpha)
self.output = (input - self.mean)/np.sqrt(self.var + self.EPS)
else:
self.output = (input - self.moving_mean) / np.sqrt(self.moving_variance + self.EPS)
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
self.gradInput = input.shape[0] * gradOutput
self.gradInput -= np.sum(gradOutput, axis=0, keepdims=True)
self.gradInput -= self.output * np.sum(gradOutput * self.output, axis=0, keepdims=True)
self.gradInput = 1/(input.shape[0] * np.sqrt(self.var + self.EPS)) * self.gradInput
return self.gradInput
def __repr__(self):
return "BatchNormalization"
class ChannelwiseScaling(Module):
"""
Implements linear transform of input y = \gamma * x + \beta
where \gamma, \beta - learnable vectors of length x.shape[-1]
"""
def __init__(self, n_out):
super(ChannelwiseScaling, self).__init__()
stdv = 1./np.sqrt(n_out)
self.gamma = np.ones(n_out)   # identity init; proper arrays so gradients and in-place updates work
self.beta = np.zeros(n_out)
self.gradGamma = np.zeros_like(self.gamma)
self.gradBeta = np.zeros_like(self.beta)
def updateOutput(self, input):
self.output = input * self.gamma + self.beta
return self.output
def updateGradInput(self, input, gradOutput):
self.gradInput = gradOutput * self.gamma
return self.gradInput
def accGradParameters(self, input, gradOutput):
self.gradBeta = np.sum(gradOutput, axis=0)
self.gradGamma = np.sum(gradOutput*input, axis=0)
def zeroGradParameters(self):
self.gradGamma.fill(0)
self.gradBeta.fill(0)
def getParameters(self):
return [self.gamma, self.beta]
def getGradParameters(self):
return [self.gradGamma, self.gradBeta]
def __repr__(self):
return "ChannelwiseScaling"
"""
Explanation: 4. Batch normalization
One of the most significant recent ideas that has impacted NNs is batch normalization. The idea is simple, yet effective: the features should be whitened ($mean = 0$, $std = 1$) all the way through the NN. This improves convergence for deep models, letting one train them in days rather than weeks. You are to implement the first part of the layer: feature normalization. The second part (the ChannelwiseScaling layer) is implemented below.
input: batch_size x n_feats
output: batch_size x n_feats
The layer should work as follows. While training (self.training == True) it transforms the input as $$y = \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}}$$
where $\mu$ and $\sigma^2$ are the mean and variance of the feature values in the batch, and $\epsilon$ is just a small number for numerical stability. Also, during training the layer should maintain exponential moving average values for the mean and variance:
self.moving_mean = self.moving_mean * alpha + batch_mean * (1 - alpha)
self.moving_variance = self.moving_variance * alpha + batch_variance * (1 - alpha)
During testing (self.training == False) the layer normalizes input using moving_mean and moving_variance.
Note that the decomposition of batch normalization into the normalization itself and channelwise scaling is just a common implementation choice. In general, "batch normalization" always assumes normalization + scaling.
End of explanation
"""
class Dropout(Module):
def __init__(self, p=0.5):
super(Dropout, self).__init__()
self.p = p
self.mask = None
def updateOutput(self, input):
# Your code goes here. ################################################
if (self.training == True):
if (self.p == 0.0):
self.mask = np.ones_like(input)
else:
self.mask = np.random.binomial(1, 1 - self.p, input.shape)
self.output = np.multiply(input, self.mask) / (1 - self.p)
else:
self.output = input
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
self.gradInput = gradOutput * self.mask / (1 - self.p)
return self.gradInput
def __repr__(self):
return "Dropout"
"""
Explanation: Practical notes. If BatchNormalization is placed after a linear transformation layer (including a dense layer, convolutions, or channelwise scaling) that implements a function like y = weight * x + bias, then adding the bias becomes useless and can be omitted, since its effect is discarded by the batch-mean subtraction. If BatchNormalization (followed by ChannelwiseScaling) is placed before a layer that propagates scale (including ReLU, LeakyReLU) followed by any linear transformation layer, then the parameter gamma in ChannelwiseScaling can be frozen, since it can be absorbed into the linear transformation layer.
5. Dropout
Implement dropout. The idea and implementation are really simple: just multiply the input by a $Bernoulli(1-p)$ mask. Here $p$ is the probability of an element being zeroed.
This has proven to be an effective technique for regularization and preventing the co-adaptation of neurons.
While training (self.training == True) it should sample a mask on each iteration (for every batch), zero out elements, and multiply the remaining elements by $1 / (1 - p)$. The latter is needed to keep the mean values of the features close to the mean values that will occur in test mode. When testing, this module should implement the identity transform, i.e. self.output = input.
input: batch_size x n_feats
output: batch_size x n_feats
End of explanation
"""
class ReLU(Module):
def __init__(self):
super(ReLU, self).__init__()
def updateOutput(self, input):
self.output = np.maximum(input, 0)
return self.output
def updateGradInput(self, input, gradOutput):
self.gradInput = np.multiply(gradOutput, input > 0)
return self.gradInput
def __repr__(self):
return "ReLU"
"""
Explanation: Activation functions
Here's the complete example for the Rectified Linear Unit non-linearity (aka ReLU):
End of explanation
"""
class LeakyReLU(Module):
def __init__(self, slope = 0.03):
super(LeakyReLU, self).__init__()
self.slope = slope
def updateOutput(self, input):
# Your code goes here. ################################################
self.output = np.where(input <= 0, self.slope * input, input)
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
input_copy = np.where(input > 0, 1, input)
input_copy = np.where(input_copy < 0, self.slope, input_copy)
self.gradInput = np.multiply(gradOutput, input_copy)
return self.gradInput
def __repr__(self):
return "LeakyReLU"
"""
Explanation: 6. Leaky ReLU
Implement the Leaky Rectified Linear Unit. Experiment with the slope.
End of explanation
"""
class ELU(Module):
def __init__(self, alpha = 1.0):
super(ELU, self).__init__()
self.alpha = alpha
def updateOutput(self, input):
# Your code goes here. ################################################
self.output = np.where(input <= 0, self.alpha * (np.exp(input) - 1), input)
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
input_copy = np.where(input > 0, 1, input)
input_copy = np.where(input_copy <= 0, self.alpha * np.exp(input_copy), input_copy)
self.gradInput = np.multiply(input_copy, gradOutput)
return self.gradInput
def __repr__(self):
return "ELU"
"""
Explanation: 7. ELU
Implement the Exponential Linear Unit (ELU) activation.
End of explanation
"""
class SoftPlus(Module):
def __init__(self):
super(SoftPlus, self).__init__()
def updateOutput(self, input):
# Your code goes here. ################################################
self.output = np.logaddexp(0, input)  # log(1 + exp(x)) without overflow for large x
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
self.gradInput = np.exp(-np.logaddexp(0, -input)) * gradOutput  # sigmoid(x), numerically stable
return self.gradInput
def __repr__(self):
return "SoftPlus"
"""
Explanation: 8. SoftPlus
Implement the SoftPlus activation. Note how it looks a lot like ReLU.
End of explanation
"""
class Criterion(object):
def __init__ (self):
self.output = None
self.gradInput = None
def forward(self, input, target):
"""
Given an input and a target, compute the loss function
associated to the criterion and return the result.
For consistency this function should not be overridden;
all the code goes in `updateOutput`.
"""
return self.updateOutput(input, target)
def backward(self, input, target):
"""
Given an input and a target, compute the gradients of the loss function
associated to the criterion and return the result.
For consistency this function should not be overridden;
all the code goes in `updateGradInput`.
"""
return self.updateGradInput(input, target)
def updateOutput(self, input, target):
"""
Function to override.
"""
return self.output
def updateGradInput(self, input, target):
"""
Function to override.
"""
return self.gradInput
def __repr__(self):
"""
Pretty printing. Should be overridden in every module if you want
to have a readable description.
"""
return "Criterion"
"""
Explanation: Criterions
Criterions are used to score the model's answers.
End of explanation
"""
class MSECriterion(Criterion):
def __init__(self):
super(MSECriterion, self).__init__()
def updateOutput(self, input, target):
self.output = np.sum(np.power(input - target,2)) / input.shape[0]
return self.output
def updateGradInput(self, input, target):
self.gradInput = (input - target) * 2 / input.shape[0]
return self.gradInput
def __repr__(self):
return "MSECriterion"
"""
Explanation: The MSECriterion, which is the basic L2 loss usually used for regression, is implemented here for you.
- input: batch_size x n_feats
- target: batch_size x n_feats
- output: scalar
End of explanation
"""
class ClassNLLCriterionUnstable(Criterion):
EPS = 1e-15
def __init__(self):
a = super(ClassNLLCriterionUnstable, self)
super(ClassNLLCriterionUnstable, self).__init__()
def updateOutput(self, input, target):
# Use this trick to avoid numerical errors
input_clamp = np.clip(input, self.EPS, 1 - self.EPS)
# Your code goes here. ################################################
self.output = - np.sum(np.multiply(target, np.log(input_clamp)))/input.shape[0]
return self.output
def updateGradInput(self, input, target):
# Use this trick to avoid numerical errors
input_clamp = np.clip(input, self.EPS, 1 - self.EPS)
# Your code goes here. ################################################
self.gradInput = - (target / input_clamp)/input.shape[0]
return self.gradInput
def __repr__(self):
return "ClassNLLCriterionUnstable"
"""
Explanation: 9. Negative LogLikelihood criterion (numerically unstable)
Your task is to implement the ClassNLLCriterion. It should implement the multiclass log loss. Although there is a sum over y (target) in the formula,
remember that targets are one-hot encoded; this fact simplifies the computations a lot. Note that criterions are the only places where you divide by the batch size. Also, there is a small hack: the probabilities are clipped away from 0 and 1 to avoid computing log(0).
- input: batch_size x n_feats - probabilities
- target: batch_size x n_feats - one-hot representation of ground truth
- output: scalar
End of explanation
"""
class ClassNLLCriterion(Criterion):
def __init__(self):
a = super(ClassNLLCriterion, self)
super(ClassNLLCriterion, self).__init__()
def updateOutput(self, input, target):
# Your code goes here. ################################################
self.output = - np.sum(np.multiply(target, input))/input.shape[0]
return self.output
def updateGradInput(self, input, target):
# Your code goes here. ################################################
self.gradInput = - target/input.shape[0]
return self.gradInput
def __repr__(self):
return "ClassNLLCriterion"
"""
Explanation: 10. Negative LogLikelihood criterion (numerically stable)
input: batch_size x n_feats - log probabilities
target: batch_size x n_feats - one-hot representation of ground truth
output: scalar
The task is similar to the previous one, but now the criterion input is the output of a log-softmax layer. This decomposition allows us to avoid problems with computing the forward and backward passes of log().
End of explanation
"""
class ClassContrastiveCriterion(Criterion):
def __init__(self, M):
a = super(ClassContrastiveCriterion, self)
super(ClassContrastiveCriterion, self).__init__()
self.M = M #margin for punishing the negative pairs
def updateOutput(self, input, target):
# Your code goes here. ################################################
# Taken from the website given
def compute_distances_no_loops(X):
dists = -2 * np.dot(X, X.T) + np.sum(X**2, axis=1) + \
np.sum(X**2, axis=1)[:, np.newaxis]
return dists
# Calculate N_{neg} and N_{pos}
target_aug = (target[np.newaxis, :].T == target)
target_aug_neg = target_aug ^ np.eye(target.shape[0]).astype('bool')
Np = target_aug_neg.sum()
Nn = (target.shape[0] * (target.shape[0] - 1)) - Np
# Calculate distances
dist = compute_distances_no_loops(input)
loss = ~target_aug * 1/(2*Nn) * (np.maximum(0., self.M - np.sqrt(np.abs(dist)))**2)
loss = loss + target_aug_neg * dist / (2*Np)
self.output = loss.sum()
return self.output
def updateGradInput(self, input, target):
# Your code goes here. ################################################
def compute_distances_no_loops(X):
dists = X - X[:, np.newaxis]
norms = np.linalg.norm(dists, axis=-1)[:,:,np.newaxis]
norms = np.where(norms == 0, 1.0, norms)
ndist = dists / norms
return dists, norms, ndist
dist, norms, ndist = compute_distances_no_loops(input)
target_aug = target[np.newaxis, :]
target_aug = (target_aug.T == target)
target_aug_neg = target_aug ^ np.eye(target.shape[0]).astype('bool')
Np = target_aug_neg.sum()
Nn = (target.shape[0] * (target.shape[0] - 1)) - Np
loss_grad = - 2/Nn * ( ~target_aug[:,:,np.newaxis] * (np.maximum(0, self.M - norms) * ndist)).sum(axis=0)
loss_grad += 2/Np * (target_aug_neg[:,:,np.newaxis] * dist).sum(axis=0)
self.gradInput = loss_grad
return self.gradInput
def __repr__(self):
return "ClassContrastiveCriterion"
"""
Explanation: 11. Contrastive criterion
input: batch_size x n_feats - embedding vectors (can be outputs of any layer, but usually the last one before the classification linear layer is used)
target: labels - ground truth class indices (not one-hot encodings)
output: scalar
The contrastive loss pulls examples of the same class closer together (in terms of the embedding layer) and pushes examples of different classes away from each other.
This is the formula for the contrastive loss in terms of embedding pairs:
$$L_c(x_i, x_j, label_i, label_j) = \begin{cases}\frac{1}{2N_{pos}} \| x_i - x_j \|_{2}^{2} & label_i = label_j \\ \frac{1}{2N_{neg}} \left(\max(0, M - \| x_i - x_j \|_{2})\right)^2 & label_i \neq label_j \end{cases}$$
Where $M$ is a hyperparameter (constant),
$N_{pos}$ and $N_{neg}$ - the number of 'positive' pairs (examples of the same class) and 'negative' pairs (examples of different classes).
You should generate all the possible pairs in the batch (ensure that your batch size is > 10, so that there are positive pairs), but remember to be efficient and use vectorized operations, hint: https://medium.com/dataholiks-distillery/l2-distance-matrix-vectorization-trick-26aa3247ac6c. Then compute the Euclidean distances for them, and finally compute the loss value.
When computing the gradients with respect to inputs, you should (as always) use the chain rule: compute the loss gradients w.r.t distances, compute the distance gradients w.r.t input features (each example may take part in different pairs).
End of explanation
"""
def sgd_momentum(variables, gradients, config, state):
# 'variables' and 'gradients' have complex structure, accumulated_grads will be stored in a simpler one
state.setdefault('accumulated_grads', {})
var_index = 0
for current_layer_vars, current_layer_grads in zip(variables, gradients):
for current_var, current_grad in zip(current_layer_vars, current_layer_grads):
old_grad = state['accumulated_grads'].setdefault(var_index, np.zeros_like(current_grad))
np.add(config['momentum'] * old_grad, config['learning_rate'] * current_grad, out=old_grad)
current_var -= old_grad
var_index += 1
"""
Explanation: Optimizers
SGD optimizer with momentum
variables - list of lists of variables (one list per layer)
gradients - list of lists of current gradients (same structure as for variables, one array for each var)
config - dict with optimization parameters (learning_rate and momentum)
state - dict with optimizer state (used to save accumulated gradients)
End of explanation
"""
def adam_optimizer(variables, gradients, config, state):
# 'variables' and 'gradients' have complex structure, accumulated_grads will be stored in a simpler one
state.setdefault('m', {}) # first moment vars
state.setdefault('v', {}) # second moment vars
state.setdefault('t', 0) # timestamp
state['t'] += 1
for k in ['learning_rate', 'beta1', 'beta2', 'epsilon']:
assert k in config, config.keys()
var_index = 0
lr_t = config['learning_rate'] * np.sqrt(1 - config['beta2']**state['t']) / (1 - config['beta1']**state['t'])
for current_layer_vars, current_layer_grads in zip(variables, gradients):
for current_var, current_grad in zip(current_layer_vars, current_layer_grads):
var_first_moment = state['m'].setdefault(var_index, np.zeros_like(current_grad))
var_second_moment = state['v'].setdefault(var_index, np.zeros_like(current_grad))
# <YOUR CODE> #######################################
# update `current_var_first_moment`, `var_second_moment` and `current_var` values
np.add(config['beta1'] * var_first_moment, (1 - config['beta1']) * current_grad, out=var_first_moment)
np.add(config['beta2'] * var_second_moment, (1 - config['beta2']) * np.multiply(current_grad, current_grad), out=var_second_moment)
current_var -= lr_t * var_first_moment / (np.sqrt(var_second_moment) + config['epsilon'])
# small checks that you've updated the state; use np.add for rewriting np.arrays values
assert var_first_moment is state['m'].get(var_index)
assert var_second_moment is state['v'].get(var_index)
var_index += 1
"""
Explanation: 12. Adam optimizer
variables - list of lists of variables (one list per layer)
gradients - list of lists of current gradients (same structure as for variables, one array for each var)
config - dict with optimization parameters (learning_rate, beta1, beta2, epsilon)
state - dict with optimizer state (used to save 1st and 2nd moments for vars)
Formulas for optimizer:
Current step learning rate: $$\text{lr}_t = \text{learning\_rate} * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t}$$
First moment of var: $$\mu_t = \beta_1 * \mu_{t-1} + (1 - \beta_1)g$$
Second moment of var: $$v_t = \beta_2 * v_{t-1} + (1 - \beta_2)g*g$$
New values of var: $$\text{variable} = \text{variable} - \text{lr}_t * \frac{\mu_t}{\sqrt{v_t} + \epsilon}$$
End of explanation
"""
class Flatten(Module):
def __init__(self):
super(Flatten, self).__init__()
def updateOutput(self, input):
self.output = input.reshape(len(input), -1)
return self.output
def updateGradInput(self, input, gradOutput):
self.gradInput = gradOutput.reshape(input.shape)
return self.gradInput
def __repr__(self):
return "Flatten"
"""
Explanation: Flatten layer
Just reshapes inputs and gradients. It's usually used as a proxy layer between Conv2d and Linear.
End of explanation
"""
|
maxvogel/NetworKit-mirror2 | Doc/Notebooks/NetworKit_UserGuide.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
"""
Explanation: NetworKit User Guide
About NetworKit
NetworKit is an open-source toolkit for high-performance
network analysis. Its aim is to provide tools for the analysis of large
networks in the size range from thousands to billions of edges. For this
purpose, it implements efficient graph algorithms, many of them parallel to
utilize multicore architectures. These are meant to compute standard measures
of network analysis, such as degree sequences, clustering coefficients and
centrality. In this respect, NetworKit is comparable
to packages such as NetworkX, albeit with a focus on parallelism
and scalability. NetworKit is also a testbed for algorithm engineering and
contains a few novel algorithms from recently published research, especially
in the area of community detection.
Introduction
This notebook provides an interactive introduction to the features of NetworKit, consisting of text and executable code. We assume that you have read the Readme and successfully built the core library and the Python module. Code cells can be run one by one (e.g. by selecting the cell and pressing shift+enter), or all at once (via the Cell->Run All command). Try running all cells now to verify that NetworKit has been properly built and installed.
Preparation
This notebook creates some plots. To show them in the notebook, matplotlib must be imported and we need to activate matplotlib's inline mode:
End of explanation
"""
from networkit import *
"""
Explanation: NetworKit is a hybrid built from C++ and Python code: Its core functionality is implemented in C++ for performance reasons, and then wrapped for Python using the Cython toolchain. This allows us to expose high-performance parallel code as a normal Python module. On the surface, NetworKit is just that and can be imported accordingly:
End of explanation
"""
cd ../../
"""
Explanation: IPython lets us use familiar shell commands in a Python interpreter. Use one of them now to change into the directory of your NetworKit download:
End of explanation
"""
G = readGraph("input/PGPgiantcompo.graph", Format.METIS)
"""
Explanation: Reading and Writing Graphs
Let us start by reading a network from a file on disk: PGPgiantcompo.graph. In the course of this tutorial, we are going to work on the PGPgiantcompo network, a social network/web of trust in which nodes are PGP keys and an edge represents a signature from one key on another. It is distributed with NetworKit as a good starting point.
There is a convenient function in the top namespace which tries to guess the input format and select the appropriate reader:
End of explanation
"""
graphio.METISGraphReader().read("input/PGPgiantcompo.graph")
# is the same as: readGraph("input/PGPgiantcompo.graph", Format.METIS)
"""
Explanation: There is a large variety of formats for storing graph data in files. For NetworKit, the currently best supported format is the METIS adjacency format. Various example graphs in this format can be found here. The readGraph function tries to be an intelligent wrapper for various reader classes. In this example, it uses the METISGraphReader which is located in the graphio submodule, alongside other readers. These classes can also be used explicitly:
End of explanation
"""
graphio.writeGraph(G,"output/PGPgiantcompo.graphviz", Format.GraphViz)
"""
Explanation: It is also possible to specify the format for readGraph() and writeGraph(). Supported formats can be found via [graphio.]Format. However, a graph format is generally only supported to the extent that the NetworKit::Graph can hold and use its data. Please note that not all graph formats are supported for both reading and writing.
Thus, it is possible to use NetworKit to convert graphs between formats. Let's say I need the previously read PGP graph in the Graphviz format:
End of explanation
"""
graphio.convertGraph(Format.LFR, Format.GML, "input/example.edgelist", "output/example.gml")
"""
Explanation: NetworKit also provides a function to convert graphs directly:
End of explanation
"""
n = G.numberOfNodes()
m = G.numberOfEdges()
print(n, m)
G.toString()
"""
Explanation: The Graph Object
Graph is the central class of NetworKit. An object of this type represents an undirected, optionally weighted network. Let us inspect several of the methods which the class provides.
End of explanation
"""
V = G.nodes()
print(V[:10])
E = G.edges()
print(E[:10])
G.hasEdge(42,11)
"""
Explanation: Nodes are simply integer indices, and edges are pairs of such indices.
End of explanation
"""
G.weight(42,11)
"""
Explanation: This network is unweighted, meaning that each edge has the default weight of 1.
End of explanation
"""
cc = components.ConnectedComponents(G)
cc.run()
print("number of components ", cc.numberOfComponents())
v = 0
print("component of node ", v, ": ", cc.componentOfNode(v))
#print("map of component sizes: ", cc.getComponentSizes())
"""
Explanation: Connected Components
A connected component is a set of nodes in which each pair of nodes is connected by a path. The following function determines the connected components of a graph:
End of explanation
"""
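The concept just described is compact enough to illustrate directly. Below is a pure-Python sketch of component counting via union-find over an edge list; it is an illustration of the idea, not NetworKit's actual implementation:

```python
def connected_components(num_nodes, edges):
    """Label connected components with union-find (a pure-Python sketch)."""
    parent = list(range(num_nodes))

    def find(u):
        while parent[u] != u:              # path halving keeps trees shallow
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    for u, v in edges:                     # union the endpoints of every edge
        parent[find(u)] = find(v)

    labels = [find(u) for u in range(num_nodes)]
    return len(set(labels)), labels

count, labels = connected_components(5, [(0, 1), (1, 2), (3, 4)])
print(count)  # 2
```

Nodes 0, 1, 2 end up with one shared label and nodes 3, 4 with another.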
dd = sorted(centrality.DegreeCentrality(G).run().scores(), reverse=True)
plt.xscale("log")
plt.xlabel("degree")
plt.yscale("log")
plt.ylabel("number of nodes")
plt.plot(dd)
"""
Explanation: Degree Distribution
Node degree, the number of edges connected to a node, is one of the most studied properties of networks. Types of networks are often characterized in terms of their distribution of node degrees. We obtain and visualize the degree distribution of our example network as follows.
End of explanation
"""
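The tally behind the plot above is simple. As a hedged pure-Python sketch of the same computation on a plain edge list (independent of NetworKit and matplotlib):

```python
from collections import Counter

def degree_distribution(edges):
    """Map each degree to the number of nodes with that degree (undirected edge list)."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return Counter(degree.values())

dist = degree_distribution([(0, 1), (0, 2), (0, 3), (1, 2)])
print(dist)  # Counter({2: 2, 3: 1, 1: 1})
```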
v = 0
bfs = graph.BFS(G, v)
bfs.run()
bfsdist = bfs.getDistances()
"""
Explanation: Search and Shortest Paths
A simple breadth-first search from a starting node can be performed as follows:
End of explanation
"""
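For intuition about what the BFS above computes, here is a minimal pure-Python BFS over an adjacency-list dictionary; a sketch of the concept, not NetworKit's implementation:

```python
from collections import deque

def bfs_distances(adj, source):
    """Distances (in hops) from source to every node; -1 if unreachable."""
    dist = {node: -1 for node in adj}
    dist[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == -1:          # first visit = shortest hop count
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1], 4: []}
print(bfs_distances(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: -1}
```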
sum(bfsdist) / len(bfsdist)
"""
Explanation: The return value is a list of distances from v to other nodes - indexed by node id. For example, we can now calculate the mean distance from the starting node to all other nodes:
End of explanation
"""
dijkstra = graph.Dijkstra(G, v)
dijkstra.run()
spdist = dijkstra.getDistances()
sum(spdist) / len(spdist)
"""
Explanation: Similarly, Dijkstra's algorithm yields shortest path distances from a starting node to all other nodes in a weighted graph. Because PGPgiantcompo is an unweighted graph, the result is the same here:
End of explanation
"""
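The weighted counterpart used above can likewise be sketched in a few lines of pure Python with a binary heap (again an illustration of the algorithm, not NetworKit's code):

```python
import heapq

def dijkstra(adj, source):
    """Shortest-path distances from source; adj maps node -> [(neighbor, weight), ...]."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                       # stale heap entry, skip it
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

adj = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}
print(dijkstra(adj, 'a'))  # {'a': 0, 'b': 1, 'c': 3}
```

The path a-b-c (cost 3) beats the direct edge a-c (cost 4).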
K = readGraph("input/karate.graph",Format.METIS)
coreDec = centrality.CoreDecomposition(K)
coreDec.run()
"""
Explanation: Core Decomposition
A $k$-core decomposition of a graph is performed by successively peeling away nodes with degree less than $k$. The remaining nodes form the $k$-core of the graph.
End of explanation
"""
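The peeling procedure described above can be sketched directly in pure Python (a naive quadratic-time illustration, not NetworKit's efficient algorithm):

```python
def core_numbers(adj):
    """Core number per node by repeatedly peeling minimum-degree nodes.

    adj maps node -> set of neighbors (modified on a copy).
    """
    adj = {u: set(vs) for u, vs in adj.items()}
    core = {}
    k = 0
    while adj:
        k = max(k, min(len(vs) for vs in adj.values()))
        peel = [u for u, vs in adj.items() if len(vs) <= k]
        while peel:
            u = peel.pop()
            if u not in adj:
                continue
            core[u] = k
            for v in adj.pop(u):           # remove u and update neighbors
                adj[v].discard(u)
                if len(adj[v]) <= k:
                    peel.append(v)
    return core

# Triangle 0-1-2 plus a pendant node 3 attached to node 0
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(core_numbers(adj))  # node 3 gets core number 1; nodes 0, 1, 2 get 2
```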
set(coreDec.scores())
viztasks.drawGraph(K, nodeSizes=[(k**2)*20 for k in coreDec.scores()])
"""
Explanation: Core decomposition assigns a core number to each node, being the maximum $k$ for which a node is contained in the $k$-core. For this small graph, core numbers have the following range:
End of explanation
"""
community.detectCommunities(G)
"""
Explanation: Community Detection
This section demonstrates the community detection capabilities of NetworKit. Community detection is concerned with identifying groups of nodes which are significantly more densely connected to each other than to the rest of the network.
Code for community detection is contained in the community module. The module provides a top-level function to quickly perform community detection with a suitable algorithm and print some stats about the result.
End of explanation
"""
communities = community.detectCommunities(G)
"""
Explanation: The function prints some statistics and returns the partition object representing the communities in the network as an assignment of nodes to community labels. Let's capture the result of the last function call.
End of explanation
"""
community.Modularity().getQuality(communities, G)
"""
Explanation: Modularity is the primary measure for the quality of a community detection solution. The value is in the range [-0.5,1] and usually depends both on the performance of the algorithm and the presence of distinctive community structures in the network.
End of explanation
"""
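For reference, Newman's modularity is $Q = \sum_c \left(\frac{m_c}{m} - \left(\frac{d_c}{2m}\right)^2\right)$, where $m_c$ is the number of intra-community edges, $d_c$ the total degree of community $c$, and $m$ the number of edges. A pure-Python sketch of this formula (not NetworKit's implementation):

```python
def modularity(edges, community_of):
    """Newman modularity Q = sum_c (m_c/m - (d_c/2m)^2) over an undirected edge list."""
    m = len(edges)
    intra = {}       # edges with both endpoints in the same community
    degree_sum = {}  # total degree per community
    for u, v in edges:
        cu, cv = community_of[u], community_of[v]
        degree_sum[cu] = degree_sum.get(cu, 0) + 1
        degree_sum[cv] = degree_sum.get(cv, 0) + 1
        if cu == cv:
            intra[cu] = intra.get(cu, 0) + 1
    return sum(intra.get(c, 0) / m - (d / (2 * m)) ** 2
               for c, d in degree_sum.items())

# Two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'}
print(round(modularity(edges, part), 4))  # 0.3571
```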
type(communities)
print("{0} elements assigned to {1} subsets".format(communities.numberOfElements(), communities.numberOfSubsets()))
print("the biggest subset has size {0}".format(max(communities.subsetSizes())))
"""
Explanation: The Partition Data Structure
The result of community detection is a partition of the node set into disjoint subsets. It is represented by the Partition data structure, which provides several methods for inspecting and manipulating a partition of a set of elements (which need not be the nodes of a graph).
End of explanation
"""
community.writeCommunities(communities, "output/communties.partition")
"""
Explanation: The contents of a partition object can be written to file in a simple format, in which each line i contains the subset id of node i.
End of explanation
"""
community.detectCommunities(G, algo=community.PLP(G))
"""
Explanation: Choice of Algorithm
The community detection function used a good default choice for an algorithm: PLM, our parallel implementation of the well-known Louvain method. It yields a high-quality solution at reasonably fast running times. Let us now apply a variation of this algorithm.
End of explanation
"""
sizes = communities.subsetSizes()
sizes.sort(reverse=True)
ax1 = plt.subplot(2,1,1)
ax1.set_ylabel("size")
ax1.plot(sizes)
ax2 = plt.subplot(2,1,2)
ax2.set_xscale("log")
ax2.set_yscale("log")
ax2.set_ylabel("size")
ax2.plot(sizes)
"""
Explanation: We have switched on refinement, and we can see how modularity is slightly improved. For a small network like this, this takes only marginally longer.
Visualizing the Result
We can easily plot the distribution of community sizes as follows. While the distribution is skewed, it does not seem to fit a power-law, as shown by a log-log plot.
End of explanation
"""
from networkit.graph import Subgraph
c2 = communities.getMembers(2)
sg = Subgraph()
g2 = sg.fromNodes(G,c2)
communities.subsetSizeMap()[2]
g2.numberOfNodes()
"""
Explanation: Subgraph
NetworKit supports the creation of subgraphs from an original graph and a set of nodes. This might be useful in case you want to analyze certain communities of a graph. Let's say that community 2 of the above result is of further interest, so we want a new graph that consists of the nodes and intra-cluster edges of community 2.
End of explanation
"""
communities2 = community.detectCommunities(g2)
"""
Explanation: As we can see, the number of nodes in our subgraph matches the number of nodes of community 2. The subgraph can be used like any other graph object, e.g. further community analysis:
End of explanation
"""
K = readGraph("input/karate.graph", Format.METIS)
bc = centrality.Betweenness(K)
bc.run()
"""
Explanation: Centrality
Centrality measures the relative importance of a node within a graph. Code for centrality analysis is grouped into the centrality module.
Betweenness Centrality
We implement Brandes' algorithm for the exact calculation of betweenness centrality. While the algorithm is efficient, it still needs to calculate shortest paths between all pairs of nodes, so its scalability is limited. We demonstrate it here on the small Karate club graph.
End of explanation
"""
bc.ranking()[:10] # the 10 most central nodes
"""
Explanation: We have now calculated centrality values for the given graph, and can retrieve them either as an ordered ranking of nodes or as a list of values indexed by node id.
End of explanation
"""
abc = centrality.ApproxBetweenness(G, epsilon=0.1)
abc.run()
"""
Explanation: Approximation of Betweenness
Since exact calculation of betweenness scores is often out of reach, NetworKit provides an approximation algorithm based on path sampling. Here we estimate betweenness centrality in PGPgiantcompo, with a probabilistic guarantee that the error is no larger than an additive constant $\epsilon$.
End of explanation
"""
abc.ranking()[:10]
"""
Explanation: The 10 most central nodes according to betweenness are then
End of explanation
"""
# Eigenvector centrality
ec = centrality.EigenvectorCentrality(K)
ec.run()
ec.ranking()[:10] # the 10 most central nodes
# PageRank
pr = centrality.PageRank(K, 1e-6)
pr.run()
pr.ranking()[:10] # the 10 most central nodes
"""
Explanation: Eigenvector Centrality and PageRank
Eigenvector centrality and its variant PageRank assign relative importance to nodes according to their connections, incorporating the idea that edges to high-scoring nodes contribute more. PageRank is a version of eigenvector centrality which introduces a damping factor, modeling a random web surfer which at some point stops following links and jumps to a random page. In PageRank theory, centrality is understood as the probability of such a web surfer to arrive on a certain page. Our implementation of both measures is based on parallel power iteration, a relatively simple eigensolver.
End of explanation
"""
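The power-iteration idea behind both measures described above can be sketched sequentially in pure Python; this is an illustration of the mechanism, not NetworKit's parallel C++ solver:

```python
def pagerank(adj, damping=0.85, iterations=100):
    """PageRank via plain power iteration (a sequential sketch of the idea)."""
    nodes = list(adj)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iterations):
        new = {u: (1.0 - damping) / n for u in nodes}  # random-jump term
        for u in nodes:
            out = adj[u]
            if out:
                share = damping * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:                          # dangling node: spread mass evenly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

adj = {'a': ['b', 'c'], 'b': ['c'], 'c': ['a']}
ranks = pagerank(adj)
print(max(ranks, key=ranks.get))  # 'c' collects links from both 'a' and 'b'
```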
import networkx as nx
nxG = nxadapter.nk2nx(G) # convert from NetworKit.Graph to networkx.Graph
print(nx.degree_assortativity_coefficient(nxG))
"""
Explanation: NetworkX Compatibility
NetworkX is a popular Python package for network analysis. To let both packages complement each other, and to enable the adaptation of existing NetworkX-based code, we support the conversion of the respective graph data structures.
End of explanation
"""
ERG = generators.ErdosRenyiGenerator(1000, 0.1).generate()
"""
Explanation: Generating Graphs
An important subfield of network science is the design and analysis of generative models. A variety of generative models have been proposed with the aim of reproducing one or several of the properties we find in real-world complex networks. NetworKit includes generator algorithms for several of them.
The Erdös-Renyi model is the most basic random graph model, in which each edge exists with the same uniform probability. NetworKit provides an efficient generator:
End of explanation
"""
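The model just described is one line of logic: include each of the $n(n-1)/2$ possible edges independently with probability $p$. A naive pure-Python sketch (NetworKit's generator is far more efficient, especially for sparse graphs):

```python
import random

def erdos_renyi(n, p, seed=None):
    """G(n, p): include each possible undirected edge independently with probability p."""
    rng = random.Random(seed)
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if rng.random() < p]

edges = erdos_renyi(1000, 0.1, seed=42)
expected = 0.1 * 1000 * 999 / 2   # expected edge count: p * n(n-1)/2
print(len(edges), "edges; expected about", int(expected))
```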
CRG = generators.ClusteredRandomGraphGenerator(200, 4, 0.2, 0.002).generate()
community.detectCommunities(CRG)
"""
Explanation: A simple way to generate a random graph with community structure is to use the ClusteredRandomGraphGenerator. It uses a simple variant of the Erdös-Renyi model: The node set is partitioned into a given number of subsets. Nodes within the same subset have a higher edge probability.
End of explanation
"""
degreeSequence = [G.degree(v) for v in G.nodes()]
clgen = generators.ChungLuGenerator(degreeSequence)
"""
Explanation: The Chung-Lu model (also called configuration model) generates a random graph which corresponds to a given degree sequence, i.e. has the same expected degree sequence. It can therefore be used to replicate some of the properties of a given real network, while others are not retained, such as high clustering and the specific community structure.
End of explanation
"""
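The idea behind the generator above can be sketched in pure Python: connect $u$ and $v$ with probability proportional to the product of their target degrees. This is a naive $O(n^2)$ illustration with the probability capped at 1, not NetworKit's implementation:

```python
import random

def chung_lu(degrees, seed=None):
    """Chung-Lu sketch: edge (u, v) appears with probability d_u * d_v / (2m),
    so each node's expected degree roughly matches the input sequence."""
    rng = random.Random(seed)
    two_m = sum(degrees)
    n = len(degrees)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < min(1.0, degrees[u] * degrees[v] / two_m):
                edges.append((u, v))
    return edges

degrees = [3, 3, 2, 2, 1, 1]
edges = chung_lu(degrees, seed=7)
print(len(edges), "edges generated; expected roughly", sum(degrees) // 2)
```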
getLogLevel() # the default loglevel
setLogLevel("TRACE") # set to most verbose mode
setLogLevel("ERROR") # set back to default
"""
Explanation: Settings
In this section we discuss global settings.
Logging
When using NetworKit from the command line, the verbosity of console output can be controlled via several loglevels, from least to most verbose: FATAL, ERROR, WARN, INFO, DEBUG and TRACE. (Currently, logging is only available on the console and not visible in the IPython Notebook).
End of explanation
"""
setNumberOfThreads(4) # set the maximum number of available threads
getMaxNumberOfThreads() # see maximum number of available threads
getCurrentNumberOfThreads() # the number of threads currently executing
"""
Explanation: Please note that the default build setting is optimized (--optimize=Opt) and thus every LOG statement below INFO is removed. If you need DEBUG and TRACE statements, please build the extension module by appending --optimize=Dbg when calling the setup script.
Parallelism
The degree of parallelism can be controlled and monitored in the following way:
End of explanation
"""
|
radhikapc/foundation-homework | homework06/Homework06-Dark Sky Forecast API-Radhika.ipynb | mit | #https://api.forecast.io/forecast/APIKEY/LATITUDE,LONGITUDE,TIME
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/12.971599,77.594563')
data = response.json()
#print(data)
#print(data.keys())
print("Bangalore is in", data['timezone'], "timezone")
timezone_find = data.keys()
#find representation
print("The longitude is", data['longitude'], "The latitude is", data['latitude'])
"""
Explanation: 1 Make a request from the Forecast.io API for where you were born (or lived, or want to visit!)
Tip: Once you've imported the JSON into a variable, check the timezone's name to make sure it seems like it got the right part of the world!
Tip 2: How is north vs. south and east vs. west latitude/longitude represented? Is it the normal North/South/East/West?
End of explanation
"""
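Regarding Tip 2 in the question above: latitude is positive for North and negative for South, while longitude is positive for East and negative for West. A small hypothetical helper (not part of the Forecast.io API) makes this sign convention explicit:

```python
def describe_coordinates(lat, lon):
    """Render signed decimal coordinates with N/S and E/W suffixes."""
    ns = "N" if lat >= 0 else "S"
    ew = "E" if lon >= 0 else "W"
    return f"{abs(lat)}{ns}, {abs(lon)}{ew}"

print(describe_coordinates(12.971599, 77.594563))   # Bangalore: 12.971599N, 77.594563E
print(describe_coordinates(40.712784, -74.005941))  # New York: 40.712784N, 74.005941W
```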
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941, 2016-06-08T09:00:46-0400')
data = response.json()
#print(data.keys())
print("The current windspeed at New York is", data['currently']['windSpeed'])
#print(data['currently']) - find how much warmer
feels_like = data['currently']['apparentTemperature']
actual = data['currently']['temperature']
print("It feels", round(feels_like - actual, 2), "degrees warmer than it actually is")
"""
Explanation: 2. What's the current wind speed? How much warmer does it feel than it actually is?
End of explanation
"""
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941, 2016-06-08T09:00:46-0400')
data = response.json()
#print(data.keys())
#print(data['daily']['data'])
today = data['daily']['data'][0]
print("The fraction of the moon visible today in New York is", today['moonPhase'], "(between the new moon and the first-quarter moon)")
"""
Explanation: 3. Moon Visible in New York
The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
End of explanation
"""
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941, 2016-06-08T09:00:46-0400')
data = response.json()
today = data['daily']['data'][0]
tem_diff = today['temperatureMax'] - today['temperatureMin']
print("The temperature difference for today is approximately", round(tem_diff))
"""
Explanation: 4. What's the difference between the high and low temperatures for today?
End of explanation
"""
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941')
data = response.json()
temp = data['daily']['data']
#print(temp)
count = 0
for i in temp:
    count = count + 1
    print("The high temperature for day", count, "is", i['temperatureMax'],
          "and the low temperature is", i['temperatureMin'])
    if i['temperatureMin'] < 40:
        print("It's a cold day")
    elif i['temperatureMin'] < 60:
        print("It's a warm day!")
    else:
        print("It's a hot day")
"""
Explanation: 5. Next Week's Prediction
Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
End of explanation
"""
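The hot/warm/cold cutoffs in the loop above are a judgment call, not part of the Dark Sky API. One way to keep them in a single hypothetical helper (the 40 °F and 60 °F thresholds are this notebook's own choices):

```python
def classify(temperature):
    """Label a temperature in Fahrenheit using this notebook's own thresholds."""
    if temperature < 40:
        return "cold"
    elif temperature < 60:
        return "warm"
    return "hot"

for t in (35, 55, 80):
    print(t, "->", classify(t))  # 35 -> cold, 55 -> warm, 80 -> hot
```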
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/25.761680,-80.191790, 2016-06-09T12:01:00-0400')
data = response.json()
#print(data['hourly']['data'])
Tem = data['hourly']['data']
count = 0
for i in Tem:
count = count +1
    print("The temperature in Miami, Florida on June 9 in hour", count, "is", i['temperature'])
if float(i['cloudCover']) > 0.5:
print("and is cloudy")
"""
Explanation: 6.Weather in Florida
What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
End of explanation
"""
for year in (1980, 1990, 2000):
    url = ('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/'
           '40.771133,-73.974187,{}-12-25T12:01:00-0400'.format(year))
    response = requests.get(url)
    data = response.json()
    Temp = data['currently']['temperature']
    print("The temperature in Central Park, NY on the Christmas Day of", year, "was", Temp)
"""
Explanation: 7. Temperature in Central Park
What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?
End of explanation
"""
|
yedivanseven/bestPy | examples/05_Recommender.ipynb | gpl-3.0 | import sys
sys.path.append('../..')
"""
Explanation: CHAPTER 5
The Recommender
Now that we got to know bestPy's powerful algorithms, we can't wait to use them, right? In trying to do so, however, we might realize that they are pretty bare-bones and inconvenient to handle. For example, we need to know the internally used integer index of a customer to get a prediction for him/her instead of just getting a prediction for the customer ID. Likewise, we only get back an array of scores for each article and still have to search for the most highly recommended, still have to translate its index into an actual article ID, etc.
Taking all this burden off the user, who should focus on selecting and tweaking the algorithms, there is The Recommender.
Preliminaries
We only need this because the examples folder is a subdirectory of the bestPy package.
End of explanation
"""
from bestPy import RecoBasedOn, write_log_to # Additionally import RecoBasedOn
from bestPy.datastructures import Transactions
from bestPy.algorithms import Baseline, TruncatedSVD # Import Baseline and TruncatedSVD as exemplary algorithms
logfile = 'logfile.txt'
write_log_to(logfile, 20)
file = 'examples_data.csv'
data = Transactions.from_csv(file)
"""
Explanation: Imports, logging, and data
On top of the basics, we still import the Baseline and the TruncatedSVD algorithm as an example, but now focus on The Recommender, which is accessible in the top-level package as RecoBasedOn.
End of explanation
"""
recommendation = RecoBasedOn(data)
"""
Explanation: Creating a new RecoBasedOn object
We will see different ways of doing this further down but, for now, all we need is data in the form of a Transactions instance.
End of explanation
"""
recommendation.algorithm
"""
Explanation: Parameters of The Recommender object
Inspecting the new recommendation object with Tab completion reveals an algorithm attribute as the first entry.
End of explanation
"""
algorithm = TruncatedSVD()
algorithm.number_of_factors = 24
algorithm.binarize = False
recommendation = recommendation.using(algorithm)
recommendation.algorithm
"""
Explanation: This is the default algorithm.
IMPORTANT: If we wanted a different algorithm, say truncated SVD, we don't simply set it, but we call the method using() instead, like so:
End of explanation
"""
algorithm.has_data
"""
Explanation: No need to first attach data to the algorithm. The recommender does that for us.
End of explanation
"""
recommendation.baseline
"""
Explanation: Next up is the baseline attribute. As perhaps expected, it tells us that our Baseline algorithm is part of The Recommender.
End of explanation
"""
recommendation.baseline = Baseline()
recommendation.baseline
"""
Explanation: We need it in order to make recommendations also to new customers, who do not have a purchase history yet. As opposed to the algorithm, the baseline can simply be set as expected.
End of explanation
"""
recommendation = recommendation.keeping_old
"""
Explanation: Finally we have a set of attributes starting with keeping_old. It tells the recommender not to filter out articles already purchased by the customer we are making a recommendation for but, on the contrary, to allow recommending them back to him/her if the algorithm says we should. To dial in this behavior of The Recommender we call the attribute in a manner similar to the using() method.
End of explanation
"""
recommendation.only_new
"""
Explanation: If we wanted to know whether or not only new articles will be recommended (as opposed to also articles that a given customer already bought), we simply inspect the only_new attribute.
End of explanation
"""
recommendation = recommendation.pruning_old
recommendation.only_new
"""
Explanation: Evidently, it is now False. Finally, if we wanted to change the behavior of The Recommender to recommending only new articles, thus discarding already bought articles, we invoke the remaining attribute pruning_old like so:
End of explanation
"""
recommendation = RecoBasedOn(data).using(algorithm).pruning_old
"""
Explanation: NOTE: You may wonder why the method using() and the attributes keeping_old as well as pruning_old are called in a somewhat odd fashion. The idea behind this is that you can chain all these calls together in a single, elegant line of code that almost reads like a sentence in natural language.
End of explanation
"""
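The pattern at work in the chained call above is a fluent interface: methods and properties return the object itself so calls can be strung together. A minimal hypothetical sketch of the mechanism (not bestPy's actual implementation):

```python
class Reco:
    """Toy fluent-interface sketch: using() and the *_old properties return self."""

    def __init__(self, data):
        self.data = data
        self.algorithm = 'default'
        self.only_new = False

    def using(self, algorithm):
        self.algorithm = algorithm
        return self                    # returning self enables chaining

    @property
    def pruning_old(self):
        self.only_new = True
        return self

    @property
    def keeping_old(self):
        self.only_new = False
        return self

reco = Reco('data').using('SVD').pruning_old
print(reco.algorithm, reco.only_new)   # SVD True
```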
customer = '4' # Now a string ID
top_six = recommendation.for_one(customer, 6)
for article in top_six:
print(article)
"""
Explanation: And that's it with the parameters.
Making a recommendation for a target customer
Surely you have already realized that The Recommender also has a for_one() method, just like our algorithms. Indeed, it also provides recommendations for a given customer but, this time, in a much more convenient form. Specifically, it
+ accepts a customer ID rather than the internally used integer index as argument;
+ sorts the articles by their score and returns only the top-most hits;
+ allows us to specify how many of these we want to have;
+ returns actual article IDs rather than just their internally used integer indices.
More specifically, it returns a Python generator, which needs to be consumed to actually access the recommended article IDs, like so:
End of explanation
"""
all_articles = recommendation.for_one(customer, 8300)
"""
Explanation: And, voilà, your recommendation. Again, obvious misuse, like asking for more recommendations than there are articles, is discreetly corrected. Try, for instance, the following request
End of explanation
"""
newbie = 'new customer'
top_three = recommendation.for_one(newbie, 3)
for article in top_three:
print(article)
"""
Explanation: and all you get is an entry in the logfile.
[WARNING ]: Requested 8300 recommendations but only 8255 available. Returning all 8255. (recommender| ...
Thanks to the baseline, handling new customers is no problem.
End of explanation
"""
algorithm.number_of_factors = 10
top_six = recommendation.for_one(customer, 6)
for article in top_six:
print(article)
"""
Explanation: Provided you set the logging level to 20 (meaning INFO), you will be notified of this feat with the message:
[INFO ]: Unknown target user. Defaulting to baseline recommendation. (recommender|__cold_start)
Tweaking the algorithm
It is important to note that, if you wanted to change the parameters of the algorithm and get a new recommendation based on these new parameters, you can simply do so. You do not need to instantiate a new RecoBasedOn object, nor do you need to re-attach the changed algorithm to an existing RecoBasedOn instance. Nothing of that sort. Simply do
End of explanation
"""
|
aldian/tensorflow | tensorflow/lite/g3doc/performance/post_training_float16_quant.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
from tensorflow import keras
import numpy as np
import pathlib
tf.float16
"""
Explanation: Post-training float16 quantization
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_float16_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_float16_quant.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_float16_quant.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/lite/g3doc/performance/post_training_float16_quant.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
TensorFlow Lite now supports
converting weights to 16-bit floating point values during model conversion from TensorFlow to TensorFlow Lite's flat buffer format. This results in a 2x reduction in model size. Some hardware, like GPUs, can compute natively in this reduced precision arithmetic, realizing a speedup over traditional floating point execution. The TensorFlow Lite GPU delegate can be configured to run in this way. However, a model converted to float16 weights can still run on the CPU without additional modification: the float16 weights are upsampled to float32 prior to the first inference. This permits a significant reduction in model size in exchange for a minimal impact on latency and accuracy.
In this tutorial, you train an MNIST model from scratch, check its accuracy in TensorFlow, and then convert the model into a TensorFlow Lite flatbuffer
with float16 quantization. Finally, check the accuracy of the converted model and compare it to the original float32 model.
Build an MNIST model
Setup
End of explanation
"""
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=1,
validation_data=(test_images, test_labels)
)
"""
Explanation: Train and export the model
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
"""
Explanation: For the example, you trained the model for just a single epoch, so it only trains to ~96% accuracy.
Convert to a TensorFlow Lite model
Using the Python TFLiteConverter, you can now convert the trained model into a TensorFlow Lite model.
Now load the model using the TFLiteConverter:
End of explanation
"""
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
"""
Explanation: Write it out to a .tflite file:
End of explanation
"""
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
"""
Explanation: To instead quantize the model to float16 on export, first set the optimizations flag to use default optimizations. Then specify that float16 is the supported type on the target platform:
End of explanation
"""
tflite_fp16_model = converter.convert()
tflite_model_fp16_file = tflite_models_dir/"mnist_model_quant_f16.tflite"
tflite_model_fp16_file.write_bytes(tflite_fp16_model)
"""
Explanation: Finally, convert the model as usual. Note that, by default, the converted model will still use float inputs and outputs for invocation convenience.
End of explanation
"""
!ls -lh {tflite_models_dir}
"""
Explanation: Note how the resulting file is approximately 1/2 the size.
End of explanation
"""
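The halving noted above is easy to verify at the array level with NumPy. The small sketch below (an illustration independent of the TFLite converter) also shows the round-trip error introduced by float16's 10-bit mantissa:

```python
import numpy as np

rng = np.random.default_rng(0)
weights32 = rng.standard_normal(1000).astype(np.float32)
weights16 = weights32.astype(np.float16)

print(weights32.nbytes, "->", weights16.nbytes)  # 4000 -> 2000: half the storage
# Round-trip error from the reduced 10-bit mantissa stays small for
# weight-sized values:
max_err = float(np.max(np.abs(weights32 - weights16.astype(np.float32))))
print(max_err)
```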
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
interpreter_fp16 = tf.lite.Interpreter(model_path=str(tflite_model_fp16_file))
interpreter_fp16.allocate_tensors()
"""
Explanation: Run the TensorFlow Lite models
Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter.
Load the model into the interpreters
End of explanation
"""
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
interpreter.set_tensor(input_index, test_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
import matplotlib.pylab as plt
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter_fp16.get_input_details()[0]["index"]
output_index = interpreter_fp16.get_output_details()[0]["index"]
interpreter_fp16.set_tensor(input_index, test_image)
interpreter_fp16.invoke()
predictions = interpreter_fp16.get_tensor(output_index)
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
"""
Explanation: Test the models on one image
End of explanation
"""
# A helper function to evaluate the TF Lite model using "test" dataset.
def evaluate_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for test_image in test_images:
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
# Compare prediction results with ground truth labels to calculate accuracy.
accurate_count = 0
for index in range(len(prediction_digits)):
if prediction_digits[index] == test_labels[index]:
accurate_count += 1
accuracy = accurate_count * 1.0 / len(prediction_digits)
return accuracy
print(evaluate_model(interpreter))
"""
Explanation: Evaluate the models
End of explanation
"""
# NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite
# doesn't have super optimized server CPU kernels. For this reason this may be
# slower than the above float interpreter. But for mobile CPUs, considerable
# speedup can be observed.
print(evaluate_model(interpreter_fp16))
"""
Explanation: Repeat the evaluation on the float16 quantized model to obtain:
End of explanation
"""
|
halimacc/CS231n-assignments | assignment2/BatchNormalization.ipynb | unlicense | # As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
"""
Explanation: Batch Normalization
One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].
The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.
The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.
It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.
[3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift", ICML 2015.
End of explanation
"""
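The mechanics described above can be sketched in a few lines of NumPy. This is an illustrative version for intuition, not the batchnorm_forward you will implement in cs231n/layers.py:

```python
import numpy as np

def batchnorm_forward_sketch(x, gamma, beta, running_mean, running_var,
                             mode='train', momentum=0.9, eps=1e-5):
    """Minimal batch normalization forward pass (train and test modes)."""
    if mode == 'train':
        mu = x.mean(axis=0)                    # per-feature minibatch mean
        var = x.var(axis=0)                    # per-feature minibatch variance
        x_hat = (x - mu) / np.sqrt(var + eps)  # normalize
        # exponentially decaying running averages, used at test time
        running_mean = momentum * running_mean + (1 - momentum) * mu
        running_var = momentum * running_var + (1 - momentum) * var
    else:
        x_hat = (x - running_mean) / np.sqrt(running_var + eps)
    out = gamma * x_hat + beta                 # learnable scale and shift
    return out, running_mean, running_var

x = np.random.randn(200, 3) * 5 + 12
out, rm, rv = batchnorm_forward_sketch(x, np.ones(3), np.zeros(3),
                                       np.zeros(3), np.ones(3))
print(out.mean(axis=0))  # close to 0
print(out.std(axis=0))   # close to 1
```

With gamma=1 and beta=0, the output is normalized to zero mean and unit variance, which is exactly what the checks below verify.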
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before batch normalization:')
print(' means: ', a.mean(axis=0))
print(' stds: ', a.std(axis=0))
# Means should be close to zero and stds close to one
print('After batch normalization (gamma=1, beta=0)')
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print(' mean: ', a_norm.mean(axis=0))
print(' std: ', a_norm.std(axis=0))
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print('After batch normalization (nontrivial gamma, beta)')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in range(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
"""
Explanation: Batch normalization: Forward
In the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation.
End of explanation
"""
# Gradient check batchnorm backward pass
np.random.seed(231)
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)
db_num = eval_numerical_gradient_array(fb, beta.copy(), dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
"""
Explanation: Batch Normalization: backward
Now implement the backward pass for batch normalization in the function batchnorm_backward.
To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.
Once you have finished, run the following to numerically check your backward pass.
End of explanation
"""
np.random.seed(231)
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print('dx difference: ', rel_error(dx1, dx2))
print('dgamma difference: ', rel_error(dgamma1, dgamma2))
print('dbeta difference: ', rel_error(dbeta1, dbeta2))
print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))
"""
Explanation: Batch Normalization: alternative backward (OPTIONAL, +3 points extra credit)
In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.
Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.
NOTE: This part of the assignment is entirely optional, but we will reward 3 points of extra credit if you can complete it.
End of explanation
"""
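As a reference point for checking your own derivation, here is a sketch of the kind of closed-form expression the simplification yields. This is illustrative NumPy, not the assignment's batchnorm_backward_alt, and the cache layout (x_hat, gamma, std) is an assumption:

```python
import numpy as np

def batchnorm_backward_alt_sketch(dout, x_hat, gamma, std):
    """Simplified batchnorm backward pass.

    x_hat: normalized inputs from the forward pass, shape (N, D)
    std:   per-feature standard deviation sqrt(var + eps), shape (D,)
    """
    N = dout.shape[0]
    dbeta = dout.sum(axis=0)
    dgamma = (dout * x_hat).sum(axis=0)
    dx = (gamma / (N * std)) * (N * dout - dbeta - x_hat * dgamma)
    return dx, dgamma, dbeta

# Sanity checks that follow from the formula: the gradient w.r.t. x sums to
# zero over the batch and is orthogonal to x_hat in each feature column.
N, D = 50, 4
x = np.random.randn(N, D)
x_hat = (x - x.mean(0)) / x.std(0)
dx, dgamma, dbeta = batchnorm_backward_alt_sketch(
    np.random.randn(N, D), x_hat, np.ones(D), x.std(0))
print(np.allclose(dx.sum(axis=0), 0))            # True
print(np.allclose((dx * x_hat).sum(axis=0), 0))  # True
```

These two invariants are cheap structural checks you can apply to any candidate implementation before running the full numerical gradient check.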
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
use_batchnorm=True)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
if reg == 0: print()
"""
Explanation: Fully Connected Nets with Batch Normalization
Now that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization.
Concretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.
HINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.
End of explanation
"""
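The hint about a helper layer amounts to composing three forward functions and keeping each sub-cache so the backward pass can unwind in reverse order. A sketch of the pattern, using minimal stand-ins rather than the assignment's implementations:

```python
import numpy as np

# Minimal stand-ins (not the assignment's functions) to show the "sandwich"
# helper the hint describes: affine -> batchnorm -> ReLU.
def affine_fwd(x, w, b):
    return x @ w + b, (x, w)

def bn_fwd(a, gamma, beta, eps=1e-5):
    a_hat = (a - a.mean(axis=0)) / np.sqrt(a.var(axis=0) + eps)
    return gamma * a_hat + beta, a_hat

def relu_fwd(a):
    return np.maximum(0, a), a

def affine_bn_relu_fwd(x, w, b, gamma, beta):
    a, fc_cache = affine_fwd(x, w, b)
    an, bn_cache = bn_fwd(a, gamma, beta)
    out, relu_cache = relu_fwd(an)
    return out, (fc_cache, bn_cache, relu_cache)

x = np.random.randn(8, 5)
w, b = np.random.randn(5, 3), np.zeros(3)
out, cache = affine_bn_relu_fwd(x, w, b, np.ones(3), np.zeros(3))
print(out.shape)         # (8, 3)
print((out >= 0).all())  # True: ReLU output is non-negative
```

The matching backward helper simply unpacks the cache tuple and calls the three backward functions in the opposite order.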
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
bn_solver.train()
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
solver.train()
"""
Explanation: Batchnorm for deep networks
Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
End of explanation
"""
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Run the following to visualize the results from the two networks trained above. You should find that using batch normalization helps the network to converge much faster.
End of explanation
"""
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gca().set_ylim(1.0, 3.5)
plt.gcf().set_size_inches(10, 15)
plt.show()
"""
Explanation: Batch normalization and initialization
We will now run a small experiment to study the interaction of batch normalization and weight initialization.
The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
End of explanation
"""
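The intuition behind this experiment: without normalization, a poorly chosen weight scale shrinks (or blows up) activations multiplicatively with depth, while batch normalization restores unit variance at every layer. A toy illustration (not the assignment code; the layer sizes and the 1e-2 scale are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((100, 50))
h_plain, h_bn = x, x
for _ in range(8):                            # eight layers, small weight scale
    W = rng.standard_normal((50, 50)) * 1e-2
    h_plain = np.maximum(0, h_plain @ W)      # no normalization
    h = h_bn @ W
    h = (h - h.mean(0)) / (h.std(0) + 1e-5)   # batch normalization
    h_bn = np.maximum(0, h)

print(h_plain.std())  # collapses toward 0, so gradients vanish
print(h_bn.std())     # stays on the order of 1
```

This is why the baseline network is so sensitive to the weight scale while the batchnorm network trains across a wide range of scales.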
|
tensorflow/docs-l10n | site/ja/lite/performance/post_training_integer_quant.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
import numpy as np
assert float(tf.__version__[:3]) >= 2.3
"""
Explanation: Post-training integer quantization
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_integer_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/lite/performance/post_training_integer_quant.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/lite/performance/post_training_integer_quant.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/lite/performance/post_training_integer_quant.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
Integer quantization is an optimization strategy that converts 32-bit floating-point numbers (such as weights and activation outputs) to the nearest 8-bit fixed-point numbers. This yields a smaller model and faster inference, which is valuable for low-power devices such as microcontrollers. This data format is also required by integer-only accelerators such as the Edge TPU.
In this tutorial, you train an MNIST model from scratch, convert it into a TensorFlow Lite file, and quantize it using post-training quantization. Finally, you check the accuracy of the converted model and compare it to the original float model.
You actually have several options as to how much you want to quantize a model. Other strategies may leave some data in floating point, but this tutorial performs "full integer quantization," which converts all weights and activation outputs into 8-bit integer data.
For more about the various quantization strategies, read about TensorFlow Lite model optimization.
Setup
In order to quantize both the input and output tensors, you need to use APIs added in TensorFlow r2.3:
End of explanation
"""
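Under the hood, full integer quantization maps each real value to an 8-bit integer through a scale and zero point, so that real ≈ scale * (q - zero_point). Here is a hand-rolled sketch of that affine mapping (illustrative only; the converter computes these parameters per tensor for you):

```python
import numpy as np

def quantize(x, num_bits=8):
    """Affine-quantize a float array to uint8 using a scale and zero point."""
    qmin, qmax = 0, 2**num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

x = np.linspace(-1.0, 1.0, 11).astype(np.float32)
q, scale, zp = quantize(x)
x_hat = dequantize(q, scale, zp)
print(q.dtype)                             # uint8
print(np.max(np.abs(x - x_hat)) <= scale)  # error within one quantization step
```

This per-tensor scale and zero point are the same "quantization" parameters you will read back from the interpreter's input details later in the tutorial.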
# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images.astype(np.float32) / 255.0
test_images = test_images.astype(np.float32) / 255.0
# Define the model architecture
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=5,
validation_data=(test_images, test_labels)
)
"""
Explanation: Build an MNIST model
Build a simple model to classify digits from the MNIST dataset.
This training won't take long because you train the model for just 5 epochs, which reaches about 98% accuracy.
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
"""
Explanation: Convert to a TensorFlow Lite model
Next, convert the trained model to TensorFlow Lite format using the TFLiteConverter API, and apply varying degrees of quantization.
Beware that some versions of quantization leave some of the data in float format. So the following sections show each option with increasing amounts of quantization, until we get a model that's entirely int8 or uint8 data. (Notice the code in each section is duplicated so you can see all the quantization steps for each option.)
First, here's a converted model with no quantization:
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model_quant = converter.convert()
"""
Explanation: It's now a TensorFlow Lite model, but it's still using 32-bit float values for all parameter data.
Convert using dynamic range quantization
Now let's enable the default optimizations flag to quantize all fixed parameters (such as weights):
End of explanation
"""
def representative_data_gen():
for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
# Model has only one input so each data point has one element.
yield [input_value]
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
tflite_model_quant = converter.convert()
"""
Explanation: The model is now a bit smaller with quantized weights, but the other variable data is still in float format.
Convert using float fallback quantization
To quantize the variable data (such as model input/output and intermediates between layers), you need to provide a RepresentativeDataset. This is a generator function that provides a set of input data that's large enough to represent typical values. It allows the converter to estimate a dynamic range for all the variable data. (The dataset does not need to be unique compared to the training or evaluation dataset.) To support multiple inputs, each representative data point is a list, and elements in the list are fed to the model according to their indices:
End of explanation
"""
interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)
input_type = interpreter.get_input_details()[0]['dtype']
print('input: ', input_type)
output_type = interpreter.get_output_details()[0]['dtype']
print('output: ', output_type)
"""
Explanation: Now all weights and variable data are quantized, and the model is significantly smaller compared to the original TensorFlow Lite model.
However, to maintain compatibility with applications that traditionally use float model input and output tensors, the TensorFlow Lite Converter leaves the model input and output tensors in float:
End of explanation
"""
def representative_data_gen():
for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
yield [input_value]
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Ensure that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Set the input and output tensors to uint8 (APIs added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_quant = converter.convert()
"""
Explanation: That's usually good for compatibility, but it won't be compatible with devices that perform only integer-based operations, such as the Edge TPU.
Additionally, the above process may leave an operation in float format if TensorFlow Lite doesn't include a quantized implementation for that operation. This strategy allows conversion to complete so you have a smaller and more efficient model, but again, it won't be compatible with integer-only hardware. (All ops in this MNIST model have a quantized implementation.)
So to ensure an end-to-end integer-only model, you need a couple more parameters.
Convert using integer-only quantization
To quantize the input and output tensors, and make the converter throw an error if it encounters an operation it cannot quantize, convert the model again with some additional parameters:
End of explanation
"""
interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)
input_type = interpreter.get_input_details()[0]['dtype']
print('input: ', input_type)
output_type = interpreter.get_output_details()[0]['dtype']
print('output: ', output_type)
"""
Explanation: The internal quantization remains the same as above, but you can see the input and output tensors are now integer format:
End of explanation
"""
import pathlib
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
# Save the unquantized/float model:
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
# Save the quantized model:
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_model_quant)
"""
Explanation: Now you have an integer quantized model that uses integer data for the model's input and output tensors, so it's compatible with integer-only hardware such as the Edge TPU.
Save the models as files
You'll need a .tflite file to deploy your model on other devices. So let's save the converted models to files and then load them when we run inferences below:
End of explanation
"""
# Helper function to run inference on a TFLite model
def run_tflite_model(tflite_file, test_image_indices):
global test_images
# Initialize the interpreter
interpreter = tf.lite.Interpreter(model_path=str(tflite_file))
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
predictions = np.zeros((len(test_image_indices),), dtype=int)
for i, test_image_index in enumerate(test_image_indices):
test_image = test_images[test_image_index]
test_label = test_labels[test_image_index]
# Check if the input type is quantized, then rescale input data to uint8
if input_details['dtype'] == np.uint8:
input_scale, input_zero_point = input_details["quantization"]
test_image = test_image / input_scale + input_zero_point
test_image = np.expand_dims(test_image, axis=0).astype(input_details["dtype"])
interpreter.set_tensor(input_details["index"], test_image)
interpreter.invoke()
output = interpreter.get_tensor(output_details["index"])[0]
predictions[i] = output.argmax()
return predictions
"""
Explanation: Run the TensorFlow Lite models
Now let's run inferences using the TensorFlow Lite Interpreter to compare the model accuracies.
First, we need a function that runs inference with a given model and images, and then returns the predictions:
End of explanation
"""
import matplotlib.pylab as plt
# Change this to test a different image
test_image_index = 1
## Helper function to test the models on one image
def test_model(tflite_file, test_image_index, model_type):
global test_labels
predictions = run_tflite_model(tflite_file, [test_image_index])
plt.imshow(test_images[test_image_index])
template = model_type + " Model \n True:{true}, Predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[test_image_index]), predict=str(predictions[0])))
plt.grid(False)
"""
Explanation: Test the models on one image
Now let's compare the performance of the float model and the quantized model:
tflite_model_file is the original TensorFlow Lite model with float data.
tflite_model_quant_file is the last model we converted using integer-only quantization (it uses uint8 data for input and output).
Let's create another function to print our predictions:
End of explanation
"""
test_model(tflite_model_file, test_image_index, model_type="Float")
"""
Explanation: Now test the float model:
End of explanation
"""
test_model(tflite_model_quant_file, test_image_index, model_type="Quantized")
"""
Explanation: And test the quantized model (which uses uint8 data):
End of explanation
"""
# Helper function to evaluate a TFLite model on all images
def evaluate_model(tflite_file, model_type):
global test_images
global test_labels
test_image_indices = range(test_images.shape[0])
predictions = run_tflite_model(tflite_file, test_image_indices)
accuracy = (np.sum(test_labels== predictions) * 100) / len(test_images)
print('%s model accuracy is %.4f%% (Number of test samples=%d)' % (
model_type, accuracy, len(test_images)))
"""
Explanation: Evaluate the models
Let's run both models on all the test images we loaded at the beginning of this tutorial:
End of explanation
"""
evaluate_model(tflite_model_file, model_type="Float")
"""
Explanation: Evaluate the float model:
End of explanation
"""
evaluate_model(tflite_model_quant_file, model_type="Quantized")
"""
Explanation: Repeat the evaluation on the fully quantized model that uses uint8 data:
End of explanation
"""
|
AllenDowney/ThinkBayes2 | examples/hockey.ipynb | mit | # If we're running on Colab, install PyMC and ArviZ
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install pymc3
!pip install arviz
# PyMC generates a FutureWarning we don't need to deal with yet
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import seaborn as sns
def plot_hist(sample, **options):
"""Plot a histogram of goals.
sample: sequence of values
"""
sns.histplot(sample, stat='probability', discrete=True,
alpha=0.5, **options)
def plot_kde(sample, **options):
"""Plot a distribution using KDE.
sample: sequence of values
"""
sns.kdeplot(sample, cut=0, **options)
import matplotlib.pyplot as plt
def legend(**options):
"""Make a legend only if there are labels."""
handles, labels = plt.gca().get_legend_handles_labels()
if len(labels):
plt.legend(**options)
def decorate_heads(ylabel='Probability'):
"""Decorate the axes."""
plt.xlabel('Number of heads (k)')
plt.ylabel(ylabel)
plt.title('Distribution of heads')
legend()
def decorate_proportion(ylabel='Likelihood'):
"""Decorate the axes."""
plt.xlabel('Proportion of heads (x)')
plt.ylabel(ylabel)
plt.title('Distribution of proportion')
legend()
"""
Explanation: Grid algorithm for the gamma-Poisson hierarchical model
Copyright 2021 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
"""
def decorate_rate(ylabel='Density'):
"""Decorate the axes."""
plt.xlabel('Goals per game (mu)')
plt.ylabel(ylabel)
plt.title('Distribution of goal scoring rate')
legend()
"""
Explanation: Poisson model
Let's look at one more example of a hierarchical model, based on the hockey example we started with.
Remember that we used a gamma distribution to represent the distribution of the rate parameters, mu.
I chose the parameters of that distribution, alpha and beta, based on results from previous NHL playoff games.
An alternative is to use a hierarchical model, where alpha and beta are hyperparameters. Then we can use data to estimate the distribution of mu for each team, and to estimate the distribution of mu across teams.
Of course, now we need a prior distribution for alpha and beta. A common choice is the half Cauchy distribution (see Gelman), but on advice of counsel, I'm going with exponential.
Here's a model that generates the prior distribution of mu.
End of explanation
"""
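Before fitting with PyMC, it can help to see what this prior implies by ancestral sampling: draw alpha and beta from the exponential hyperpriors, mu from the gamma prior, and goals from a Poisson. Here is a sketch using SciPy (illustrative only; the variable names are mine, not part of the PyMC model below):

```python
import numpy as np
from scipy.stats import expon, gamma, poisson

rng = np.random.default_rng(17)
n = 10000

# hyperparameters: exponential hyperpriors with rate 1
alpha = expon(scale=1).rvs(n, random_state=rng)
beta = expon(scale=1).rvs(n, random_state=rng)

# goal-scoring rate for one team, drawn from the gamma prior
mu = gamma(a=alpha, scale=1/beta).rvs(random_state=rng)

# prior predictive distribution: goals in one game
goals = poisson(mu).rvs(random_state=rng)

print((mu >= 0).all())     # rates are non-negative
print((goals >= 0).all())  # goal counts are non-negative integers
print(np.median(goals))    # a typical prior predictive goal count
```

Note that the prior predictive distribution is heavy-tailed (small draws of beta produce very large rates), which is one reason to check the prior before trusting the posterior.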
import pymc3 as pm
with pm.Model() as model7:
alpha = pm.Exponential('alpha', lam=1)
beta = pm.Exponential('beta', lam=1)
mu_A = pm.Gamma('mu_A', alpha, beta)
goals_A = pm.Poisson('goals_A', mu_A, observed=[5,3])
trace7 = pm.sample(500)
pm.model_to_graphviz(model7)
"""
Explanation: One team
Here's the hierarchical version of the model for one team.
End of explanation
"""
import arviz as az
with model7:
az.plot_trace(trace7)
"""
Explanation: We can use traceplot to review the results and do some visual diagnostics.
End of explanation
"""
mu_A = trace7['mu_A']
mu_B = trace7['mu_B']
mu_B.mean(), mu_A.mean()
plot_kde(mu_A, label='mu_A posterior')
plot_kde(mu_B, label='mu_B posterior')
decorate_rate('Density')
az.plot_posterior(trace7, var_names=['alpha', 'beta'])
import numpy as np
from scipy.stats import expon
from empiricaldist import Pmf
qs = np.linspace(0.1, 10, 100)
ps = expon(scale=1).pdf(qs)
prior_alpha = Pmf(ps, qs)
prior_alpha.index.name = 'alpha'
prior_alpha.shape
prior_alpha.plot()
qs = np.linspace(0.1, 9, 90)
ps = expon(scale=1).pdf(qs)
prior_beta = Pmf(ps, qs)
prior_beta.index.name = 'beta'
prior_beta.shape
prior_alpha.plot()
prior_beta.plot()
from utils import make_joint, normalize
PA, PB = np.meshgrid(prior_alpha.ps, prior_beta.ps, indexing='ij')
hyper = PA * PB
hyper.shape
import pandas as pd
from utils import plot_contour
plot_contour(pd.DataFrame(hyper))
from scipy.stats import poisson
mus = np.linspace(0.1, 12, 80)
ks = 5, 3
M, K = np.meshgrid(mus, ks)
like_mu = poisson(M).pmf(K).prod(axis=0)
like_mu.shape
A, B, M = np.meshgrid(prior_alpha.qs, prior_beta.qs, mus, indexing='ij')
A.shape
A.mean()
B.mean()
M.mean()
from scipy.stats import gamma
prior = gamma(A, B).pdf(M) * hyper.reshape((100, 90, 1))
prior.shape
prior_a = Pmf(prior.sum(axis=(1,2)), prior_alpha.qs)
prior_a.plot()
prior_b = Pmf(prior.sum(axis=(0,2)), prior_beta.qs)
prior_b.plot()
prior_m = Pmf(prior.sum(axis=(0,1)), mus)
prior_m.plot()
posterior = prior * like_mu
posterior /= posterior.sum()
ps = posterior.sum(axis=(0, 1))
marginal_mu = Pmf(ps, mus)
marginal_mu.plot()
ps = posterior.sum(axis=(1, 2))
marginal_alpha = Pmf(ps, prior_alpha.qs)
marginal_alpha.plot()
marginal_alpha.mean()
ps = posterior.sum(axis=(0, 2))
marginal_beta = Pmf(ps, prior_beta.qs)
marginal_beta.plot()
marginal_beta.mean()
import pymc3 as pm
with pm.Model() as model7:
alpha = pm.Exponential('alpha', lam=1)
beta = pm.Exponential('beta', lam=1)
mu_A = pm.Gamma('mu_A', alpha, beta)
mu_B = pm.Gamma('mu_B', alpha, beta)
goals_A = pm.Poisson('goals_A', mu_A, observed=[5,3])
goals_B = pm.Poisson('goals_B', mu_B, observed=[1,1])
trace7 = pm.sample(500)
"""
Explanation: Here are the posterior distributions for the two teams.
End of explanation
"""
data = dict(BOS13 = [3, 1, 2, 5, 1, 2],
CHI13 = [3, 1, 0, 5, 3, 3],
NYR14 = [2, 4, 0, 2, 2],
LAK14 = [2, 4, 3, 1, 2],
TBL15 = [1, 4, 3, 1, 1, 0],
CHI15 = [2, 3, 2, 2, 2, 2],
SJS16 = [2, 1, 2, 1, 4, 1],
PIT16 = [3, 1, 2, 3, 2, 3],
NSH17 = [3, 1, 5, 4, 0, 0],
PIT17 = [5, 4, 1, 1, 6, 2],
VGK18 = [6, 2, 1, 2, 3],
WSH18 = [4, 3, 3, 6, 4],
STL19 = [2, 2, 2, 4, 2, 1, 4],
BOS19 = [4, 2, 7, 2, 1, 5, 1],
DAL20 = [4, 2, 2, 4, 2, 0],
TBL20 = [1, 3, 5, 4, 2, 2],
MTL21 = [1, 1, 3, 2, 0],
TBL21 = [5, 3, 6, 2, 1])
"""
Explanation: More background
But let's take advantage of more information. Here are the results from the most recent Stanley Cup finals.
For games that went into overtime, I included only goals scored during regulation play.
End of explanation
"""
with pm.Model() as model8:
alpha = pm.Exponential('alpha', lam=1)
beta = pm.Exponential('beta', lam=1)
mu = dict()
goals = dict()
for name, observed in data.items():
mu[name] = pm.Gamma('mu_'+name, alpha, beta)
goals[name] = pm.Poisson(name, mu[name], observed=observed)
trace8 = pm.sample(500)
"""
Explanation: Here's how we can get the data into the model.
End of explanation
"""
viz = pm.model_to_graphviz(model8)
viz
# How to save a graphviz image
#viz.format = 'png'
#viz.view(filename='model8', directory='./')
"""
Explanation: Here's the graph representation of the model:
End of explanation
"""
with model8:
az.plot_trace(trace8, var_names=['alpha', 'beta'])
"""
Explanation: And here are the results.
End of explanation
"""
sample_post_alpha = trace8['alpha']
sample_post_alpha.mean()
sample_post_beta = trace8['beta']
sample_post_beta.mean()
"""
Explanation: Here are the posterior means for the hyperparameters.
End of explanation
"""
sample_post_mu_TBL21 = trace8['mu_TBL21']
sample_post_mu_MTL21 = trace8['mu_MTL21']
sample_post_mu_TBL21.mean(), sample_post_mu_MTL21.mean()
plot_kde(sample_post_mu_TBL21, label='TBL21')
plot_kde(sample_post_mu_MTL21, label='MTL21')
decorate_rate()
"""
Explanation: So, in case you were wondering how I chose the parameters of the gamma distribution in the first notebook...
That's right -- time travel.
End of explanation
"""
(sample_post_mu_TBL21 > sample_post_mu_MTL21).mean()
"""
Explanation: Here's the updated chance that Tampa Bay is the better team.
End of explanation
"""
|
spulido99/NetworksAnalysis | alejogm0520/Repaso_Estadistico.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stas
%matplotlib inline
x = np.arange(0.01, 1, 0.01)
values = [(0.5, 0.5),(5, 1),(1, 3),(2, 2),(2, 5)]
for i, j in values:
y = stas.beta.pdf(x,i,j)
plt.plot(x,y)
plt.show()
"""
Explanation: Network Analysis: Statistical Review
Exercise 1: Reproduce this plot in Python.
End of explanation
"""
md = []
mn = []
mo = []
kur = []
ske = []
for i, j in values:
r = stas.beta.rvs(i, j, size=1000000)
md.append(np.median(r))
mn.append(np.mean(r))
mo.append(stas.mode(r)[0][0])
kur.append(stas.kurtosis(r))
ske.append(stas.skew(r))
fig = plt.figure()
ax1 = fig.add_subplot(151)
ax1.set_title('Median')
ax1.plot(md)
ax2 = fig.add_subplot(152)
ax2.set_title('Mean')
ax2.plot(mn)
ax3 = fig.add_subplot(153)
ax3.set_title('Mode')
ax3.plot(mo)
ax4 = fig.add_subplot(154)
ax4.set_title('Kurtosis')
ax4.plot(kur)
ax5 = fig.add_subplot(155)
ax5.set_title('Skewness')
ax5.plot(ske)
axes = [ax1, ax2, ax3, ax4, ax5]
for i in axes:
plt.setp(i.get_xticklabels(), visible=False)
plt.setp(i.get_yticklabels(), visible=False)
"""
Explanation: Exercise 2: Using random draws from beta distributions, compute and plot their descriptive statistics.
End of explanation
"""
|
tensorflow/graphics | tensorflow_graphics/notebooks/6dof_alignment.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 Google LLC.
End of explanation
"""
!pip install tensorflow_graphics
"""
Explanation: Object pose alignment
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/6dof_alignment.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/6dof_alignment.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Precisely estimating the pose of objects is fundamental to many industries. For instance, in augmented and virtual reality, it allows users to modify the state of some variable by interacting with these objects (e.g. volume controlled by a mug on the user's desk).
This notebook illustrates how to use Tensorflow Graphics to estimate the rotation and translation of known 3D objects.
This capability is illustrated by two different demos:
1. A machine learning demo illustrating how to train a simple neural network capable of precisely estimating the rotation and translation of a given object with respect to a reference pose.
2. A mathematical optimization demo that takes a different approach to the problem and does not use machine learning.
Note: The easiest way to use this tutorial is as a Colab notebook, which allows you to dive in with no setup.
Setup & Imports
If Tensorflow Graphics is not installed on your system, the following cell can install the Tensorflow Graphics package for you.
End of explanation
"""
import time
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow_graphics.geometry.transformation import quaternion
from tensorflow_graphics.math import vector
from tensorflow_graphics.notebooks import threejs_visualization
from tensorflow_graphics.notebooks.resources import tfg_simplified_logo
tf.compat.v1.enable_v2_behavior()
# Loads the Tensorflow Graphics simplified logo.
vertices = tfg_simplified_logo.mesh['vertices'].astype(np.float32)
faces = tfg_simplified_logo.mesh['faces']
num_vertices = vertices.shape[0]
"""
Explanation: Now that Tensorflow Graphics is installed, let's import everything needed to run the demos contained in this notebook.
End of explanation
"""
# Constructs the model.
model = keras.Sequential()
model.add(layers.Flatten(input_shape=(num_vertices, 3)))
model.add(layers.Dense(64, activation=tf.nn.tanh))
model.add(layers.Dense(64, activation=tf.nn.relu))
model.add(layers.Dense(7))
def pose_estimation_loss(y_true, y_pred):
"""Pose estimation loss used for training.
This loss measures the average of squared distance between some vertices
of the mesh in 'rest pose' and the transformed mesh to which the predicted
inverse pose is applied. Comparing this loss with a regular L2 loss on the
quaternion and translation values is left as exercise to the interested
reader.
Args:
y_true: The ground-truth value.
y_pred: The prediction we want to evaluate the loss for.
Returns:
A scalar value containing the loss described in the description above.
"""
# y_true.shape : (batch, 7)
y_true_q, y_true_t = tf.split(y_true, (4, 3), axis=-1)
# y_pred.shape : (batch, 7)
y_pred_q, y_pred_t = tf.split(y_pred, (4, 3), axis=-1)
# vertices.shape: (num_vertices, 3)
# corners.shape:(num_vertices, 1, 3)
corners = tf.expand_dims(vertices, axis=1)
# transformed_corners.shape: (num_vertices, batch, 3)
# q and t shapes get pre-padded with 1's following standard broadcasting rules.
transformed_corners = quaternion.rotate(corners, y_pred_q) + y_pred_t
# recovered_corners.shape: (num_vertices, batch, 3)
recovered_corners = quaternion.rotate(transformed_corners - y_true_t,
quaternion.inverse(y_true_q))
# vertex_error.shape: (num_vertices, batch)
vertex_error = tf.reduce_sum((recovered_corners - corners)**2, axis=-1)
return tf.reduce_mean(vertex_error)
optimizer = keras.optimizers.Adam()
model.compile(loss=pose_estimation_loss, optimizer=optimizer)
model.summary()
"""
Explanation: 1. Machine Learning
Model definition
Given the 3D position of all the vertices of a known mesh, we would like a network that is capable of predicting the rotation parametrized by a quaternion (4 dimensional vector), and translation (3 dimensional vector) of this mesh with respect to a reference pose. Let's now create a very simple 3-layer fully connected network, and a loss for the task. Note that this model is very simple and definitely not optimal, which is out of scope for this notebook.
End of explanation
"""
def generate_training_data(num_samples):
# random_angles.shape: (num_samples, 3)
random_angles = np.random.uniform(-np.pi, np.pi,
(num_samples, 3)).astype(np.float32)
# random_quaternion.shape: (num_samples, 4)
random_quaternion = quaternion.from_euler(random_angles)
# random_translation.shape: (num_samples, 3)
random_translation = np.random.uniform(-2.0, 2.0,
(num_samples, 3)).astype(np.float32)
# data.shape : (num_samples, num_vertices, 3)
data = quaternion.rotate(vertices[tf.newaxis, :, :],
random_quaternion[:, tf.newaxis, :]
) + random_translation[:, tf.newaxis, :]
# target.shape : (num_samples, 4+3)
target = tf.concat((random_quaternion, random_translation), axis=-1)
return np.array(data), np.array(target)
num_samples = 10000
data, target = generate_training_data(num_samples)
print(data.shape) # (num_samples, num_vertices, 3): the vertices
print(target.shape) # (num_samples, 4+3): the quaternion and translation
"""
Explanation: Data generation
Now that we have a model defined, we need data to train it. For each sample in the training set, a random 3D rotation and 3D translation are sampled and applied to the vertices of our object. Each training sample consists of all the transformed vertices and the inverse rotation and translation that would allow to revert the rotation and translation applied to the sample.
End of explanation
"""
# Callback allowing to display the progression of the training task.
class ProgressTracker(keras.callbacks.Callback):
def __init__(self, num_epochs, step=5):
self.num_epochs = num_epochs
self.current_epoch = 0.
self.step = step
self.last_percentage_report = 0
def on_epoch_end(self, batch, logs={}):
self.current_epoch += 1.
training_percentage = int(self.current_epoch * 100.0 / self.num_epochs)
if training_percentage - self.last_percentage_report >= self.step:
print('Training ' + str(
training_percentage) + '% complete. Training loss: ' + str(
logs.get('loss')) + ' | Validation loss: ' + str(
logs.get('val_loss')))
self.last_percentage_report = training_percentage
reduce_lr_callback = keras.callbacks.ReduceLROnPlateau(
monitor='val_loss',
factor=0.5,
patience=10,
verbose=0,
mode='auto',
min_delta=0.0001,
cooldown=0,
min_lr=0)
# google internal 1
# Everything is now in place to train.
EPOCHS = 100
pt = ProgressTracker(EPOCHS)
history = model.fit(
data,
target,
epochs=EPOCHS,
validation_split=0.2,
verbose=0,
batch_size=32,
callbacks=[reduce_lr_callback, pt])
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylim([0, 1])
plt.legend(['loss', 'val loss'], loc='upper left')
plt.xlabel('Train epoch')
_ = plt.ylabel('Error [mean square distance]')
"""
Explanation: Training
At this point, everything is in place to start training the neural network!
End of explanation
"""
def transform_points(target_points, quaternion_variable, translation_variable):
return quaternion.rotate(target_points,
quaternion_variable) + translation_variable
"""
Explanation: Testing
The network is now trained and ready to use!
The displayed results consist of two images. The first image contains the object in 'rest pose' (pastel lemon color) and the rotated and translated object (pastel honeydew color). This effectively allows us to observe how different the two configurations are. The second image also shows the object in rest pose, but this time the transformation predicted by our trained neural network is applied to the rotated and translated version. Hopefully, the two objects are now in a very similar pose.
Note: press play multiple times to sample different test cases. You will notice that sometimes the scale of the object is off. This comes from the fact that quaternions can encode scale. Using a quaternion of unit norm would leave the scale of the result unchanged. We let the interested reader experiment with adding this constraint either in the network architecture, or in the loss function.
Start with a helper function to apply a quaternion and a translation:
End of explanation
"""
class Viewer(object):
def __init__(self, my_vertices):
my_vertices = np.asarray(my_vertices)
context = threejs_visualization.build_context()
light1 = context.THREE.PointLight.new_object(0x808080)
light1.position.set(10., 10., 10.)
light2 = context.THREE.AmbientLight.new_object(0x808080)
lights = (light1, light2)
material = context.THREE.MeshLambertMaterial.new_object({
'color': 0xfffacd,
})
material_deformed = context.THREE.MeshLambertMaterial.new_object({
'color': 0xf0fff0,
})
camera = threejs_visualization.build_perspective_camera(
field_of_view=30, position=(10.0, 10.0, 10.0))
mesh = {'vertices': vertices, 'faces': faces, 'material': material}
transformed_mesh = {
'vertices': my_vertices,
'faces': faces,
'material': material_deformed
}
geometries = threejs_visualization.triangular_mesh_renderer(
[mesh, transformed_mesh],
lights=lights,
camera=camera,
width=400,
height=400)
self.geometries = geometries
def update(self, transformed_points):
self.geometries[1].getAttribute('position').copyArray(
transformed_points.numpy().ravel().tolist())
self.geometries[1].getAttribute('position').needsUpdate = True
"""
Explanation: Define a threejs viewer for the transformed shape:
End of explanation
"""
def get_random_transform():
# Forms a random translation
with tf.name_scope('translation_variable'):
random_translation = tf.Variable(
np.random.uniform(-2.0, 2.0, (3,)), dtype=tf.float32)
# Forms a random quaternion
hi = np.pi
lo = -hi
random_angles = np.random.uniform(lo, hi, (3,)).astype(np.float32)
with tf.name_scope('rotation_variable'):
random_quaternion = tf.Variable(quaternion.from_euler(random_angles))
return random_quaternion, random_translation
"""
Explanation: Define a random rotation and translation:
End of explanation
"""
random_quaternion, random_translation = get_random_transform()
initial_orientation = transform_points(vertices, random_quaternion,
random_translation).numpy()
viewer = Viewer(initial_orientation)
predicted_transformation = model.predict(initial_orientation[tf.newaxis, :, :])
predicted_inverse_q = quaternion.inverse(predicted_transformation[0, 0:4])
predicted_inverse_t = -predicted_transformation[0, 4:]
predicted_aligned = quaternion.rotate(initial_orientation + predicted_inverse_t,
predicted_inverse_q)
viewer = Viewer(predicted_aligned)
"""
Explanation: Run the model to predict the transformation parameters, and visualize the result:
End of explanation
"""
def loss(target_points, quaternion_variable, translation_variable):
transformed_points = transform_points(target_points, quaternion_variable,
translation_variable)
error = (vertices - transformed_points) / num_vertices
return vector.dot(error, error)
def gradient_loss(target_points, quaternion, translation):
with tf.GradientTape() as tape:
loss_value = loss(target_points, quaternion, translation)
return tape.gradient(loss_value, [quaternion, translation])
"""
Explanation: 2. Mathematical optimization
Here the problem is tackled using mathematical optimization, which is another traditional way to approach the problem of object pose estimation. Given correspondences between the object in 'rest pose' (pastel lemon color) and its rotated and translated counterpart (pastel honeydew color), the problem can be formulated as a minimization problem. The loss function can for instance be defined as the sum of Euclidean distances between the corresponding points, using the current estimate of the rotation and translation of the transformed object. One can then compute the derivative of this loss function with respect to the rotation and translation parameters, and follow the gradient direction until convergence. The following cell closely follows that procedure, and uses gradient descent to align the two objects. It is worth noting that although the results are good, there are more efficient ways to solve this specific problem. The interested reader is referred to the Kabsch algorithm for further details.
Note: press play multiple times to sample different test cases.
Define the loss and gradient functions:
End of explanation
"""
learning_rate = 0.05
with tf.name_scope('optimization'):
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate)
"""
Explanation: Create the optimizer.
End of explanation
"""
random_quaternion, random_translation = get_random_transform()
transformed_points = transform_points(vertices, random_quaternion,
random_translation)
viewer = Viewer(transformed_points)
nb_iterations = 100
for it in range(nb_iterations):
gradients_loss = gradient_loss(vertices, random_quaternion,
random_translation)
optimizer.apply_gradients(
zip(gradients_loss, (random_quaternion, random_translation)))
transformed_points = transform_points(vertices, random_quaternion,
random_translation)
viewer.update(transformed_points)
time.sleep(0.1)
"""
Explanation: Initialize the random transformation, run the optimization and animate the result.
End of explanation
"""
|
openmrslab/suspect | docs/notebooks/tut05_hsvd.ipynb | mit | import suspect
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: 5. Water suppression with HSVD
In this tutorial we will take a look at water suppression. Water is present in the body at concentrations thousands of times higher than any of the metabolites we are interested in, so any spectrum where the water signal is not suppressed is completely dominated by the water peak centred at 4.7ppm.
The standard way to suppress the water is to use the CHESS (chemical shift selective) technique. This preparation method uses a frequency selective excitation pulse to excite only the spins in the region of the water peak, followed by a "crusher" gradient pulse which dephases the excited spins. Once they have lost their phase coherence, these spins will no longer contribute any signal during the acquisition. In practice, the basic CHESS technique has been superseded by first WET and now VAPOR, which use a sequence of CHESS style pulses with varying flip angles and delays to achieve greater tolerance to B1 variation, and generally improved performance.
However, in many cases, this prospective water suppression is insufficient to completely remove the water signal. Regions with poor shim, such as tumour, may have a water peak which partially extends outside the suppression region, and patient movement can have the same effect. Furthermore, many users choose to reduce the effect of water suppression by allowing a small amount of T1 recovery between the CHESS and the acquisition sequence. This approach, often referred to as "weak" water suppression, gives a large residual water peak which is useful during processing, for calculating channel weights and correcting frequency shifts. This peak must then be removed in a further processing step.
The methods available for removing the residual water peak generally involve some form of bandpass filter which removes the signal from a particular region of the spectrum. For this tutorial we are going to focus on the most widely used technique, HSVD (Hankel Singular Value Decomposition).
As usual, we start by importing our dependencies:
End of explanation
"""
data = suspect.io.load_rda("/home/jovyan/suspect/tests/test_data/siemens/SVS_30.rda")
import scipy.signal
window = scipy.signal.tukey(data.np * 2)[data.np:]
data = window * data
"""
Explanation: For this tutorial, we will be using the SVS_30.rda data included in the Suspect test data collection, so that we don't have to worry about channel combination or frequency correction here. However, we will repeat the apodisation step described in Tutorial 1.
End of explanation
"""
plt.plot(data.spectrum().real)
"""
Explanation: If we plot the raw spectrum we immediately see that the water peak completely dominates all the other peaks in the spectrum:
End of explanation
"""
components = suspect.processing.water_suppression.hsvd(data, 20)
"""
Explanation: HSVD works by approximating the FID with a set of exponentially decaying components:
End of explanation
"""
print(components[0])
"""
Explanation: The second argument to the function is the number of components to generate. This will depend on both the number of peaks in the spectrum and how Lorentzian they are. Too few components will not be able to correctly describe the signal but too many can lead to over-fitting. Around 20 is typically a good number for most cases, but do experiment with your own data to understand better exactly what is going on.
The hsvd() function returns a list of dicts, with each dict containing information about one exponential component:
End of explanation
"""
hsvd_fid = suspect.processing.water_suppression.construct_fid(components, data.time_axis())
hsvd_fid = data.inherit(hsvd_fid)
# plot two axes, one of the whole spectrum and one focussing on the metabolite region
f, (ax1, ax2) = plt.subplots(2)
ax2.set_xlim([550, 850])
ax2.set_ylim([0, 2e5])
for ax in (ax1, ax2):
ax.plot(data.spectrum().real)
ax.plot(hsvd_fid.spectrum().real)
"""
Explanation: This components list can be turned back into an FID using the construct_fid() function, which takes a list of components to be used and a reference time axis. In this example we also set the resulting FID to inherit() all the MRS properties from the original data object.
End of explanation
"""
# plot two axes, one of the whole dataset and one of the metabolite region
f, (ax1, ax2) = plt.subplots(2)
ax2.set_xlim([550, 850])
ax2.set_ylim([-1e5, 5e5])
for component in components:
component_fid = suspect.processing.water_suppression.construct_fid([component], data.time_axis())
component_fid = data.inherit(component_fid)
ax1.plot(component_fid.spectrum().real)
ax2.plot(component_fid.spectrum().real)
"""
Explanation: Overall we see that the hsvd_fid is a very good approximation to the original data signal, although some of the smaller peaks such as the Glx region are not fitted. To get a better idea of what is going on, we can reconstruct each component individually and plot the whole set together.
End of explanation
"""
water_components = [component for component in components if component["frequency"] < 70 or component["fwhm"] > 100]
"""
Explanation: What we find is that the major metabolite peaks each have one component associated with them, while the water peak has several. This is because it is not a perfect Lorentzian - to adequately describe the peak shape requires a series of progressively smaller correction terms to modify the main peak. Typically only the water peak gets multiple components as the others are too small, and the total number of components is limited.
The next step is to separate out the components making up the water signal from the metabolite components, which we do using a frequency cut-off. We can do this rather neatly using a Python list comprehension:
End of explanation
"""
water_fid = suspect.processing.water_suppression.construct_fid(water_components, data.time_axis())
water_fid = data.inherit(water_fid)
dry_fid = data - water_fid
# plot two axes, one of the whole spectrum and one focussing on the metabolite region
f, (ax1, ax2) = plt.subplots(2)
ax2.set_xlim([550, 850])
ax2.set_ylim([-1e5, 2e5])
for ax in (ax1, ax2):
ax.plot(data.spectrum().real)
ax.plot(water_fid.spectrum().real)
ax.plot(dry_fid.spectrum().real)
"""
Explanation: In this case we have selected all the components with frequencies below 70Hz. The best value for this cut-off frequency will depend strongly on your data, and of course on the field strength of the magnet, but values around 70-80Hz are a reasonable starting point for most people at 3T. For our data we don't have any peaks downfield of water so we don't need a negative frequency cut-off.
In addition we have selected the components with a FWHM greater than 100Hz. These very broad components are part of the baseline and it can be helpful to remove them at the same time.
Once we have selected the components we want to remove, we can simply subtract the constructed FIDs from our original data to arrive at the water suppressed spectrum.
End of explanation
"""
|
d00d/quantNotebooks | Notebooks/quantopian_research_public/notebooks/lectures/Futures_Trading_Considerations/notebook.ipynb | unlicense | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from quantopian.research.experimental import continuous_future, history
"""
Explanation: Futures Trading Considerations
by Maxwell Margenot and Delaney Mackenzie
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
In this lecture we will consider some practical implications for trading futures contracts. We will discuss the futures calendar and how it impacts trading as well as how to maintain futures positions across expiries.
End of explanation
"""
contract = symbols('ESH17')
one_day_volume = get_pricing(contract, start_date='2017-02-01', end_date='2017-02-01', frequency='minute', fields='volume')
one_day_volume.tz_convert('EST').plot()
plt.title('Trading Volume for 2/01/2017 by Minute')
plt.xlabel('Minute')
plt.ylabel('Volume');
"""
Explanation: Futures Calendar
An important feature of futures markets is the calendar used to trade them. Futures markets are open long after the equity markets close, though the effective periods within which you can trade with large amounts of liquidity tend to overlap. The specific high points in volume for futures contracts vary greatly from underlying to underlying. Despite this, the majority of the volume for many contracts typically falls within normal EST market hours.
Let's have a look at a day in the life of the S&P 500 Index E-Mini futures contract that was deliverable in March 2017.
End of explanation
"""
contract = 'FCH17'
one_day_volume = get_pricing(contract, start_date='2017-02-01', end_date='2017-02-01', frequency='minute', fields='volume')
one_day_volume.tz_convert('EST').plot()
plt.title('Trading Volume for 2/01/2017 by Minute')
plt.xlabel('Minute')
plt.ylabel('Volume');
"""
Explanation: This is one of the most liquid futures contracts and we see a significant increase in volume traded during normal equity trading hours. These hours can be even tighter for less liquid commodities. For example, let's look at how Feeder Cattle trades during the same time period on the same day.
End of explanation
"""
contracts = symbols(['ESH16', 'ESM16', 'ESU16'])
rolling_volume = get_pricing(contracts, start_date='2015-12-15', end_date='2016-09-15', fields='volume')
rolling_volume.plot()
plt.title('Volume for Different Expiries of same Underlying')
plt.xlabel('Date')
plt.ylabel('Volume');
"""
Explanation: If we are trying to trade multiple different underlyings with futures contracts in the same algorithm, we need to be conscious of their volume relative to each other. All trading algorithms depend on orders being executed as determined by their calculations. Some contracts are so illiquid that entering into even the smallest position will amount to becoming a large part of the volume for a given day. This could heavily impact slippage.
Unsurprisingly, volume will also vary for different expiries on the same underlying. The front month contract, the contract closest to delivery, has the largest amount of volume. As we draw closer to delivery the front month's volume is eclipsed by the next expiry date as participants in the market close out their positions and roll them forward.
End of explanation
"""
maximum_any_day_volume = rolling_volume.max(axis=1)
maximum_any_day_volume.name = 'Volume Roll-over'
rolling_volume.plot()
maximum_any_day_volume.plot(color='black', linestyle='--')
plt.title('Volume for Front Contract with Volume-based Rollover')
plt.xlabel('Date')
plt.ylabel('Volume')
plt.legend();
"""
Explanation: Futures Positions Have Inherent Leverage
In entering a futures position, you place down a certain amount of capital in a margin account. This margin account is exposed to the fluctuating futures price of the underlying that you have chosen. This creates a levered position off the bat as the value that you are exposed to (before delivery) in the account is different from the overall value that is on the hook at delivery.
This internal leverage is determined on a contract to contract basis due to the different multipliers involved for different underlyings.
Roll-over
If we want to maintain a futures position across expiries, we need to "roll over" our contracts. This is the practice of switching to the next month's contract after closing your previous holding. The majority of futures positions are either closed or rolled over before ever reaching delivery.
The futures contract with expiry closest to the current date is known as the "front month" contract. It usually enjoys the smallest spread between futures and spot prices as well as the most liquidity. In contrast, the futures contract that has the furthest expiration date in a set of contracts is known as the "back month" contract. Contracts that are further out have significantly less liquidity, though they still may contain vague information about future prices anticipated by the market.
By rolling forward our positions, we can maintain a hedge on a particular underlying or simply maintain a position across time. Without rolling contracts over we would be required to develop trading strategies that work only on a short timescale.
This graph illustrates the volume that results from rolling over contracts on the first date where the front month contract's volume is eclipsed by the following month on the same underlying.
End of explanation
"""
continuous_corn = continuous_future('CN', offset=0, roll='calendar', adjustment='mul')
"""
Explanation: In this particular instance, our goal is to ride the wave of liquidity provided by the front contract.
Continuous Futures
With futures, it is difficult to get a continuous series of historical prices. Each time that you roll forward to a new contract, the price series incurs a jump. This jump negatively impacts our analysis of prices as the discontinuity introduces shocks in our return and volatility measures that may not be representative of the actual changes in the underlying.
We use the continuous futures objects as part of the platform to get a continuous chain of historical data for futures contracts, taking these concerns into account. There are several ways to adjust for the cost of carry when looking at historical data, though people differ on what they prefer. The general consensus is that an adjustment should be done.
We can have a continuous future "roll" forward either based on calendar dates or based on the shift in volume from the front month contract to the next. The ContinuousFuture object is not a tradable asset, however. It is an API construct that abstracts the chain of consecutive contracts for the same underlying. It maintains an ongoing reference to the active contract in the chain and makes it easier to maintain a dynamic reference to contracts that you want to order, as well as to get historical series of data, all based on your chosen method of adjustment and your desired roll method.
End of explanation
"""
continuous_corn_price = history(continuous_corn, start_date='2009-01-01', end_date='2016-01-01', fields='price')
continuous_corn_price.plot();
"""
Explanation: The above defined continuous future has an offset of $0$, indicating that we want it to reference the front month contract at each roll. Incrementing the offset causes the continuous future to instead monitor the contract that is displaced from the front month by that number.
Adjustments
We can define a continuous future to use multiplicative adjustments, additive adjustments, or no adjustments ('mul', 'add', None). The cost of carry that is realized as we shift from one contract to the next can be seen as the shock from a dividend payment. Adjustments are important to frame past prices relative to today's prices by including the cost of carry. Additive adjustments close the gaps between contracts by simply taking the differences and propagating them back, while multiplicative adjustments scale previous prices using a ratio to close the gap.
End of explanation
"""
|
moonbury/pythonanywhere | github/MasteringMatplotlib/mmpl-preview.ipynb | gpl-3.0 | import matplotlib
matplotlib.use('nbagg')
%matplotlib inline
"""
Explanation: A Preview
A quick look at some examples indicating the sorts of things that will be examined in later notebooks.
Each notebook will take advantage of the NbAgg backend, and we set that up first:
End of explanation
"""
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import seaborn as sns
import numpy as np
from scipy import stats
import pandas as pd
"""
Explanation: Next we'll do whatever imports we will need for the session:
End of explanation
"""
sns.set_palette("BuPu_d")
sns.set_context("notebook", font_scale=2.0)
"""
Explanation: Joint Plots with Seaborn
As you can see from the import, we're going to use Seaborn for some nice visual presentation. Let's select a palette and color saturation level:
End of explanation
"""
np.random.seed(42424242)
"""
Explanation: We're also going to use NumPy and SciPy to generate some data. Next, let's get some Seaborn defaults set up:
End of explanation
"""
x = stats.gamma(5).rvs(420)
y = stats.gamma(13).rvs(420)
"""
Explanation: Let's create some random data to use for our visualization:
End of explanation
"""
with sns.axes_style("white"):
sns.jointplot(x, y, kind="hex", size=16)
"""
Explanation: And now, let's plot the data:
End of explanation
"""
baseball = pd.read_csv("../data/baseball.csv")
"""
Explanation: Scatter Plot Matrix Graphs with Pandas
Our next preview will be a Pandas teaser displaying a great deal of data in a single plot. The Pandas project ships with some sample data that we can load and view:
End of explanation
"""
[colors.rgb2hex(x) for x in sns.color_palette()]
"""
Explanation: Pandas uses the new "ggplot" style defined in matplotlib. We'd like to override that with a custom style sheet based on the palette we've chosen from Seaborn. Let's get the list of colors from our palette:
End of explanation
"""
plt.style.use('../styles/custom.mplstyle')
data = pd.scatter_matrix(baseball.loc[:,'r':'sb'], figsize=(16, 10))
"""
Explanation: To see how we used some of these colors, take a look at ./styles/custom.mplstyle.
Now let's graph our data, using the custom style that changes the default background color used by pandas (from the 'ggplot' style):
End of explanation
"""
|
fgnt/nara_wpe | examples/WPE_Tensorflow_online.ipynb | mit | channels = 8
sampling_rate = 16000
delay = 3
alpha=0.99
taps = 10
frequency_bins = stft_options['size'] // 2 + 1
"""
Explanation: Example with real audio recordings
The iterations are dropped in contrast to the offline version. To use past observations, the correlation matrix and the correlation vector are calculated recursively with a decaying window. $\alpha$ is the decay factor.
Setup
End of explanation
"""
file_template = 'AMI_WSJ20-Array1-{}_T10c0201.wav'
signal_list = [
sf.read(str(project_root / 'data' / file_template.format(d + 1)))[0]
for d in range(channels)
]
y = np.stack(signal_list, axis=0)
IPython.display.Audio(y[0], rate=sampling_rate)
"""
Explanation: Audio data
End of explanation
"""
Y = stft(y, **stft_options).transpose(1, 2, 0)
T, _, _ = Y.shape
def aquire_framebuffer():
buffer = list(Y[:taps+delay, :, :])
    for t in range(taps+delay, T):
buffer.append(Y[t, :, :])
yield np.array(buffer)
buffer.pop(0)
"""
Explanation: Online buffer
For simplicity the STFT is performed before providing the frames.
Shape: (frames, frequency bins, channels)
frames: K+delay+1
End of explanation
"""
Z_list = []
Q = np.stack([np.identity(channels * taps) for a in range(frequency_bins)])
G = np.zeros((frequency_bins, channels * taps, channels))
with tf.Session() as session:
Y_tf = tf.placeholder(tf.complex128, shape=(taps + delay + 1, frequency_bins, channels))
Q_tf = tf.placeholder(tf.complex128, shape=(frequency_bins, channels * taps, channels * taps))
G_tf = tf.placeholder(tf.complex128, shape=(frequency_bins, channels * taps, channels))
results = online_wpe_step(Y_tf, get_power_online(tf.transpose(Y_tf, (1, 0, 2))), Q_tf, G_tf, alpha=alpha, taps=taps, delay=delay)
for Y_step in tqdm(aquire_framebuffer()):
feed_dict = {Y_tf: Y_step, Q_tf: Q, G_tf: G}
Z, Q, G = session.run(results, feed_dict)
Z_list.append(Z)
Z_stacked = np.stack(Z_list)
z = istft(np.asarray(Z_stacked).transpose(2, 0, 1), size=stft_options['size'], shift=stft_options['shift'])
IPython.display.Audio(z[0], rate=sampling_rate)
"""
Explanation: Non-iterative frame online approach
A frame online example requires that certain state variables are kept from frame to frame: the inverse correlation matrix $\text{R}_{t, f}^{-1}$, which is stored in Q and initialized with an identity matrix, as well as the filter coefficient matrix, which is stored in G and initialized with zeros.
Again for simplicity the ISTFT is applied in Numpy afterwards.
End of explanation
"""
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=(20, 8))
im1 = ax1.imshow(20 * np.log10(np.abs(Y[200:400, :, 0])).T, origin='lower')
ax1.set_xlabel('')
_ = ax1.set_title('reverberated')
im2 = ax2.imshow(20 * np.log10(np.abs(Z_stacked[200:400, :, 0])).T, origin='lower')
_ = ax2.set_title('dereverberated')
cb = fig.colorbar(im1)
"""
Explanation: Power spectrum
Before and after applying WPE.
End of explanation
"""
|
Bastien-Brd/pi-tuner | pitch_detection_from_microphone.ipynb | mit | import sounddevice as sd
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Tutorial for recording a guitar string stroke and detecting its pitch
I use the python library called sounddevice which allows to easily record audio and represent the result as a numpy array.
We will use two different methods for detecting the pitch and compare their results.
For reference, here is the list of frequencies of all 6 strings expected for a well tuned guitar:
String | Frequency | Scientific pitch notation
--- | --- | ---
1 (E) | 329.63 Hz | E4
2 (B) | 246.94 Hz | B3
3 (G) | 196.00 Hz | G3
4 (D) | 146.83 Hz | D3
5 (A) | 110.00 Hz | A2
6 (E) | 82.41 Hz | E2
End of explanation
"""
sd.query_devices()
"""
Explanation: First of all, check the list of available audio devices on the system
I use an external USB sound card called Sound Blaster E1: this is the one we will use here
End of explanation
"""
device = 0 # we use my USB sound card device
duration = 2 # seconds
fs = 44100 # samples by second
"""
Explanation: We define the length we want to record in seconds and the sampling rate to 44100 Hz
End of explanation
"""
myrecording = sd.rec(duration * fs, samplerate=fs, channels=1, device=0)
"""
Explanation: We can now record 2 seconds worth of audio
For this tutorial, I have played the D string of my guitar.
The result is a numpy array we store in the myrecording variable
End of explanation
"""
df = pd.DataFrame(myrecording)
df.loc[25000:30000].plot()
"""
Explanation: Let's plot a section of this array to look at it first
We notice a fairly periodic signal with a clear fundamental frequency, which makes sense since a vibrating guitar string produces an almost purely sinusoidal wave
End of explanation
"""
rec = myrecording.ravel()  # flatten the (n, 1) recording into a 1-D signal
fourier = np.fft.fft(rec)
"""
Explanation: Pitch detection using Fast Fourier Transform
We use numpy to compute the discrete Fourier transform of the signal:
End of explanation
"""
plt.plot(abs(fourier[:len(fourier)//10]))
"""
Explanation: We can visualise a section of the Fourier transform to notice there is a clear fundamental frequency:
End of explanation
"""
f_max_index = np.argmax(abs(fourier[:fourier.size//2]))
freqs = np.fft.fftfreq(len(fourier))
freqs[f_max_index]*fs
"""
Explanation: We find the frequency corresponding to the maximum of this Fourier transform, and calculate the corresponding real frequency by re-multiplying by the sampling rate
End of explanation
"""
rec = myrecording.ravel()
rec = rec[25000:30000]
autocorr = np.correlate(rec, rec, mode='same')
plt.plot(autocorr)
"""
Explanation: This method has detected that my guitar string stroke has a fundamental frequency of 149.94 Hz, which is indeed very close to the expected frequency of the D string of a well-tuned guitar (target is 146.83 Hz)
My guitar was not very well tuned: this indicates I should slightly tune down my 4th string
Using the autocorrelation method for pitch detection
End of explanation
"""
|
julienchastang/unidata-python-workshop | notebooks/AWIPS/Model_Sounding_Data.ipynb | mit | from awips.dataaccess import DataAccessLayer
import matplotlib.tri as mtri
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
from math import exp, log
import numpy as np
from metpy.calc import get_wind_components, lcl, dry_lapse, parcel_profile, dewpoint
from metpy.calc import wind_speed, wind_direction, thermo, vapor_pressure
from metpy.plots import SkewT, Hodograph
from metpy.units import units, concatenate
DataAccessLayer.changeEDEXHost("edex-cloud.unidata.ucar.edu")
request = DataAccessLayer.newDataRequest()
request.setDatatype("modelsounding")
forecastModel = "GFS"
request.addIdentifier("reportType", forecastModel)
request.setParameters("pressure","temperature","specHum","uComp","vComp","omega","cldCvr")
"""
Explanation: The EDEX modelsounding plugin creates 64-level vertical profiles from GFS and ETA (NAM) BUFR products distributed over NOAAport. The requestable parameters are pressure, temperature, specHum, uComp, vComp, omega, cldCvr.
End of explanation
"""
locations = DataAccessLayer.getAvailableLocationNames(request)
locations.sort()
list(locations)
request.setLocationNames("KFRM")
cycles = DataAccessLayer.getAvailableTimes(request, True)
times = DataAccessLayer.getAvailableTimes(request)
try:
fcstRun = DataAccessLayer.getForecastRun(cycles[-1], times)
list(fcstRun)
response = DataAccessLayer.getGeometryData(request,[fcstRun[0]])
except:
print('No times available')
exit
"""
Explanation: Available Locations
End of explanation
"""
tmp,prs,sh = np.array([]),np.array([]),np.array([])
uc,vc,om,cld = np.array([]),np.array([]),np.array([]),np.array([])
for ob in response:
tmp = np.append(tmp,ob.getString(b"temperature"))
prs = np.append(prs,ob.getString(b"pressure"))
sh = np.append(sh,ob.getString(b"specHum"))
uc = np.append(uc,ob.getString(b"uComp"))
vc = np.append(vc,ob.getString(b"vComp"))
om = np.append(om,ob.getString(b"omega"))
cld = np.append(cld,ob.getString(b"cldCvr"))
print("parms = " + str(ob.getParameters()))
print("site = " + str(ob.getLocationName()))
print("geom = " + str(ob.getGeometry()))
print("datetime = " + str(ob.getDataTime()))
print("reftime = " + str(ob.getDataTime().getRefTime()))
print("fcstHour = " + str(ob.getDataTime().getFcstTime()))
print("period = " + str(ob.getDataTime().getValidPeriod()))
"""
Explanation: Model Sounding Parameters
Construct arrays for each parameter to plot (temperature, pressure, moisture (spec. humidity), wind components, and cloud cover)
End of explanation
"""
t = (tmp-273.15) * units.degC
p = prs/100 * units.mbar
u,v = uc*1.94384,vc*1.94384 # m/s to knots
spd = wind_speed(u, v) * units.knots
dir = wind_direction(u, v) * units.deg
rmix = (sh/(1-sh)) *1000 * units('g/kg')
e = vapor_pressure(p, rmix)
td = dewpoint(e)
"""
Explanation: Calculating Dewpoint from Specific Humidity
Because the modelsounding plugin does not return dewpoint values, we must calculate the profile ourselves. Here are three examples of dewpoint calculated from specific humidity, including a manual calculation following NCEP AWIPS/NSHARP.
1) MetPy calculated mixing ratio and vapor pressure
End of explanation
"""
td2 = dewpoint(vapor_pressure(p, sh))
"""
Explanation: 2) metpy calculated assuming spec. humidity = mixing ratio
End of explanation
"""
# new arrays
ntmp = tmp
# where p=pressure(pa), T=temp(C), T0=reference temp(273.16)
rh = 0.263*prs*sh / (np.exp(17.67*ntmp/(ntmp+273.15-29.65)))
vaps = 6.112 * np.exp((17.67 * ntmp) / (ntmp + 243.5))
vapr = rh * vaps / 100
dwpc = np.array(243.5 * (np.log(6.112) - np.log(vapr)) / (np.log(vapr) - np.log(6.112) - 17.67)) * units.degC
"""
Explanation: 3) NCEP AWIPS soundingrequest plugin
based on GEMPAK/NSHARP, from https://github.com/Unidata/awips2-ncep/blob/unidata_16.2.2/edex/gov.noaa.nws.ncep.edex.plugin.soundingrequest/src/gov/noaa/nws/ncep/edex/plugin/soundingrequest/handler/MergeSounding.java#L1783
End of explanation
"""
%matplotlib inline
plt.rcParams['figure.figsize'] = (12, 14)
# Create a skewT plot
skew = SkewT()
# Plot the data
skew.plot(p, t, 'r', linewidth=2)
skew.plot(p, td, 'b', linewidth=2)
skew.plot(p, td2, 'y')
skew.plot(p, dwpc, 'g', linewidth=2)
skew.plot_barbs(p, u, v)
skew.ax.set_ylim(1000, 100)
skew.ax.set_xlim(-40, 60)
plt.title( forecastModel + " " \
+ ob.getLocationName().decode('UTF-8') \
+ "("+ str(ob.getGeometry()) + ")" \
+ ", " + str(ob.getDataTime())
)
# An example of a slanted line at constant T -- in this case the 0 isotherm
l = skew.ax.axvline(0, color='c', linestyle='--', linewidth=2)
# Draw hodograph
ax_hod = inset_axes(skew.ax, '40%', '40%', loc=2)
h = Hodograph(ax_hod, component_range=wind_speed(u, v).max())
h.add_grid(increment=20)
h.plot_colormapped(u, v, spd)
# Show the plot
plt.show()
"""
Explanation: MetPy SkewT and Hodograph
End of explanation
"""
|
zephirefaith/AI_Fall15_Assignments | A1/player_notebook.ipynb | mit | from random import randint
class RandomPlayer():
"""Player that chooses a move randomly."""
def move(self, game, legal_moves, time_left):
if not legal_moves: return (-1,-1)
return legal_moves[randint(0,len(legal_moves)-1)]
"""
Explanation: This is the IPython notebook you should use as a template for your agent. Your task for this assignment is to implement a winning AI for the game of Isolation, as specified in the assignment PDF you have been issued.
The following random agent just selects a move out of the set of legal moves. Note that your agent, when asked for a move, is already provided with the set of moves available to it. This is done for your convenience. If your agent attempts to perform an illegal move, it will lose, so please refrain from doing so. It is also provided with a function that, when invoked, returns the amount of time left for your agent to make its move. If your agent fails to make a move in the allotted time, it will lose.
End of explanation
"""
class HumanPlayer():
"""Player that chooses a move according to
user's input."""
def move(self, game, legal_moves, time_left):
print('\t'.join(['[%d] %s'%(i,str(move)) for i,move in enumerate(legal_moves)] ))
valid_choice = False
while not valid_choice:
try:
index = int(raw_input('Select move index:'))
valid_choice = 0 <= index < len(legal_moves)
if not valid_choice:
print('Illegal move! Try again.')
except ValueError:
print('Invalid index! Try again.')
return legal_moves[index]
"""
Explanation: The following are functions that might be useful to you in developing your agent:
game.get_legal_moves(): Returns a list of legal moves for the active player.
game.get_opponent_moves(): Returns a list of legal moves for the inactive player.
game.forecast_move(move): This returns a new board, whose state is the result of making the move specified on the current board.
game.get_state(): This returns a 2D array containing a copy of the explicit state of the board.
game.is_winner(player): Returns whether your player agent has won.
game.is_opponent_winner(player): Returns whether your player's opponent has won.
game.print_board(): Returns a string representation of the game board. This should be useful for debugging.
End of explanation
"""
class OpenMoveEvalFn():
def score(self, game, maximizing_player):
if maximizing_player:
eval_func = len(game.get_legal_moves())
else:
eval_func = len(game.get_opponent_moves())
return eval_func
"""
Explanation: This is the first part of the assignment you are expected to implement. It is the evaluation function we've been using in class. The score of a specified game state is just the number of moves open to the active player.
End of explanation
"""
class CustomEvalFn():
def score(self, game, maximizing_player):
#maximize your own moves and minimize opponents moves
if maximizing_player:
my_moves = len(game.get_legal_moves())
opponents_moves = len(game.get_opponent_moves())
else:
opponents_moves = len(game.get_legal_moves())
my_moves = len(game.get_opponent_moves())
#maximize this
eval_func = my_moves - opponents_moves
return eval_func
"""
Explanation: The following is a custom evaluation function that acts however you think it should. This is not required, but it is highly encouraged if you want to build the best AI possible.
End of explanation
"""
class CustomPlayer():
def __init__(self, search_depth=15, eval_fn=CustomEvalFn(), threshold = 50):
self.eval_fn = eval_fn
self.search_depth = search_depth
self.thresh = threshold
def move(self, game, legal_moves, time_left):
if game.move_count<2:
return legal_moves[randint(0,len(legal_moves)-1)]
best_move = self.alphabeta_id(game, time_left, self.search_depth)
# you will eventually replace minimax with alpha-beta
return best_move
def utility(self, game, maximizing_player):
if game.is_winner(self):
return 50000
if game.is_opponent_winner(self):
return -50000
return self.eval_fn.score(game, maximizing_player)
def minimax(self, game, time_left, depth=float("inf"), maximizing_player=True):
#terminal states
#if maximizing_player and realize opponent has won
#if minimizing player and realize max has won
if (maximizing_player and game.is_opponent_winner(self)) or (not maximizing_player and game.is_winner(self)):
return ((-1,-1), self.utility(game, maximizing_player))
#if realize that time_left isn't much, we have an arbitrary threshold of 50 ms here/max depth is reached
if depth==0 or time_left()<self.thresh:
return ((-1,-1), self.utility(game, maximizing_player))
#get actions
actions = game.get_legal_moves()
best_move = (-1,-1)
#if maximizing player, get minimax value for all actions and choose the move which has maxMIN value
if maximizing_player:
best_val = float("-inf")
for a in actions:
_, score_of_action = self.minimax(game.forecast_move(a), time_left, depth-1, False);
best_move, best_val = (a, score_of_action) if score_of_action>=best_val else (best_move, best_val)
if time_left()<self.thresh:
return (best_move, best_val)
#if minimizing player find minimax value for all actions and choose one which has minMAX value
else:
best_val = float("inf")
for a in actions:
_, score_of_action = self.minimax(game.forecast_move(a), time_left, depth-1, True);
best_move, best_val = (a, score_of_action) if score_of_action<=best_val else (best_move, best_val)
if time_left()<self.thresh:
return (best_move, best_val)
return (best_move, best_val)
def alphabeta(self, game, time_left, depth=float("inf"), alpha=float("-inf"), beta=float("inf"), maximizing_player=True):
#terminal states
#if maximizing_player and realize opponent has won
#if minimizing player and realize max has won
if (maximizing_player and game.is_opponent_winner(self)) or (not maximizing_player and game.is_winner(self)):
return ((-1,-1), self.utility(game, maximizing_player))
#if realize that time_left isn't much, we have an arbitrary threshold of 75 ms here/max depth is reached
if depth==0 or time_left()<self.thresh:
return ((-1,-1), self.utility(game, maximizing_player))
#get actions
actions = game.get_legal_moves()
best_move = (-1,-1)
#if maximizing player, get alphabeta value for all actions and choose the move
#which has max value plus prune those which do not adher to alphabeta limits
if maximizing_player:
best_val = float("-inf")
for a in actions:
_, score_of_action = self.alphabeta(game.forecast_move(a), time_left, depth-1, alpha, beta, False);
if score_of_action>best_val:
best_move = a
best_val = score_of_action
if best_val>beta:
return a, best_val
alpha = max(alpha, best_val)
if alpha>=beta:
return a, alpha
if time_left()<self.thresh:
return (best_move, best_val)
else:
best_val = float("inf")
for a in actions:
_, score_of_action = self.alphabeta(game.forecast_move(a), time_left, depth-1, alpha, beta, True);
if score_of_action<best_val:
best_move = a
best_val = score_of_action
if best_val<alpha:
return a, best_val
beta = min(beta, best_val)
if beta<=alpha:
return a, beta
if time_left()<self.thresh:
return (best_move, best_val)
return (best_move, best_val)
def alphabeta_id(self, game, time_left, max_depth):
for depth in range(1, max_depth):
best_move, _ = self.alphabeta(game, time_left, depth)
if time_left()<self.thresh:
break
return (best_move)
"""
Explanation: Implement a Player below that chooses a move using your evaluation function and a depth-limited minimax algorithm with alpha-beta pruning. You must finish and test this player to make sure it properly uses minimax and alpha-beta to return a good move in less than 500 milliseconds.
End of explanation
"""
"""Example test you can run
to make sure your AI does better
than random."""
from isolation import Board
if __name__ == "__main__":
r = RandomPlayer()
h = CustomPlayer()
game = Board(h,r)
game.play_isolation(500)
"""Example test you can run
to make sure your basic evaluation
function works."""
from isolation import Board
if __name__ == "__main__":
sample_board = Board(RandomPlayer(),RandomPlayer())
# setting up the board as though we've been playing
sample_board.move_count = 3
sample_board.__active_player__ = 0 # player 1 = 0, player 2 = 1
# 1st board = 16 moves
sample_board.__board_state__ = [
[0,2,0,0,0],
[0,0,0,0,0],
[0,0,1,0,0],
[0,0,0,0,0],
[0,0,0,0,0]]
sample_board.__last_player_move__ = [(2,2),(0,1)]
# player 1 should have 16 moves available,
# so board gets a score of 16
h = OpenMoveEvalFn()
    print('This board has a score of %s.'%(h.score(sample_board, True)))
"""
Explanation: The following are some basic tests you can use to sanity-check your code. You will also be provided with a test server to which you will be able to submit your agents later this week. Good luck!
End of explanation
"""
|
DawesLab/LabNotebooks | Beam Splitter Leonhart.ipynb | mit | from qutip import *
from numpy import sqrt, pi, cos, sin, exp, array, real, imag, linspace
from numpy import math
factorial = math.factorial
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Beam Splitter QM
AMCDawes
Based on paper (and book) by Ulf Leonhardt
arXiv:quant-ph/0305007v2 4 Jul 2003
Some cells refer to equation numbers from this paper.
End of explanation
"""
# define max dimension, enlarge as needed
N = 7
a1 = tensor(destroy(N),identity(N))
a2 = tensor(identity(N),destroy(N))
# quantum Stokes, eqn 4.6:
Lt = 1/2*(a1.dag()*a1 + a2.dag()*a2)
Lx = 1/2*(a1.dag()*a2 + a2.dag()*a1)
Ly = 1j/2*(a2.dag()*a1 - a1.dag()*a2)
Lz = 1/2*(a1.dag()*a1 - a2.dag()*a2)
# the number operators, just to have:
n1 = a1.dag()*a1
n2 = a2.dag()*a2
# Note, can use this approach or form a tensor with n and identity.
def tp(n,m):
"""Create a two photon ket state |n,m>
implemented using QuTiP tensor"""
return tensor(fock(N,n),fock(N,m))
def Bmatrix(Φ,Θ,Ψ,Λ):
"""This is the classical matrix given in 4.4, mainly to confirm parameter choice"""
a = exp(1j*Λ/2)
b = array([[exp(1j*Ψ/2),0],[0,exp(-1j*Ψ/2)]])
c = array([[cos(Θ/2),sin(Θ/2)],[-sin(Θ/2),cos(Θ/2)]])
d = array([[exp(1j*Φ/2),0],[0,exp(-1j*Φ/2)]])
return a * b @ c @ d
# Generate the perfect 50/50 BS as in 4.23
# to check the angles:
Bmatrix(0,pi/2,0,0)
def B(Φ,Θ,Ψ,Λ):
"""Create the B operator given in 4.12"""
B = (-1j*Φ*Lz).expm() * (-1j*Θ*Ly).expm() * (-1j*Ψ*Lx).expm() * (-1j*Λ*Lt).expm()
return B
# The B operator for a 50/50 BS
bs = B(0,pi/2,0,0)
# Apply it to a |1,1> input state, a la Hong, Ou, Mandel:
out = bs.dag() * tp(1,1)
# Compare to the example expression from Ulf in 4.24 line 1:
out1 = 1/2 * (a1.dag() - a2.dag()) * (a1.dag() + a2.dag()) * tp(0,0)
# the right answer in any case (4.24 line 2)
testout = (1/sqrt(2)*(tp(2,0) - tp(0,2)))
testout == out1 == out
"""
Explanation: Define the annihilation operators for photons 1 and 2, and the quantum Stokes parameters:
End of explanation
"""
psi1 = bs.dag() * tp(1,0)
psi1 == (1/sqrt(2)*(tp(1,0) - tp(0,1))) ## sanity check
"""
Explanation: These all agree: so far so good.
Next, try a single photon input:
End of explanation
"""
def phaseshifter(phi):
    """Shift the phase of arm 2 by phi: applies exp(1j*phi*n2),
    which advances each Fock component |m> of mode 2 by exp(1j*m*phi)."""
    return (1j * phi * n2).expm()
def stateplot(state):
plt.plot(real(state.full()),"bo")
plt.plot(imag(state.full()),"ro")
philist = linspace(0,2*pi,20)
input2 = [phaseshifter(phi)*psi1 for phi in philist]  # apply the loop's phase, not a fixed one
out2 = [bs.dag() * state for state in input2]
plt.plot(philist,expect(n1,out2),label="$n_1$")
plt.plot(philist,expect(n2,out2),label="$n_2$")
plt.legend()
stateplot(test3[6])
expect(n2,out2)
expect(n1,out2)
stateplot(psi3[0])
# Explore the phase-shifted version of the 1 photon output by hard-coding the state
psi3 = [1/sqrt(2)*(exp(1j*phi)*tp(0,1) + tp(1,0)) for phi in philist]
test3 = [bs.dag()*psi for psi in psi3]
plt.plot(philist,expect(n1,test3),label="$n_1$")
plt.plot(philist,expect(n2,test3),label="$n_2$")
plt.legend()
(tp(1,0)).full()
"""
Explanation: Note: this is different from van Enk (NotesBS.pdf) by the sign and a factor of i. It agrees with several other papers in the reference folder. TODO: sort this out.
Now to let the outputs interfere again after a phase shift in path 2:
End of explanation
"""
|
otavio-r-filho/AIND-Deep_Learning_Notebooks | autoencoder/Simple_Autoencoder_Solution.ipynb | mit | %matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
"""
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
"""
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
"""
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
"""
# Size of the encoding layer (the hidden layer)
encoding_dim = 32
image_size = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from
decoded = tf.nn.sigmoid(logits, name='output')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
"""
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
"""
# Create the session
sess = tf.Session()
"""
Explanation: Training
End of explanation
"""
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
"""
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here; we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
"""
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
"""
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation
"""
|
yashdeeph709/Algorithms | PythonBootCamp/Complete-Python-Bootcamp-master/Milestone Project 1 - Advanced Solution.ipynb | apache-2.0 | # Specifically for the iPython Notebook environment for clearing output.
from IPython.display import clear_output
# Global variables
board = [' '] * 10
game_state = True
announce = ''
"""
Explanation: Tic Tac Toe
This is the solution for the Milestone Project! A two player game made within a Jupyter Notebook. Feel free to download the notebook to understand how it works!
First some imports we'll need to use for displaying output and set the global variables
End of explanation
"""
# Note: Game will ignore the 0 index
def reset_board():
global board,game_state
board = [' '] * 10
game_state = True
"""
Explanation: Next make a function that will reset the board, in this case we'll store values as a list.
End of explanation
"""
def display_board():
''' This function prints out the board so the numpad can be used as a reference '''
# Clear current cell output
clear_output()
# Print board
print " "+board[7]+" |"+board[8]+" | "+board[9]+" "
print "------------"
print " "+board[4]+" |"+board[5]+" | "+board[6]+" "
print "------------"
print " "+board[1]+" |"+board[2]+" | "+board[3]+" "
"""
Explanation: Now create a function to display the board, I'll use the num pad as the board reference.
Note: Should probably just make board and player classes later....
End of explanation
"""
def win_check(board, player):
''' Check Horizontals,Verticals, and Diagonals for a win '''
if (board[7] == board[8] == board[9] == player) or \
(board[4] == board[5] == board[6] == player) or \
(board[1] == board[2] == board[3] == player) or \
(board[7] == board[4] == board[1] == player) or \
(board[8] == board[5] == board[2] == player) or \
(board[9] == board[6] == board[3] == player) or \
(board[1] == board[5] == board[9] == player) or \
(board[3] == board[5] == board[7] == player):
return True
else:
return False
"""
Explanation: Define a function to check for a win by comparing inputs in the board list. Note: Maybe should just have a list of winning combos and cycle through them?
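Following the note above, a sketch of the combo-list version (win_check_combos is a hypothetical name, not part of the original notebook; indices follow the numpad layout used by this board):

```python
WIN_COMBOS = [(7, 8, 9), (4, 5, 6), (1, 2, 3),   # rows
              (7, 4, 1), (8, 5, 2), (9, 6, 3),   # columns
              (1, 5, 9), (3, 5, 7)]              # diagonals

def win_check_combos(board, player):
    # a win is any combo whose three cells all hold the player's mark
    return any(all(board[i] == player for i in combo) for combo in WIN_COMBOS)

# quick check: X fills the top row (numpad cells 7, 8, 9)
b = [' '] * 10
for i in (7, 8, 9):
    b[i] = 'X'
assert win_check_combos(b, 'X')
assert not win_check_combos(b, 'O')
```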
End of explanation
"""
def full_board_check(board):
''' Function to check if any remaining blanks are in the board '''
if " " in board[1:]:
return False
else:
return True
"""
Explanation: Define function to check if the board is already full in case of a tie. (This is straightforward with our board stored as a list)
Just remember index 0 is always empty.
End of explanation
"""
def ask_player(mark):
''' Asks player where to place X or O mark, checks validity '''
global board
req = 'Choose where to place your: ' + mark
while True:
try:
choice = int(raw_input(req))
except ValueError:
print("Sorry, please input a number between 1-9.")
continue
if choice not in range(1,10):
print("Sorry, please input a number between 1-9.")
continue
if board[choice] == " ":
board[choice] = mark
break
else:
print "That space isn't empty!"
continue
"""
Explanation: Now define a function to get player input and do various checks on it.
End of explanation
"""
def player_choice(mark):
global board,game_state,announce
#Set game blank game announcement
announce = ''
#Get Player Input
mark = str(mark)
# Validate input
ask_player(mark)
#Check for player win
if win_check(board,mark):
clear_output()
display_board()
announce = mark +" wins! Congratulations"
game_state = False
#Show board
clear_output()
display_board()
#Check for a tie
if full_board_check(board):
announce = "Tie!"
game_state = False
return game_state,announce
"""
Explanation: Now have a function that takes in the player's choice (via the ask_player function) then returns the game_state.
End of explanation
"""
def play_game():
reset_board()
global announce
# Set marks
X='X'
O='O'
while True:
# Show board
clear_output()
display_board()
# Player X turn
game_state,announce = player_choice(X)
print announce
if game_state == False:
break
# Player O turn
game_state,announce = player_choice(O)
print announce
if game_state == False:
break
# Ask player for a rematch
rematch = raw_input('Would you like to play again? y/n')
if rematch == 'y':
play_game()
else:
print "Thanks for playing!"
"""
Explanation: Finally put it all together in a function to play the game.
End of explanation
"""
play_game()
"""
Explanation: Let's play!
End of explanation
"""
|
rashikaranpuria/Machine-Learning-Specialization | Machine Learning Foundations: A Case Study Approach/Assignment_three/Document retrieval.ipynb | mit | import graphlab
"""
Explanation: Document retrieval from wikipedia data
Fire up GraphLab Create
End of explanation
"""
people = graphlab.SFrame('people_wiki.gl/people_wiki.gl')
"""
Explanation: Load some text data - from wikipedia, pages on people
End of explanation
"""
people.head()
len(people)
"""
Explanation: Data contains: link to wikipedia article, name of person, text of article.
End of explanation
"""
obama = people[people['name'] == 'Barack Obama']
obama
obama['text']
"""
Explanation: Explore the dataset and checkout the text it contains
Exploring the entry for president Obama
End of explanation
"""
clooney = people[people['name'] == 'George Clooney']
clooney['text']
"""
Explanation: Exploring the entry for actor George Clooney
End of explanation
"""
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
print obama['word_count']
"""
Explanation: Get the word counts for Obama article
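GraphLab's count_words is roughly a tokenizer plus a counter; a minimal stand-in for intuition (the real function's lowercasing and punctuation handling may differ):

```python
from collections import Counter

def count_words(text):
    # rough stand-in for graphlab.text_analytics.count_words:
    # lowercase and split on whitespace
    return dict(Counter(text.lower().split()))

wc = count_words("Obama was the 44th president of the United States")
assert wc['the'] == 2
assert wc['obama'] == 1
```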
End of explanation
"""
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
"""
Explanation: Sort the word counts for the Obama article
Turning the dictionary of word counts into a table
End of explanation
"""
obama_word_count_table.head()
obama_word_count_table.sort('count',ascending=False)
"""
Explanation: Sorting the word counts to show most common words at the top
End of explanation
"""
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
people.head()
tfidf = graphlab.text_analytics.tf_idf(people['word_count'])
tfidf
people['tfidf'] = tfidf['docs']
"""
Explanation: Most common words include uninformative words like "the", "in", "and",...
Compute TF-IDF for the corpus
To give more weight to informative words, we weigh them by their TF-IDF scores.
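As a sketch of the computation, TF-IDF multiplies each word's in-document count by log(N / df), where df is the number of documents containing the word. This is an illustration of the convention, not GraphLab's exact code:

```python
import math

def tf_idf(docs):
    # docs: list of word -> count dicts, like the 'word_count' column
    n = len(docs)
    df = {}
    for d in docs:
        for w in d:
            df[w] = df.get(w, 0) + 1
    return [{w: c * math.log(n / df[w]) for w, c in d.items()} for d in docs]

docs = [{'the': 3, 'president': 2}, {'the': 4, 'football': 1}]
scores = tf_idf(docs)
assert scores[0]['the'] == 0.0        # 'the' appears in every document
assert scores[0]['president'] > 0.0   # distinctive words get positive weight
```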
End of explanation
"""
obama = people[people['name'] == 'Barack Obama']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
"""
Explanation: Examine the TF-IDF for the Obama article
End of explanation
"""
clinton = people[people['name'] == 'Bill Clinton']
beckham = people[people['name'] == 'David Beckham']
"""
Explanation: Words with highest TF-IDF are much more informative.
Manually compute distances between a few people
Let's manually compare the distances between the articles for a few famous people.
End of explanation
"""
graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])
graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])
"""
Explanation: Is Obama closer to Clinton than to Beckham?
We will use cosine distance, which is given by
(1-cosine_similarity)
and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham.
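A sketch of cosine distance over the sparse word -> tf-idf dictionaries (illustrative only; GraphLab computes this internally):

```python
import math

def cosine_distance(a, b):
    # a, b: sparse vectors as word -> weight dicts
    dot = sum(w * b.get(k, 0.0) for k, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return 1.0 - dot / (na * nb)

assert abs(cosine_distance({'x': 1.0}, {'x': 2.0})) < 1e-12        # parallel -> 0
assert abs(cosine_distance({'x': 1.0}, {'y': 1.0}) - 1.0) < 1e-12  # orthogonal -> 1
```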
End of explanation
"""
knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')
"""
Explanation: Build a nearest neighbor model for document retrieval
We now create a nearest-neighbors model and apply it to document retrieval.
End of explanation
"""
knn_model.query(obama)
"""
Explanation: Applying the nearest-neighbors model for retrieval
Who is closest to Obama?
End of explanation
"""
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
elton = people[people['name'] == 'Elton John']
elton['word_count'] = graphlab.text_analytics.count_words(elton['text'])
print elton['word_count']
elton_word_count_table = elton[['word_count']].stack('word_count', new_column_name = ['word','count'])
elton_word_count_table.sort('count',ascending=False)
elton[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
victoria = people[people['name'] == 'Victoria Beckham']
graphlab.distances.cosine(elton['tfidf'][0],victoria['tfidf'][0])
paul = people[people['name'] == 'Paul McCartney']
graphlab.distances.cosine(elton['tfidf'][0],paul['tfidf'][0])
knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')
knn_model.query(elton, k=None).print_rows(num_rows=30)
kwc_model = graphlab.nearest_neighbors.create(people,features=['word_count'],label='name',distance='cosine')
kwc_model.query(elton)
knn_model.query(victoria, k=None).print_rows(num_rows=30)
kwc_model.query(victoria)
"""
Explanation: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians.
Other examples of document retrieval
End of explanation
"""
|
balarsen/pymc_learning | updating_info/Updating Info.ipynb | bsd-3-clause | # pymc3.distributions.DensityDist?
import matplotlib.pyplot as plt
import matplotlib as mpl
from pymc3 import Model, Normal, Slice
from pymc3 import sample
from pymc3 import traceplot
from pymc3.distributions import Interpolated
from theano import as_op
import theano.tensor as tt
import numpy as np
from scipy import stats
%matplotlib inline
%load_ext version_information
%version_information pymc3
"""
Explanation: This shows how to take some data and update priors from posteriors as we get more data
NOTE this requires Pymc3 3.1
Updating priors
In this notebook, I will show how it is possible to update the priors as new data becomes available. The example is a slightly modified version of the linear regression in the Getting started with PyMC3 notebook.
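The idea can be seen in closed form with a conjugate normal model: updating sequentially, chunk by chunk, with each posterior serving as the next prior, gives the same answer as one batch update on all the data. (This analytic shortcut is only for intuition; the notebook does it numerically with KDE because PyMC3 posteriors are samples, not formulas. The data values below are made up.)

```python
def normal_update(mu0, var0, data, sigma2):
    # conjugate update: prior N(mu0, var0), observations with known variance sigma2
    n = len(data)
    var_post = 1.0 / (1.0 / var0 + n / sigma2)
    mu_post = var_post * (mu0 / var0 + sum(data) / sigma2)
    return mu_post, var_post

chunk1, chunk2 = [4.9, 5.2, 5.0], [5.1, 4.8]
# sequential: the posterior after chunk1 becomes the prior for chunk2
m1, v1 = normal_update(0.0, 1.0, chunk1, 1.0)
m_seq, v_seq = normal_update(m1, v1, chunk2, 1.0)
# batch: all data at once
m_all, v_all = normal_update(0.0, 1.0, chunk1 + chunk2, 1.0)
assert abs(m_seq - m_all) < 1e-12 and abs(v_seq - v_all) < 1e-12
```

The shrinking posterior variance (v_all < v0) is the same "more confident each time" effect shown in the plots below.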
End of explanation
"""
# Initialize random number generator
np.random.seed(123)
# True parameter values
alpha_true = 5
beta0_true = 7
beta1_true = 13
# Size of dataset
size = 100
# Predictor variable
X1 = np.random.randn(size)
X2 = np.random.randn(size) * 0.2
# Simulate outcome variable
Y = alpha_true + beta0_true * X1 + beta1_true * X2 + np.random.randn(size)
"""
Explanation: Generating data
End of explanation
"""
basic_model = Model()
with basic_model:
# Priors for unknown model parameters
alpha = Normal('alpha', mu=0, sd=1)
beta0 = Normal('beta0', mu=12, sd=1)
beta1 = Normal('beta1', mu=18, sd=1)
# Expected value of outcome
mu = alpha + beta0 * X1 + beta1 * X2
# Likelihood (sampling distribution) of observations
Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y)
# draw 1000 posterior samples
trace = sample(1000)
traceplot(trace);
"""
Explanation: Model specification
Our initial beliefs about the parameters are quite informative (sd=1) and a bit off the true values.
End of explanation
"""
def from_posterior(param, samples):
smin, smax = np.min(samples), np.max(samples)
width = smax - smin
x = np.linspace(smin, smax, 100)
y = stats.gaussian_kde(samples)(x)
# what was never sampled should have a small probability but not 0,
# so we'll extend the domain and use linear approximation of density on it
x = np.concatenate([[x[0] - 3 * width], x, [x[-1] + 3 * width]])
y = np.concatenate([[0], y, [0]])
return Interpolated(param, x, y)
"""
Explanation: In order to update our beliefs about the parameters, we use the posterior distributions, which will be used as the prior distributions for the next inference. The data used for each inference iteration has to be independent from the previous iterations, otherwise the same (possibly wrong) belief is injected over and over in the system, amplifying the errors and misleading the inference. By ensuring the data is independent, the system should converge to the true parameter values.
Because we draw samples from the posterior distribution (shown on the right in the figure above), we need to estimate their probability density (shown on the left in the figure above). Kernel density estimation (KDE) is a way to achieve this, and we will use this technique here. In any case, it is an empirical distribution that cannot be expressed analytically. Fortunately PyMC3 provides a way to use custom distributions, via Interpolated class.
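For intuition, here is a minimal 1-D Gaussian KDE with Scott's-rule bandwidth, a stripped-down version of what scipy.stats.gaussian_kde does by default:

```python
import numpy as np

def gaussian_kde_1d(samples, xs, bandwidth=None):
    # place a Gaussian bump on each sample and average them
    samples = np.asarray(samples, dtype=float)
    if bandwidth is None:
        bandwidth = samples.std(ddof=1) * len(samples) ** (-1 / 5)  # Scott's rule
    xs = np.asarray(xs, dtype=float)
    z = (xs[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
s = rng.normal(5.0, 1.0, 2000)
xs = np.linspace(0, 10, 101)
dens = gaussian_kde_1d(s, xs)
assert abs(xs[np.argmax(dens)] - 5.0) < 0.5   # density peaks near the true mean
```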
End of explanation
"""
traces = [trace]
for _ in range(10):
# generate more data
X1 = np.random.randn(size)
X2 = np.random.randn(size) * 0.2
Y = alpha_true + beta0_true * X1 + beta1_true * X2 + np.random.randn(size)
model = Model()
with model:
# Priors are posteriors from previous iteration
alpha = from_posterior('alpha', trace['alpha'])
beta0 = from_posterior('beta0', trace['beta0'])
beta1 = from_posterior('beta1', trace['beta1'])
# Expected value of outcome
mu = alpha + beta0 * X1 + beta1 * X2
# Likelihood (sampling distribution) of observations
Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y)
# draw 1000 posterior samples
trace = sample(1000)
traces.append(trace)
print('Posterior distributions after ' + str(len(traces)) + ' iterations.')
cmap = mpl.cm.autumn
for param in ['alpha', 'beta0', 'beta1']:
plt.figure(figsize=(8, 2))
for update_i, trace in enumerate(traces):
samples = trace[param]
smin, smax = np.min(samples), np.max(samples)
x = np.linspace(smin, smax, 100)
y = stats.gaussian_kde(samples)(x)
plt.plot(x, y, color=cmap(1 - update_i / len(traces)))
plt.axvline({'alpha': alpha_true, 'beta0': beta0_true, 'beta1': beta1_true}[param], c='k')
plt.ylabel('Frequency')
plt.title(param)
plt.show()
"""
Explanation: Now we just need to generate more data and build our Bayesian model so that the prior distributions for the current iteration are the posterior distributions from the previous iteration. It is still possible to continue using NUTS sampling method because Interpolated class implements calculation of gradients that are necessary for Hamiltonian Monte Carlo samplers.
End of explanation
"""
for _ in range(10):
# generate more data
X1 = np.random.randn(size)
X2 = np.random.randn(size) * 0.2
Y = alpha_true + beta0_true * X1 + beta1_true * X2 + np.random.randn(size)
model = Model()
with model:
# Priors are posteriors from previous iteration
alpha = from_posterior('alpha', trace['alpha'])
beta0 = from_posterior('beta0', trace['beta0'])
beta1 = from_posterior('beta1', trace['beta1'])
# Expected value of outcome
mu = alpha + beta0 * X1 + beta1 * X2
# Likelihood (sampling distribution) of observations
Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y)
# draw 1000 posterior samples
trace = sample(1000)
traces.append(trace)
print('Posterior distributions after ' + str(len(traces)) + ' iterations.')
cmap = mpl.cm.autumn
for param in ['alpha', 'beta0', 'beta1']:
plt.figure(figsize=(8, 2))
for update_i, trace in enumerate(traces):
samples = trace[param]
smin, smax = np.min(samples), np.max(samples)
x = np.linspace(smin, smax, 100)
y = stats.gaussian_kde(samples)(x)
plt.plot(x, y, color=cmap(1 - update_i / len(traces)))
plt.axvline({'alpha': alpha_true, 'beta0': beta0_true, 'beta1': beta1_true}[param], c='k')
plt.ylabel('Frequency')
plt.title(param)
plt.show()
for _ in range(10):
# generate more data
X1 = np.random.randn(size)
X2 = np.random.randn(size) * 0.2
Y = alpha_true + beta0_true * X1 + beta1_true * X2 + np.random.randn(size)
model = Model()
with model:
# Priors are posteriors from previous iteration
alpha = from_posterior('alpha', trace['alpha'])
beta0 = from_posterior('beta0', trace['beta0'])
beta1 = from_posterior('beta1', trace['beta1'])
# Expected value of outcome
mu = alpha + beta0 * X1 + beta1 * X2
# Likelihood (sampling distribution) of observations
Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y)
# draw 1000 posterior samples
trace = sample(1000)
traces.append(trace)
print('Posterior distributions after ' + str(len(traces)) + ' iterations.')
cmap = mpl.cm.autumn
for param in ['alpha', 'beta0', 'beta1']:
plt.figure(figsize=(8, 2))
for update_i, trace in enumerate(traces):
samples = trace[param]
smin, smax = np.min(samples), np.max(samples)
x = np.linspace(smin, smax, 100)
y = stats.gaussian_kde(samples)(x)
plt.plot(x, y, color=cmap(1 - update_i / len(traces)))
plt.axvline({'alpha': alpha_true, 'beta0': beta0_true, 'beta1': beta1_true}[param], c='k')
plt.ylabel('Frequency')
plt.title(param)
plt.show()
"""
Explanation: You can re-execute the last two cells to generate more updates.
What is interesting to note is that the posterior distributions for our parameters tend to get centered on their true values (vertical lines), and the distributions get thinner and thinner. This means that we get more confident each time, and the (false) belief we had at the beginning gets flushed away by the new data we incorporate.
End of explanation
"""
|
JKeun/project-02-watcha | 01_crawling/02_api_crawling(raw_df1).ipynb | mit | import requests
import json
import pandas as pd
"""
Explanation: api(json)을 통한 feature 크롤링
y : 'owner_action' key안에 있는 ['rating': 내가준 별점]
X : ['title':영화제목, 'eval_count':평가자수, 'watcha_rating':평균별점, 'filmrate':관람가, 'main_genre':장르, 'nation':국가, 'running_time':상영시간, 'year':제작년도]
$\hat y$ : predicted_rating
추가로 뽑아야 할 feature들
X : [감독, 주연배우, 이동진영화평론가의 평점]
End of explanation
"""
df = pd.DataFrame(columns = ['영화', '내 별점(y)', '평균별점', '평가자수', '등급', '장르', '국가', '상영시간', '년도'])
for page_num in range(1, 23):
response = requests.get("https://watcha.net/v2/users/jXaIHl0ZtdYZ/movies.json?filter%5Bsorting%5D=time&page={page}".format(
page=page_num))
watcha_dict = json.loads(response.text)
watcha_list = watcha_dict.get('cards')
for i in range(24):
rating = watcha_list[i].get('items')[0].get('item').get('owner_action').get('rating') # y값 : 나의 rating
title = watcha_list[i].get('items')[0].get('item').get('title')
avg_rating = watcha_list[i].get('items')[0].get('item').get('watcha_rating')
eval_count = watcha_list[i].get('items')[0].get('item').get('eval_count')
film_rate = watcha_list[i].get('items')[0].get('item').get('filmrate')
genre = watcha_list[i].get('items')[0].get('item').get('main_genre')
nation = watcha_list[i].get('items')[0].get('item').get('nation')
running_time = watcha_list[i].get('items')[0].get('item').get('running_time')
year = watcha_list[i].get('items')[0].get('item').get('year')
df.loc[len(df)] = [title, rating, avg_rating, eval_count, film_rate, genre, nation, running_time, year]
df
"""
Explanation: Crawling the Watcha API URL, pages 1 through 22 (range(1, 23))
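The deeply nested .get(...)[0].get(...) chains in the loop below can be made safer with a small helper (dig is a hypothetical name, not in the original; the sample card is made up):

```python
def dig(obj, *path, default=None):
    # safely walk nested dicts/lists, e.g. dig(card, 'items', 0, 'item', 'title')
    for key in path:
        try:
            obj = obj[key]
        except (KeyError, IndexError, TypeError):
            return default
    return obj

card = {'items': [{'item': {'title': 'Memento', 'owner_action': {'rating': 4.5}}}]}
assert dig(card, 'items', 0, 'item', 'title') == 'Memento'
assert dig(card, 'items', 0, 'item', 'owner_action', 'rating') == 4.5
assert dig(card, 'items', 0, 'item', 'year') is None   # missing key -> default
```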
End of explanation
"""
df1 = pd.DataFrame(columns = ['영화', '내 별점(y)', '평균별점', '평가자수', '등급', '장르', '국가', '상영시간', '년도'])
response = requests.get("https://watcha.net/v2/users/jXaIHl0ZtdYZ/movies.json?filter%5Bsorting%5D=time&page=23")
watcha_dict = json.loads(response.text)
watcha_list = watcha_dict.get('cards')
len(watcha_list)
for i in range(16):
rating = watcha_list[i].get('items')[0].get('item').get('owner_action').get('rating') # y값 : 나의 rating
title = watcha_list[i].get('items')[0].get('item').get('title')
avg_rating = watcha_list[i].get('items')[0].get('item').get('watcha_rating')
eval_count = watcha_list[i].get('items')[0].get('item').get('eval_count')
film_rate = watcha_list[i].get('items')[0].get('item').get('filmrate')
genre = watcha_list[i].get('items')[0].get('item').get('main_genre')
nation = watcha_list[i].get('items')[0].get('item').get('nation')
running_time = watcha_list[i].get('items')[0].get('item').get('running_time')
year = watcha_list[i].get('items')[0].get('item').get('year')
df1.loc[len(df1)] = [title, rating, avg_rating, eval_count, film_rate, genre, nation, running_time, year]
df1
"""
Explanation: Crawling API URL page 23
End of explanation
"""
watcha_df = df.append(df1, ignore_index=True)
watcha_df.tail(10)
path='C:/Users/JKEUN/ipython notebook/project_01_watcha/resource/'
watcha_df.to_csv(path+'1st_df.csv', index=False, encoding='utf8')
"""
Explanation: Final DataFrame (544 samples)
append
End of explanation
"""
|
santanche/java2learn | notebooks/pt/c04components/s03message-bus/3.iot-dashboard-python.ipynb | gpl-2.0 | from resources.iot.device import IoT_sensor_consumer
from IPython.core.display import display
import ipywidgets as widgets
from resources.iot.device import IoT_mqtt_publisher, IoT_sensor
"""
Explanation: Examples of visual components for IPython
End of explanation
"""
widgets.FloatProgress(value=30.0, min=0, max=100.0, bar_style='danger', orientation='vertical')
widgets.FloatProgress(value=60.0, min=0, max=100.0, bar_style='info', orientation='horizontal', description='pressão: ')
"""
Explanation: Progress bar
End of explanation
"""
widgets.Label("isto é um label")
"""
Explanation: Label
End of explanation
"""
widget = widgets.FloatProgress(min=0, max=100.0, bar_style='info', orientation='vertical', description='exemplo') # 'success', 'info', 'warning', 'danger' or ''
widget_label = widgets.Label("isto é um label")
"""
Explanation: Creating two visual components
End of explanation
"""
display(widget, widget_label)
"""
Explanation: Rendering the visual components
End of explanation
"""
consumer = IoT_sensor_consumer("localhost",1883,"sensor/+/+")
"""
Explanation: Creating a component that consumes MQTT messages
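The topic filter sensor/+/+ uses MQTT wildcards: + matches exactly one topic level, # matches all remaining levels. A minimal matcher to illustrate (it skips spec edge cases such as # also matching the parent level):

```python
def topic_matches(pattern, topic):
    # minimal MQTT filter matching: '+' matches one level, '#' matches the rest
    p, t = pattern.split('/'), topic.split('/')
    for i, part in enumerate(p):
        if part == '#':
            return True
        if i >= len(t) or (part != '+' and part != t[i]):
            return False
    return len(p) == len(t)

assert topic_matches('sensor/+/+', 'sensor/1/temperature')
assert not topic_matches('sensor/+/+', 'sensor/1/temperature/avg')
assert topic_matches('sensor/#', 'sensor/1/temperature/avg')
```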
End of explanation
"""
consumer.connect(widget, widget_label)
"""
Explanation: Connecting the components
End of explanation
"""
widget_1 = widgets.FloatProgress(min=0, max=40.0, bar_style='info', orientation='vertical')
widget_1_label = widgets.Label()
consumer_1 = IoT_sensor_consumer("localhost",1883,"sensor/1/+")
"""
Explanation: Component dashboard
Creating consumer_1 and its widgets
End of explanation
"""
widget_2 = widgets.FloatProgress(min=0, max=90.0, bar_style='warning', orientation='vertical')
widget_2_label = widgets.Label()
consumer_2 = IoT_sensor_consumer("localhost",1883,"sensor/2/+")
"""
Explanation: Creating consumer_2 and its widgets
End of explanation
"""
widget_3 = widgets.FloatProgress(min=0, max=40.0, bar_style='info', orientation='vertical')
widget_3_label = widgets.Label()
consumer_3 = IoT_sensor_consumer("localhost",1883,"sensor/3/+")
"""
Explanation: Creating consumer_3 and its widgets
End of explanation
"""
widget_4 = widgets.FloatProgress(min=0, max=90.0, bar_style='warning', orientation='vertical')
widget_4_label = widgets.Label()
consumer_4 = IoT_sensor_consumer("localhost",1883,"sensor/4/+")
"""
Explanation: Creating consumer_4 and its widgets
End of explanation
"""
widget_avg = widgets.FloatProgress(min=0, max=90.0, bar_style='success', orientation='horizontal')
widget_avg_label = widgets.Label()
consumer_avg = IoT_sensor_consumer("localhost",1883,"sensor/*/temperature/avg")
"""
Explanation: Creating consumer_avg and its widgets
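The sensor/*/temperature/avg topic presumably carries an average over the sensors' latest readings (the actual computation lives in resources.iot.device, which this notebook does not show); a sketch of such an aggregation:

```python
class RunningAverage:
    # hypothetical aggregator: keep the latest reading per sensor,
    # expose the mean across sensors after each update
    def __init__(self):
        self.latest = {}

    def update(self, sensor_id, value):
        self.latest[sensor_id] = value
        return sum(self.latest.values()) / len(self.latest)

avg = RunningAverage()
assert avg.update(1, 20.0) == 20.0
assert avg.update(2, 30.0) == 25.0
assert avg.update(1, 40.0) == 35.0   # sensor 1's reading replaced; mean of {40, 30}
```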
End of explanation
"""
separator = widgets.Label(value=" ---------- ")
col_1 = widgets.VBox([widget_1, widget_1_label])
col_2 = widgets.VBox([widget_2, widget_2_label])
col_3 = widgets.VBox([widget_3, widget_3_label])
col_4 = widgets.VBox([widget_4, widget_4_label])
col_5 = widgets.HBox([widget_avg, widget_avg_label])
row_1 = widgets.HBox([separator, col_1, separator, col_2, separator, col_3, separator, col_4])
row_2 = widgets.HBox([col_5])
display(row_1)
display(row_2)
"""
Explanation: Arranging the components visually
End of explanation
"""
consumer_1.connect(widget_1, widget_1_label)
consumer_2.connect(widget_2, widget_2_label)
consumer_3.connect(widget_3, widget_3_label)
consumer_4.connect(widget_4, widget_4_label)
consumer_avg.connect(widget_avg, widget_avg_label)
"""
Explanation: Connecting the visual components to their respective consumers
End of explanation
"""
|