repo_name | path | license | content |
|---|---|---|---|
samstav/scipy_2015_sklearn_tutorial | notebooks/04.3 Analyzing Model Capacity.ipynb | cc0-1.0 |
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.svm import SVR
from sklearn import cross_validation
np.random.seed(0)
n_samples = 200
kernels = ['linear', 'poly', 'rbf']
true_fun = lambda X: X ** 3
X = np.sort(5 * (np.random.rand(n_samples) - .5))
y = true_fun(X) + .01 * np.random.randn(n_samples)
plt.figure(figsize=(14, 5))
for i in range(len(kernels)):
    ax = plt.subplot(1, len(kernels), i + 1)
    plt.setp(ax, xticks=(), yticks=())

    model = SVR(kernel=kernels[i], C=5)
    model.fit(X[:, np.newaxis], y)

    # Evaluate the models using cross-validation
    scores = cross_validation.cross_val_score(
        model, X[:, np.newaxis], y,
        scoring="mean_squared_error", cv=10)

    X_test = np.linspace(3 * -.5, 3 * .5, 100)
    plt.plot(X_test, model.predict(X_test[:, np.newaxis]), label="Model")
    plt.plot(X_test, true_fun(X_test), label="True function")
    plt.scatter(X, y, label="Samples")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.xlim((-3 * .5, 3 * .5))
    plt.ylim((-1, 1))
    plt.legend(loc="best")
    plt.title("Kernel {}\nMSE = {:.2e}(+/- {:.2e})".format(
        kernels[i], -scores.mean(), scores.std()))
plt.show()
"""
Explanation: The issues associated with validation and
cross-validation are some of the most important
aspects of the practice of machine learning. Selecting the optimal model
for your data is vital, and is a piece of the problem that is not often
appreciated by machine learning practitioners.
Of core importance is the following question:
If our estimator is underperforming, how should we move forward?
Should we use a simpler or a more complicated model?
Should we add more features to each observed data point?
Should we add more training samples?
The answer is often counter-intuitive. In particular, sometimes using a
more complicated model will give worse results. Also, sometimes adding
training data will not improve your results. The ability to determine
what steps will improve your model is what separates the successful machine
learning practitioners from the unsuccessful.
Learning Curves and Validation Curves
One way to address this issue is to use what are often called Learning Curves.
Given a particular dataset and a model we'd like to fit (e.g. using feature creation and linear regression), we'd
like to tune our value of the hyperparameter kernel to give us the best fit. We can visualize the different regimes with the following plot, modified from the sklearn examples here
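As an aside, the `sklearn.cross_validation` module used below was renamed to `sklearn.model_selection` (and the `"mean_squared_error"` scorer to `"neg_mean_squared_error"`) in scikit-learn 0.18, so the imports need adjusting on newer installs. The core idea behind `cross_val_score` can also be sketched by hand in plain NumPy; in the sketch below a cubic least-squares polynomial is an arbitrary stand-in for the SVR:

```python
import numpy as np

def kfold_mse(fit, predict, X, y, cv=10):
    # split the indices into cv folds; each fold takes a turn as the test set
    folds = np.array_split(np.arange(len(X)), cv)
    scores = []
    for k in range(cv):
        test = folds[k]
        train = np.hstack([folds[j] for j in range(cv) if j != k])
        params = fit(X[train], y[train])
        resid = predict(params, X[test]) - y[test]
        scores.append(np.mean(resid ** 2))  # MSE on the held-out fold
    return np.array(scores)

# toy stand-in for the SVR: a cubic least-squares polynomial
fit = lambda X, y: np.polyfit(X, y, 3)
predict = lambda p, X: np.polyval(p, X)

rng = np.random.RandomState(0)
X = np.sort(5 * (rng.rand(200) - 0.5))
y = X ** 3 + 0.01 * rng.randn(200)

scores = kfold_mse(fit, predict, X, y, cv=10)
```

Unlike scikit-learn's scorer convention, this sketch returns plain (positive) MSE values, which is why the code below has to flip the sign of `cross_val_score`'s output while this one does not.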
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
from sklearn import cross_validation
np.random.seed(0)
n_samples = 200
true_fun = lambda X: X ** 3
X = np.sort(5 * (np.random.rand(n_samples) - .5))
y = true_fun(X) + .02 * np.random.randn(n_samples)
X = X[:, None]
f, axarr = plt.subplots(1, 3)
axarr[0].scatter(X[::20], y[::20])
axarr[0].set_xlim((-3 * .5, 3 * .5))
axarr[0].set_ylim((-1, 1))
axarr[1].scatter(X[::10], y[::10])
axarr[1].set_xlim((-3 * .5, 3 * .5))
axarr[1].set_ylim((-1, 1))
axarr[2].scatter(X, y)
axarr[2].set_xlim((-3 * .5, 3 * .5))
axarr[2].set_ylim((-1, 1))
plt.show()
"""
Explanation: Learning Curves
Which model is right for a dataset depends critically on how much data we have. More data allows us to be more confident about building a complex model. Let's build some intuition for why that is. Look at the following datasets:
End of explanation
"""
from sklearn.learning_curve import learning_curve
from sklearn.svm import SVR
training_sizes, train_scores, test_scores = learning_curve(
    SVR(kernel='linear'), X, y, cv=10, scoring="mean_squared_error",
    train_sizes=[.6, .7, .8, .9, 1.])
# Use the negative because we want to minimize squared error
plt.plot(training_sizes, -train_scores.mean(axis=1), label="training scores")
plt.plot(training_sizes, -test_scores.mean(axis=1), label="test scores")
plt.ylim((0, 50))
plt.legend(loc='best')
"""
Explanation: They all come from the same underlying process. But if you were asked to make a prediction, you would be more likely to draw a straight line for the left-most one, as there are only very few datapoints, and no real rule is apparent. For the dataset in the middle, some structure is recognizable, though the exact shape of the true function is maybe not obvious. With even more data on the right hand side, you would probably be very comfortable with drawing a curved line with a lot of certainty.
A great way to explore how a model fit evolves with different dataset sizes is to use learning curves.
A learning curve plots the validation error for a given model against different training set sizes.
But first, take a moment to think about what we're going to see:
Questions:
As the number of training samples is increased, what do you expect to see for the training error? For the validation error?
Would you expect the training error to be higher or lower than the validation error? Would you ever expect this to change?
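One minimal way to explore these questions without scikit-learn is to fit a deliberately too-simple model (a straight line) on growing subsets of the cubic data and track both errors by hand. This is only a sketch; the subset sizes and the random seed are arbitrary choices:

```python
import numpy as np

rng = np.random.RandomState(0)
X = np.sort(5 * (rng.rand(200) - 0.5))
y = X ** 3 + 0.02 * rng.randn(200)

# shuffle, then hold out half the data as a fixed validation set
perm = rng.permutation(len(X))
X_train, y_train = X[perm[:100]], y[perm[:100]]
X_val, y_val = X[perm[100:]], y[perm[100:]]

train_err, val_err = [], []
for m in [10, 25, 50, 100]:
    p = np.polyfit(X_train[:m], y_train[:m], 1)  # underfitting straight line
    train_err.append(np.mean((np.polyval(p, X_train[:m]) - y_train[:m]) ** 2))
    val_err.append(np.mean((np.polyval(p, X_val) - y_val) ** 2))
```

With all the data available, both errors end up at roughly the same high value, which is the signature of an underfitting model.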
We can run the following code to plot the learning curve for a kernel = linear model:
End of explanation
"""
from sklearn.learning_curve import learning_curve
from sklearn.svm import SVR
training_sizes, train_scores, test_scores = learning_curve(
    SVR(kernel='rbf'), X, y, cv=10, scoring="mean_squared_error",
    train_sizes=[.6, .7, .8, .9, 1.])
# Use the negative because we want to minimize squared error
plt.plot(training_sizes, -train_scores.mean(axis=1), label="training scores")
plt.plot(training_sizes, -test_scores.mean(axis=1), label="test scores")
plt.ylim((0, 50))
plt.legend(loc='best')
"""
Explanation: You can see that for the model with kernel = linear, the validation score doesn't really decrease as more data is given.
Notice that the validation error generally decreases with a growing training set,
while the training error generally increases with a growing training set. From
this we can infer that as the training size increases, they will converge to a single
value.
From the above discussion, we know that kernel = linear
underfits the data. This is indicated by the fact that both the
training and validation errors are very high. When confronted with this type of learning curve,
we can expect that adding more training data will not help matters: both
lines will converge to a relatively high error.
When the learning curves have converged to a high error, we have an underfitting model.
An underfitting model can be improved by:
Using a more sophisticated model (e.g., in this case, increasing the complexity of the kernel parameter)
Gathering more features for each sample.
Decreasing regularization in a regularized model.
An underfitting model cannot be improved, however, by increasing the number of training
samples (do you see why?)
Now let's look at an overfit model:
End of explanation
"""
|
tdhoang0412/python-class | Monday_2017-04-24/06_Homework_1.ipynb | gpl-3.0 |
# do not forget to put the following '%matplotlib inline'
# within Jupyter notebooks. If you forget it, external
# windows are opened for the plot but we would like to
# have the plots integrated in the notebooks
# The line only needs to be give ONCE per notebook!
%matplotlib inline
# Verification of scipys Bessel function implementation
# - asymptotic behaviour for large x
import scipy.special as ss
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# for nicer plots, make fonts larger and lines thicker
matplotlib.rcParams['font.size'] = 12
matplotlib.rcParams['axes.linewidth'] = 2.0
def jn_asym(n, x):
    """Asymptotic form of jn(x) for x >> n"""
    return np.sqrt(2.0 / np.pi / x) * \
        np.cos(x - (n * np.pi / 2.0 + np.pi / 4.0))
# We choose to plot between 0 and 50. The asymptotic form divides by x,
# but it is only evaluated on the x > n subset below, which excludes 0.
x = np.linspace(0., 50, 500)
# plot J_0, J_1 and J_5.
for n in [0, 1, 5]:
    plt.plot(x, ss.jn(n, x), label='$J_%d$' % (n))
    # and compute its asymptotic form (valid for x >> n, where n is the order).
    # We must first find the valid range of x where at least x > n.
    x_asym = x[x > n]
    plt.plot(x_asym, jn_asym(n, x_asym), linewidth=2.0,
             label='$J_%d$ (asymptotic)' % n)
# Finish the plot and show it
plt.title('Bessel Functions')
plt.xlabel('x')
# note that you can also use LaTeX for plot labels!
plt.ylabel('$J_n(x)$')
# horizontal line at 0 to show x-axis, but after the legend
plt.legend()
plt.axhline(0)
"""
Explanation: Monday, 2017-04-24 Homework notebook
The first homework consists of the tasks:
Get familiar with the Jupyter notebook, the cell handling, the markdown language and the keyboard shortcuts
Get familiar on how to run Python-codes. There are the examples within the code-directory of the
Monday_2017-04-24 lecture.
Create a new notebook with the name Bessel_Functions and reproduce this notebook. Please
find script forms of the codes bessel_asymp.py and bessel_recursion.py in the code directory of the Monday_2017-04-24 lecture.
Note: With the Notebook you of course already have the solution. Please do not cheat on yourself :-)
Bessel Functions
In this notebook we want to verify two simple relations involving the Bessel
functions $J_n(x)$ of the first kind. The relations are the
asymptotic form of $J_n(x)$ for $x\gg n$ and the known recursion relation to obtain
$J_{n+1}(x)$ from $J_{n}(x)$ and $J_{n-1}(x)$:
$J_n(x) \approx \sqrt{\frac{2}{\pi x}}\cos(x-(n\frac{\pi}{2}+\frac{\pi}{4}))$ for $x\gg n$
$J_{n+1}(x) = \frac{2n}{x} J_n(x)-J_{n-1}(x)$
For more information on the functions, visit the corresponding Wikipedia article.
We basically would like to check, how well the scipy Bessel function implementation satisfies the above relations.
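As a cross-check that does not rely on scipy at all, $J_n(x)$ can also be evaluated from its integral representation $J_n(x) = \frac{1}{\pi}\int_0^\pi \cos(n\theta - x\sin\theta)\,d\theta$. The sketch below uses a simple midpoint rule; the number of quadrature points is an arbitrary choice:

```python
import numpy as np

def jn_quad(n, x, num=2000):
    # midpoint rule for J_n(x) = (1/pi) * int_0^pi cos(n*theta - x*sin(theta)) dtheta
    dtheta = np.pi / num
    theta = (np.arange(num) + 0.5) * dtheta
    return np.sum(np.cos(n * theta - x * np.sin(theta))) * dtheta / np.pi

def jn_asym(n, x):
    # asymptotic form, valid for x >> n
    return np.sqrt(2.0 / np.pi / x) * np.cos(x - (n * np.pi / 2.0 + np.pi / 4.0))

# far out on the axis the two should agree closely
err = abs(jn_quad(0, 40.0) - jn_asym(0, 40.0))
```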
End of explanation
"""
# Verification of scipys Bessel function implementation
# - recursion relation
import scipy.special as ss
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# for nicer plots, make fonts larger and lines thicker
matplotlib.rcParams['font.size'] = 12
matplotlib.rcParams['axes.linewidth'] = 2.0
# Now, let's verify numerically the recursion relation
# J(n+1,x) = (2n/x)J(n,x)-J(n-1,x), n = 5
# We choose here to consider x-values between 0.1 and 50.
# We exclude 0 because the recursion relation contains a
# formal division by it.
x = np.linspace(0.1, 50, 500)
# construct both sides of the recursion relation, these should be equal
n = 5
# the scipy implementation of jn(5);
j_n = ss.jn(5, x)
# The recursion relation, written here for J_5 = (2*4/x) J_4 - J_3:
j_n_rec = (2.0 * (n - 1) / x) * ss.jn(n - 1, x) - ss.jn(n - 2, x)
# We now plot the difference between the two formulas
# (j_n and j_n_rec above). Note that to
# properly display the errors, we want to use a logarithmic y scale.
plt.semilogy(x, abs(j_n - j_n_rec), 'r+-', linewidth=2.0)
plt.title('Error in recursion for $J_%s$' % n)
plt.xlabel('x')
plt.ylabel('$|J_5(x) - J_{5,rec}(x)|$')
plt.grid()
# Don't forget a show() call at the end of the script.
# Here we save the plot to a file
plt.savefig("bessel_error.png")
"""
Explanation: We see that the asymptotic form is an excellent approximation for the Bessel function at large $x$-values.
End of explanation
"""
|
mcocdawc/chemcoord | Tutorial/Cartesian.ipynb | lgpl-3.0 |
import chemcoord as cc
from chemcoord.xyz_functions import get_rotation_matrix
import numpy as np
import time
water = cc.Cartesian.read_xyz('water_dimer.xyz', start_index=1)
small = cc.Cartesian.read_xyz('MIL53_small.xyz', start_index=1)
middle = cc.Cartesian.read_xyz('MIL53_middle.xyz', start_index=1)
"""
Explanation: Introduction
Welcome to the tutorial for ChemCoord (http://chemcoord.readthedocs.org/).
The manipulation of the coordinates is a lot easier, if you can view them on the fly.
So please install a molecule viewer, which opens xyz-files.
A non-exhaustive list includes:
Molcas gv,
Avogadro,
VMD, and
PyMOL
Cartesian
End of explanation
"""
water
"""
Explanation: Let's have a look at it:
End of explanation
"""
water.view(viewer='gv.exe')
"""
Explanation: It is also possible to open it with an external viewer. I use Molcas gv.exe, so you have to change it according to your program of choice.
End of explanation
"""
cc.settings['defaults']['viewer'] = 'gv.exe' # replace by your viewer of choice
"""
Explanation: To make this setting permanent, execute:
End of explanation
"""
water['x']
# or explicit label based indexing
water.loc[:, 'x']
# or explicit integer based indexing
water.iloc[:, 1]
"""
Explanation: Slicing
The slicing operations are the same as for pandas.DataFrames. (http://pandas.pydata.org/pandas-docs/stable/indexing.html)
If the 'x' axis is of particular interest you can slice it out with:
End of explanation
"""
water[water['atom'] != 'O'].view()
"""
Explanation: With boolean slicing it is very easy to cut all the oxygens away:
End of explanation
"""
water[(water['atom'] != 'O') & (water['x'] < 1)].view()
"""
Explanation: This can be combined with other selections:
End of explanation
"""
middle.view()
"""
Explanation: Returned type
The indexing behaves like Indexing and Selecting data in
Pandas.
You can slice with Cartesian.loc[key], Cartesian.iloc[keys], and Cartesian[key].
The only question is about the return type.
If the information in the columns is enough to draw a molecule,
an instance of the own class (e.g. Cartesian)
is returned.
If the information in the columns is not enough to draw a molecule, there
are two cases to consider:
A `pandas.Series` instance is returned for one-dimensional slices.
A `pandas.DataFrame` instance is returned in all other cases:
`molecule.loc[:, ['atom', 'x', 'y', 'z']]` returns a `Cartesian`.
`molecule.loc[:, ['atom', 'x']]` returns a `pandas.DataFrame`.
`molecule.loc[:, 'atom']` returns a `pandas.Series`.
Side effects and Method chaining
Two general rules are:
1. All functions are side-effect free unless stated otherwise in the documentation.
2. Where possible the methods return an instance of the own class, to allow method chaining.
Have a look at the unmodified molecule
End of explanation
"""
middle.cut_sphere(radius=5, preserve_bonds=False).view()
"""
Explanation: Chain the methods:
End of explanation
"""
middle.view()
"""
Explanation: The molecule itself remains unchanged.
End of explanation
"""
water.get_bonds()
"""
Explanation: Chemical bonds
One really important method is get_bonds().
It returns a connectivity table, represented by a dictionary.
Each index maps to the set of indices that are bonded to it.
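Although chemcoord does this work for us, such a dictionary is easy to traverse by hand. A breadth-first walk over it is one way to recover a coordination sphere. This is a pure-Python sketch with a made-up three-atom connectivity table, not chemcoord's actual implementation:

```python
def coordination_sphere(bonds, start, n_sphere):
    # breadth-first walk: grow outward one bond at a time, n_sphere times
    current, seen = {start}, {start}
    for _ in range(n_sphere):
        frontier = set()
        for atom in current:
            frontier |= bonds[atom] - seen
        seen |= frontier
        current = frontier
    return seen

# toy connectivity table for a single water molecule, indexed from 1
bonds = {1: {2, 3}, 2: {1}, 3: {1}}
```

Starting from a hydrogen (index 2), one bond reaches the oxygen and two bonds reach the whole molecule.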
End of explanation
"""
for i in range(3):
    middle.get_coordination_sphere(13, n_sphere=i, only_surface=False).view()
    time.sleep(1)
"""
Explanation: Now the focus switches to another molecule (MIL53_middle).
Let's explore the coordination sphere of the Cr atom with the index 13.
End of explanation
"""
(water + 3).view()
(get_rotation_matrix([1, 0, 0], np.radians(90)) @ water).view()
# If you use Python 2.x, the @ operator is not supported; you have to use xyz_functions.dot instead.
"""
Explanation: Binary operators
Mathematical Operations:
Binary operators are supported in the logic of the scipy stack, but you need
python3.x for using the matrix multiplication operator @.
The general rule is that mathematical operations using the binary operators
(+ - * / @) and the unary operators (+ - abs)
are only applied to the ['x', 'y', 'z'] columns.
Addition/Subtraction/Multiplication/Division:
If you add a scalar to a Cartesian it is added elementwise onto the
['x', 'y', 'z'] columns.
If you add a 3-dimensional vector, list, tuple... the first element of this
vector is added elementwise to the 'x' column of the
Cartesian instance and so on.
The last possibility is to add a matrix with
shape=(len(Cartesian), 3) which is again added elementwise.
The same rules are true for subtraction, division and multiplication.
Matrix multiplication:
Only left-sided multiplication with a matrix of shape=(n, 3),
where n is a natural number, is supported.
The usual use case is, for example,
np.diag([1, 1, -1]) @ cartesian_instance
to mirror on the x-y plane.
Note that if A is the matrix which is multiplied from the left, and X is the shape=(n, 3)-matrix
consisting of the ['x', 'y', 'z'] columns. The actual calculation is:
(A @ X.T).T
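A quick NumPy check of this identity, independent of chemcoord (the coordinates and matrix below are chosen arbitrarily):

```python
import numpy as np

# X plays the role of the ['x', 'y', 'z'] columns: one row per atom
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
A = np.diag([1.0, 1.0, -1.0])  # mirror on the x-y plane

left = (A @ X.T).T                          # the definition above
rowwise = np.array([A @ row for row in X])  # apply A to each atom's coordinates
```

Both forms flip the sign of every z coordinate while leaving x and y untouched.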
End of explanation
"""
water == water + 1e-15
cc.xyz_functions.isclose(water, water + 1e-15)
cc.xyz_functions.allclose(water, water + 1e-15)
"""
Explanation: Comparison:
The comparison operators == and != are supported and require molecules indexed in the same way:
In some cases it is better to test for numerical equality $ |a - b| < \epsilon$. This is done using
allclose or isclose (elementwise)
End of explanation
"""
import sympy
sympy.init_printing()
x = sympy.Symbol('x')
symb_water = water.copy()
symb_water['x'] = [x + i for i in range(len(symb_water))]
symb_water
symb_water.subs(x, 2)
symb_water.subs(x, 2).view()
"""
Explanation: Symbolic evaluation
It is possible to use symbolic expressions from sympy.
End of explanation
"""
moved = get_rotation_matrix([1, 2, 3], 1.1) @ middle + 15
moved.view()
m1, m2 = middle.align(moved)
cc.xyz_functions.view([m1, m2])
# If your viewer of choice does not support molden files, you have to call separately:
# m1.view()
# m2.view()
"""
Explanation: Alignment
End of explanation
"""
np.random.seed(77)
dist_molecule = small.copy()
dist_molecule += np.random.randn(len(dist_molecule), 3) / 25
dist_molecule.get_pointgroup(tolerance=0.1)
eq = dist_molecule.symmetrize(max_n=25, tolerance=0.3, epsilon=1e-5)
eq['sym_mol'].get_pointgroup(tolerance=0.1)
a, b = small.align(dist_molecule)
a, c = small.align(eq['sym_mol'])
d1 = (a - b).get_distance_to()
d2 = (a - c).get_distance_to()
cc.xyz_functions.view([a, b, c])
# If your viewer of choice does not support molden files, you have to call separately:
# a.view()
# b.view()
# c.view()
"""
Explanation: Symmetry
It is possible to detect the point group and symmetrize a molecule.
Let's distort a $C_{2,v}$ symmetric molecule and symmetrize it back:
End of explanation
"""
(d1['distance'].sum() - d2['distance'].sum()) / d1['distance'].sum()
"""
Explanation: As we can see, the symmetrised molecule is a lot more similar to the original molecule.
The average deviation from the original positions decreased by 35 %.
End of explanation
"""
|
weleen/mxnet | example/notebooks/moved-from-mxnet/class_active_maps.ipynb | apache-2.0 |
# -*- coding: UTF-8 -*-
import matplotlib.pyplot as plt
%matplotlib inline
from IPython import display
import os
ROOT_DIR = '.'
import sys
sys.path.insert(0, os.path.join(ROOT_DIR, 'lib'))
import cv2
import numpy as np
import mxnet as mx
import matplotlib.pyplot as plt
"""
Explanation: This demo shows the method proposed in "Zhou, Bolei, et al. "Learning Deep Features for Discriminative Localization." arXiv preprint arXiv:1512.04150 (2015)".
The proposed method can automatically localize the discriminative regions in an image using global average pooling
(GAP) in CNNs.
You can download the pretrained Inception-V3 network from here. Other networks with similar structure(use global average pooling after the last conv feature map) should also work.
End of explanation
"""
im_file = os.path.join(ROOT_DIR, 'sample_pics/barbell.jpg')
synset_file = os.path.join(ROOT_DIR, 'models/inception-v3/synset.txt')
net_json = os.path.join(ROOT_DIR, 'models/inception-v3/Inception-7-symbol.json')
conv_layer = 'ch_concat_mixed_10_chconcat_output'
prob_layer = 'softmax_output'
arg_fc = 'fc1'
params = os.path.join(ROOT_DIR, 'models/inception-v3/Inception-7-0001.params')
mean = (128, 128, 128)
raw_scale = 1.0
input_scale = 1.0/128
width = 299
height = 299
resize_size = 340
top_n = 5
ctx = mx.cpu(1)
"""
Explanation: Set the image you want to test and the classification network you want to use. Notice "conv_layer" should be the last conv layer before the average pooling layer.
End of explanation
"""
synset = [l.strip() for l in open(synset_file).readlines()]
"""
Explanation: Load the label name of each class.
End of explanation
"""
symbol = mx.sym.load(net_json)
internals = symbol.get_internals()
symbol = mx.sym.Group([internals[prob_layer], internals[conv_layer]])
save_dict = mx.nd.load(params)
arg_params = {}
aux_params = {}
for k, v in save_dict.items():
    l2_tp, name = k.split(':', 1)
    if l2_tp == 'arg':
        arg_params[name] = v
    if l2_tp == 'aux':
        aux_params[name] = v
mod = mx.model.FeedForward(symbol,
arg_params=arg_params,
aux_params=aux_params,
ctx=ctx,
allow_extra_params=False,
numpy_batch_size=1)
"""
Explanation: Build network symbol and load network parameters.
End of explanation
"""
weight_fc = arg_params[arg_fc+'_weight'].asnumpy()
# bias_fc = arg_params[arg_fc+'_bias'].asnumpy()
im = cv2.imread(im_file)
rgb = cv2.cvtColor(cv2.resize(im, (width, height)), cv2.COLOR_BGR2RGB)
"""
Explanation: Read the weight of the fc layer in softmax classification layer. Bias can be neglected since it does not really affect the result.
Load the image you want to test and convert it from BGR to RGB (OpenCV uses BGR by default).
End of explanation
"""
def im2blob(im, width, height, mean=None, input_scale=1.0, raw_scale=1.0, swap_channel=True):
    # note: cv2.resize expects dsize as (width, height)
    blob = cv2.resize(im, (width, height)).astype(np.float32)
    blob = blob.reshape((1, height, width, 3))
    # from nhwc to nchw
    blob = np.swapaxes(blob, 2, 3)
    blob = np.swapaxes(blob, 1, 2)
    if swap_channel:
        blob[:, [0, 2], :, :] = blob[:, [2, 0], :, :]
    if raw_scale != 1.0:
        blob *= raw_scale
    if isinstance(mean, np.ndarray):
        blob -= mean
    elif isinstance(mean, (tuple, list)):
        blob[:, 0, :, :] -= mean[0]
        blob[:, 1, :, :] -= mean[1]
        blob[:, 2, :, :] -= mean[2]
    elif mean is None:
        pass
    else:
        raise TypeError, 'mean should be either a tuple or a np.ndarray'
    if input_scale != 1.0:
        blob *= input_scale
    return blob
blob = im2blob(im, width, height, mean=mean, swap_channel=True, raw_scale=raw_scale, input_scale=input_scale)
outputs = mod.predict(blob)
score = outputs[0][0]
conv_fm = outputs[1][0]
score_sort = -np.sort(-score)[:top_n]
inds_sort = np.argsort(-score)[:top_n]
"""
Explanation: Feed the image data to our network and get the outputs.
We select the top 5 classes for visualization by default.
End of explanation
"""
def get_cam(conv_feat_map, weight_fc):
    assert len(weight_fc.shape) == 2
    if len(conv_feat_map.shape) == 3:
        C, H, W = conv_feat_map.shape
        assert weight_fc.shape[1] == C
        detection_map = weight_fc.dot(conv_feat_map.reshape(C, H * W))
        detection_map = detection_map.reshape(-1, H, W)
    elif len(conv_feat_map.shape) == 4:
        N, C, H, W = conv_feat_map.shape
        assert weight_fc.shape[1] == C
        M = weight_fc.shape[0]
        detection_map = np.zeros((N, M, H, W))
        for i in xrange(N):
            tmp_detection_map = weight_fc.dot(conv_feat_map[i].reshape(C, H * W))
            detection_map[i, :, :, :] = tmp_detection_map.reshape(-1, H, W)
    return detection_map
plt.figure(figsize=(18, 6))
plt.subplot(1, 1+top_n, 1)
plt.imshow(rgb)
cam = get_cam(conv_fm, weight_fc[inds_sort, :])
for k in xrange(top_n):
    detection_map = np.squeeze(cam.astype(np.float32)[k, :, :])
    heat_map = cv2.resize(detection_map, (width, height))
    max_response = detection_map.mean()
    heat_map /= heat_map.max()
    im_show = rgb.astype(np.float32) / 255 * 0.3 + \
        plt.cm.jet(heat_map / heat_map.max())[:, :, :3] * 0.7
    plt.subplot(1, 1 + top_n, k + 2)
    plt.imshow(im_show)
    print 'Top %d: %s(%.6f), max_response=%.4f' % (k + 1, synset[inds_sort[k]], score_sort[k], max_response)
plt.show()
"""
Explanation: Localize the discriminative regions by analysing the class's response in the network's last conv feature map.
End of explanation
"""
|
BYUFLOWLab/BYUFLOWLab.github.io | onboarding/PythonPrimer.ipynb | mit |
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Why Python
For this comparison I'm going to assume most of you are primarily Matlab users. Matlab is great, especially in a university environment. It's an easy to use, interpreted, high-level language with automatic memory management and lots of supporting libraries. Python shares those same advantages. What's the difference then? Python has some benefits that may become more important to you as your problems get larger, or as you move into industry. Some of these benefits include:
It's free. Matlab is cheap as a student, but it's definitely not cheap once you are out of the university setting.
It's open source. Often as you advance in your research you will need to dive into the details of an algorithm and may need to make some modifications. That's not possible with most of Matlab's libraries.
Performance. Matlab is not fast. To be clear, Python is also an interpreted language so it isn't fast either. But with Python it is easy to wrap C and Fortran code. In fact that is one of its primary uses in scientific computing. It acts as a "glue" code between different codes. We've used it to connect a CFD code in C++, blade design tools in Fortran 95, a structural FEM code in C++, a cost tool in C, other cost tools in pure Python, and an optimizer in Fortran 77, etc. The workhorse code remains in a compiled language, but you can interact with and connect the tools in a simple, concise, scriptable language. You get both speed and ease of use! Matlab does have mex files, but that approach is much more complicated. A common workflow with Python is to write everything in pure Python, then once tested, profile and move bottlenecks in the code to C or Fortran if necessary.
Parallelization. This is related to 2. Matlab does support parallel computing, but that requires an additional expensive license to the parallel computing toolbox, and is not as capable. Similar comment for cloud computing.
Unlike Matlab, Python is a full-featured programming language. In addition to the procedural style it also supports object-oriented and functional styles (Matlab has some OO support, but it is very weak). Python has dictionaries, package management, modern string support, cloud computing, error handling, connections to web servers, etc.
Are there any drawbacks? The only one I've come across is Simulink. I don't know of anything of comparable capability in the Python world. But, since this is a fluid dynamics conference, probably none of us is worried about that.
Installation
If you are on Linux or OS X you should already have Python. Using the system version is fine for testing the waters, but if you are going to upgrade versions or start using different packages you should install your own version of Python rather than rely on the system version.
There are lots of ways to install Python. I recommend using conda by downloading Anaconda or Miniconda. See here for helping deciding which one to select.
Editing
I just use a text editor, either ST3 or Atom. I like using a text editor for all my languages because I heavily rely on keyboard shortcuts.
But for getting started, I would probably recommend using an IDE. There are many. PyCharm seems to be well-liked by my students.
IPython notebooks (now Jupyter notebooks), of which this notebook is an example, is another great option. They are useful for combining code along with text (with LaTeX support). Many people love using it. I like it for demos like this, but don't like using it for development (again, because I like my keyboard shortcuts). If you want to go the notebook route you should definitely download it locally, rather than using this remotely hosted server that we are using for the demo.
Useful Packages
Some packages you may want to install include (you will definitely need the first three):
NumPy: array methods
SciPy: integration, root finding, optimization, linear algebra, etc.
Matplotlib: plotting package (also see Bokeh, Plotly, etc.)
IPython: interactive shells and notebook
pandas: data structure and data analysis tools (like R)
scikit-learn: machine learning
many others...
Tutorials
Many exist, and I'm not familiar enough with them all to recommend one over another. I like the one in the official Python docs. It's probably more detailed than you want for a first exposure, but it is a good resource.
Matlab Users
Here are two useful resources called NumPy for Matlab Users: one, two.
Examples
Today we will go through two simple examples to introduce you to the syntax and to show how to wrap an external Fortran/C code. If you are viewing this in nbviewer I suggest you download it first (button in upper right corner), then open it up at https://try.jupyter.org so you can edit and follow along. Later you can install IPython so you can make edits locally rather than on a remote server. First, evaluate the cell below. Press shift-enter to evaluate a cell. This cell imports some libraries we will need. The first line is only for IPython (you wouldn't use it in a normal Python script). It just tells IPython that we want to see plots inline. The next line imports numpy, which contains a lot of important array operations. The last imports matplotlib, which is a plotting package.
End of explanation
"""
from math import pi
# all in standard English units
# atmosphere
rho = 0.0024 # air density
# geometry
b = 8.0 # wing span
chord = 1.0
# mass properties
W = 2.4 # total weight of aircraft
# other parameters
e = 0.9 # Oswald efficiency factor
CDp = 0.02 # could compute but just input for simplicity
# an array of wind speeds
V = np.linspace(10, 30, 100)
# Induced drag
q = 0.5*rho*V**2 # dyamic pressure
L = W # equilibrium flight
Di = L**2/(q*pi*b**2*e)
# parasite drag
S = b*chord
Dp = CDp*q*S
# these next 3 lines purely for style in the plot (loading a predefined styleshet)
# I have my own custom styles I use, but for this example let's use one of matplotlib's
plt.style.use('ggplot')
plt.rcParams.update({'font.size': 16})
colors = plt.rcParams['axes.color_cycle'] # grab the current color scheme
# plot it
plt.figure()
plt.plot(V, Di)
plt.plot(V, Dp)
plt.plot(V, Di+Dp)
plt.xlabel('V (ft/s)')
plt.ylabel('Drag (lbs)')
# label the plots
plt.text(25, 0.06, 'induced drag', color=colors[0])
plt.text(12, 0.06, 'parasite drag', color=colors[1])
plt.text(20, 0.17, 'total drag', color=colors[2])
"""
Explanation: Simple UAV drag curves
We are going to compute induced and parasite drag (in a very basic way) as an example to introduce scripting.
\begin{align}
q&= \frac{1}{2} \rho V_\infty^2 \
D_i &= \frac{L^2}{q \pi b^2 e} \
D_p &= {C_D}_p q S
\end{align}
The first few lines make some imports. The first line is only for Jupyter notebooks. It just tells the notebook to show plots inline. You wouldn't need that in a normal python script.
End of explanation
"""
def func(x, y):
    add = x + y
    mult = x * y
    return add, mult

a, m = func(1.0, 3.0)
print('a =', a, 'm =', m)
a, m = func(2.0, 7.0)
print('a =', a, 'm =', m)
def induced_drag():
    pass

def parasite_drag():
    pass
# atmosphere
rho = 0.0024 # air density
# geometry
b = 8.0 # wing span
chord = 1.0
# mass properties
W = 2.4 # total weight of aircraft
# other parameters
e = 0.9 # Oswald efficiency factor
CDp = 0.02 # could compute but just input for simplicity
# wind speeds
V = np.linspace(10, 30, 100)
"""
Explanation: Try it yourself. Let's do the same calculation, but with reusable functions. In Python functions are easy to define. A simple example is below (note that unlike Matlab, you can have as many functions as you want in a file.)
End of explanation
"""
class UAV(object):

    def __init__(self, b, chord, W, rho):
        self.b = b
        self.S = b * chord
        self.L = W
        self.rho = rho

    def induced_drag(self, V, e):
        q = 0.5 * self.rho * V**2
        Di = self.L**2 / (q * pi * self.b**2 * e)
        return Di

    def parasite_drag(self, V, CDp):
        q = 0.5 * self.rho * V**2
        Dp = CDp * q * self.S
        return Dp
# atmosphere
rho = 0.0024 # air density
# geometry
b = 8.0 # wing span
chord = 1.0
# mass properties
W = 2.4 # total weight of aircraft
# setup UAV object
uav = UAV(b, chord, W, rho)
# setup sweep
V = np.linspace(10, 30, 100)
# idrag
e = 0.9 # Oswald efficiency factor
Di = uav.induced_drag(V, e)
# pdrag
CDp = 0.02
Dp = uav.parasite_drag(V, CDp)
# style
plt.style.use('fivethirtyeight')
# plot it
plt.figure()
plt.plot(V, Di)
plt.plot(V, Dp)
plt.plot(V, Di+Dp)
plt.xlabel('V (ft/s)')
plt.ylabel('Drag (lbs)')
# label the plots
colors = plt.rcParams['axes.color_cycle']
plt.text(25, 0.06, 'induced drag', color=colors[0])
plt.text(12, 0.06, 'parasite drag', color=colors[1])
plt.text(20, 0.17, 'total drag', color=colors[2])
"""
Explanation: Finally, let's do it once more, but in an object-oriented style.
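As a quick extension of the drag curves, we can locate the minimum-drag airspeed numerically and check it against the closed form you get by setting induced drag equal to parasite drag. This is a sketch using the same numbers as above, not part of the original notebook:

```python
import numpy as np

# same inputs as the UAV example above
rho, b, chord, W = 0.0024, 8.0, 1.0, 2.4
e, CDp = 0.9, 0.02
S = b * chord

# sweep speeds and find where total drag is smallest
V = np.linspace(10, 30, 1000)
q = 0.5 * rho * V**2
Di = W**2 / (q * np.pi * b**2 * e)   # induced drag, falls with V
Dp = CDp * q * S                     # parasite drag, rises with V
V_min_drag = V[np.argmin(Di + Dp)]

# setting Di = Dp and solving for V gives the analytic optimum
V_analytic = np.sqrt(2 * W / (rho * np.sqrt(np.pi * b**2 * e * CDp * S)))
print(V_min_drag, V_analytic)
```

Both values agree to within the sweep's resolution, confirming the familiar result that minimum total drag occurs where the induced and parasite contributions are equal.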
End of explanation
"""
from math import fabs
def laplace_grid_python(n, top, bottom, left, right, tol, iter_max):
# initialize
phi = np.zeros((n+1, n+1))
iters = 0 # number of iterations
err_max = 1e6 # maximum error in grid (start at some arbitrary number just to enter loop)
# set boundary conditions
# run while loop until tolerance reached or max iterations
while ():
# reset the maximum error to something small (I suggest something like -1)
err_max = -1.0
# loop over all *interior* cells
for i in range():
for j in range():
# save previous point for computing error later
phi_prev = phi[i, j]
# update point
phi[i, j] =
# update maximum error
err_max =
# update iteration count
iters += 1
return phi, err_max, iters
# run a sample case (50 x 50 grid with bottom and left at 1.0, top and right at 0.0)
n = 50
top = 0.0
bottom = 1.0
left = 1.0
right = 0.0
tol = 1e-5
iter_max = 10000
phi, err_max, iters = laplace_grid_python(n, top, bottom, left, right, tol, iter_max)
# plot it
x = np.linspace(0, 1, n+1)
y = np.linspace(0, 1, n+1)
[X, Y] = np.meshgrid(x, y)
plt.figure()
plt.contourf(X, Y, phi, 100, cmap=plt.cm.get_cmap("YlGnBu"))
plt.colorbar()
plt.show()
"""
Explanation: Wrapping Fortran/C
This next example is going to solve Laplace's equation on a grid. First, we will do it in pure Python, then we will rewrite a portion of the code in Fortran and call it in Python for improved speed. Recall Laplace's equation:
$$ \nabla^2 \phi = 0 $$
where $\phi$ is some scalar field. For a regular rectangular grid, with equal spacing in x and y, you might recall that a simple iterative method for solving this equation consists of the following update rule:
$$ \phi_{i, j} = \frac{1}{4} (\phi_{i+1, j} + \phi_{i-1, j} + \phi_{i, j+1} + \phi_{i, j-1})$$
In other words, each cell updates its value using the average value of all of its neighbors (note that there are much more efficient ways to solve Laplace's equation on a grid; for our purposes we just want to keep things simple). This process must be repeated for every cell in the domain, and iterated until convergence.
We are going to run a simple case where boundary values are provided at the top, bottom, left, and right edges. You should iterate until the maximum change in $\phi$ is below some tolerance (tol) or until a maximum number of iterations is reached (iter_max). n is the number of cells (same discretization in x and y).
I've started a script for you below. See if you can fill in the details. I've not provided all the syntax you will need to know so you may have to look some things up. A full implementation is down below, but don't peek unless you are really stuck!
End of explanation
"""
%%timeit
from math import fabs
def laplace_grid_python(n, top, bottom, left, right, tol, iter_max):
# initialize
phi = np.zeros((n+1, n+1))
iters = 0 # number of iterations
err_max = 1e6 # maximum error in grid (start at some arbitrary number just to enter loop)
# set boundary conditions
phi[0, :] = bottom
phi[-1, :] = top
phi[:, 0] = left
phi[:, -1] = right
# run while loop until tolerance reached or max iterations
while (err_max > tol and iters < iter_max):
# reset the maximum error to something small (I suggest something like -1)
err_max = -1.0
# loop over all interior cells
for i in range(1, n):
for j in range(1, n):
# save previous point
phi_prev = phi[i, j]
# update point
phi[i, j] = (phi[i-1,j] + phi[i+1,j] + phi[i,j-1] + phi[i,j+1])/4.0
# update maximum error
err_max = max(err_max, fabs(phi[i, j] - phi_prev))
# update iteration count
iters += 1
return phi, err_max, iters
# run a sample case (50 x 50 grid with bottom and left at 1.0, top and right at 0.0)
n = 50
top = 0.0
bottom = 1.0
left = 1.0
right = 0.0
tol = 1e-5
iter_max = 10000
phi, err_max, iters = laplace_grid_python(n, top, bottom, left, right, tol, iter_max)
# plot it
x = np.linspace(0, 1, n+1)
y = np.linspace(0, 1, n+1)
[X, Y] = np.meshgrid(x, y)
plt.figure()
plt.contourf(X, Y, phi, 100, cmap=plt.cm.get_cmap("YlGnBu"))
plt.colorbar()
plt.show()
"""
Explanation: In our implementation below we add the IPython cell magic %%timeit at the top. This runs the whole block a number of times and reports the best time back.
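Outside a notebook, the standard library's timeit module gives the same kind of measurement as the %%timeit magic. A minimal sketch, timing a throwaway statement rather than the Laplace solver:

```python
import timeit

# time a small statement 1000 times per run, keep the best of 3 runs,
# mirroring what %%timeit reports
runs = timeit.repeat("sum(range(1000))", number=1000, repeat=3)
best = min(runs)
print("best of 3: {:.4f} s for 1000 loops".format(best))
```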
Adding some blank space below just to visually separate the answer.
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
End of explanation
"""
%%timeit
from _laplace import laplacegridfortran
n = 50
top = 0.0
bottom = 1.0
left = 1.0
right = 0.0
tol = 1e-5
iter_max = 10000
phi, err_max, iters = laplacegridfortran(n, top, bottom, left, right, tol, iter_max)
# plot it
x = np.linspace(0, 1, n+1)
y = np.linspace(0, 1, n+1)
[X, Y] = np.meshgrid(x, y)
plt.figure()
plt.contourf(X, Y, phi, 100, cmap=plt.cm.get_cmap("YlGnBu"))
plt.colorbar()
plt.show()
"""
Explanation: This takes a while. Let's move the double for loop computation to Fortran. I've supplied a file called laplace.f90 where I've done this for you. We just need to build this as a shared library so we can call it from Python. Open a terminal (you can do this in try.jupyter.org as well). We will compile the Fortran code to a shared library with f2py (you can also use a standard gfortran compilation, but you get more flexibility and automatic setup with f2py). In all of the below I am using an O2 optimization flag. Note the underscore in the shared library name; this is just convention. Also note that if you make a mistake in importing, IPython caches your modules, so you'd need to restart the kernel.
Using f2py
f2py -c --opt=-O2 -m _laplace laplace.f90
Using setup script. Open up a file and call it setup.py. At a minimum this is all it needs:
from numpy.distutils.core import setup, Extension
setup(
ext_modules=[Extension('_laplace', ['laplace.f90'], extra_compile_args=['-O2'])]
)
Usually a lot more information would be added (name, license, other python packages, etc.) You can read more about setuptools later.
To build it one normally just uses build or install commands, but we will built it inplace for testing.
python setup.py build_ext --inplace
Now we can call it from Python just as a regular method. An example is shown below doing the exact same thing as before, but calling the Fortran code in laplacegridfortran. This runs over 10x faster (and could be even faster if we used ifort instead of gfortran). How much faster the code is of course depends on the problem; as you change n the difference will become more or less significant.
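Before (or instead of) dropping to Fortran, it is worth knowing that a vectorized NumPy sweep removes the Python-level double loop entirely. This is a sketch, not from the original notebook; note it is a Jacobi update from the previous grid rather than the in-place Gauss-Seidel-style update above, so iteration counts will differ slightly:

```python
import numpy as np

def laplace_grid_numpy(n, top, bottom, left, right, tol, iter_max):
    # same setup as the pure-Python version
    phi = np.zeros((n + 1, n + 1))
    phi[0, :] = bottom
    phi[-1, :] = top
    phi[:, 0] = left
    phi[:, -1] = right
    iters = 0
    err_max = 1e6
    while err_max > tol and iters < iter_max:
        # update all interior cells at once with array slicing
        new_interior = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                               + phi[1:-1, :-2] + phi[1:-1, 2:])
        err_max = np.max(np.abs(new_interior - phi[1:-1, 1:-1]))
        phi[1:-1, 1:-1] = new_interior
        iters += 1
    return phi, err_max, iters

phi, err_max, iters = laplace_grid_numpy(50, 0.0, 1.0, 1.0, 0.0, 1e-5, 10000)
```

This usually recovers much of the speedup without leaving Python.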
End of explanation
"""
import math
import datetime
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# load the fan page posts
page_id = "appledaily.tw"
path = 'post/'+page_id+'_post.csv'
df = pd.read_csv(path, encoding = 'utf8')
"""
Explanation: How do we analyze Facebook fan page data and export an Excel report?
The Apple Daily (蘋果日報) fan page data crawled earlier will serve as the running example.
Packages used
pandas : for data analysis we usually convert the data into a pandas DataFrame, which makes it fast and efficient to process (statistics, filtering, grouping...)
matplotlib : the best-known plotting package in Python; it makes all kinds of statistical charts easy to draw
seaborn : a plotting package built on top of matplotlib, offering a higher-level API (matplotlib needs more setup but gives more freedom; seaborn gives less freedom but is easier to use)
End of explanation
"""
df.head()
"""
Explanation: Let's look at the first 5 rows of the data
End of explanation
"""
df['status_link'][0]
"""
Explanation: How do we find this post on Facebook?
You can use status_link to get back to the original post.
End of explanation
"""
len(df)
"""
Explanation: Before cleaning there are 5234 rows in total
End of explanation
"""
df = df[(df['num_reactions']!=0) & (df['status_message'].notnull())].reindex()
"""
Explanation: Filter these out and reindex; the reindex is needed because filtering on its own leaves the original index values unchanged.
End of explanation
"""
len(df)
"""
Explanation: After cleaning, 5061 rows remain; 173 rows were filtered out in total
End of explanation
"""
df['datetime'] = df['status_published'].apply(lambda x: datetime.datetime.strptime(x,'%Y-%m-%d %H:%M:%S'))
df['weekday'] = df['datetime'].apply(lambda x: x.weekday_name)
df['hour'] = df['datetime'].apply(lambda x: x.hour)
"""
Explanation: Handle the dates and add weekday and hour columns
First the dates: since they are read in as strings, convert them to datetime objects,
then we can extract the weekday (day of the week the post went out) and the hour (time of day it went out).
End of explanation
"""
df.plot(x='datetime', y=['num_likes', 'num_loves', 'num_wows', 'num_hahas', 'num_sads', 'num_angrys'] ,
figsize=(12,8))
"""
Explanation: Trend of reactions over time
Facebook changed in 2016: besides likes there are now loves, wows, hahas, sads, and angrys, and a post's reactions is the sum of them all,
i.e. the total number of responses (reactions) the post received.
The x-axis is time and the y-axis is the number of likes, loves, wows, hahas, sads, and angrys, so we can see how each trends over time.
2017 clearly shows careful page management, while 2014 seems to have been a quiet period.
End of explanation
"""
df.plot(x='datetime', y=['num_reactions', 'num_comments', 'num_shares'],
figsize=(12,8))
"""
Explanation: Trend of likes, comments, and shares over time
The x-axis is time and the y-axis is the number of reactions, comments, and shares, showing how each changes over time.
Several times in 2016 the green line rises above the others, i.e. comments outnumber reactions and shares; these were presumably comment-to-win giveaway posts.
End of explanation
"""
import datetime
delta_datetime = df['datetime'].shift(1) - df['datetime']
delta_datetime_df = pd.Series(delta_datetime).describe().apply(str)
delta_datetime_df = delta_datetime_df.to_frame(name='frequent of posts')
delta_datetime_df
"""
Explanation: Posting frequency statistics
By computing the time gap between consecutive posts, we can see how often the page posts.
Although the mean gap is about 12 hours, look at the quartiles: the first quartile is half an hour and the third quartile is five hours,
which means posting is very frequent, several posts a day. The large mean and standard deviation are presumably due to the low posting frequency in the page's early days,
i.e. they are skewed by the early outliers.
End of explanation
"""
def weekday(d):
list_key = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
list_value = []
for one in list_key:
if one in d.keys():
list_value.append(d[one])
else:
list_value.append(0)
df = pd.DataFrame(index = list_key, data = {'weekday': list_value}).reset_index()
return df
df_weekday = weekday(dict(df['weekday'].value_counts()))
df_weekday
"""
Explanation: Processing weekday and hour
We need to build a new DataFrame, for two reasons:
First, if a given slot has no posts, it has to be filled in as 0.
Second, the keys must be in order so the plot runs Monday through Sunday; otherwise the order gets scrambled.
End of explanation
"""
sns.barplot(x='index', y='weekday', data = df_weekday)
def hour(d):
list_key = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]
list_value = []
for one in list_key:
if one in d.keys():
list_value.append(d[one])
else:
list_value.append(0)
df = pd.DataFrame(index = list_key, data = {'hour': list_value}).reset_index()
return df
df_hour = hour(dict(df['hour'].value_counts()))
df_hour
"""
Explanation: Bar chart of post counts by weekday
It looks like fewer posts go out on Sundays.
End of explanation
"""
ax = sns.barplot(x='index', y='hour', data = df_hour)
df_status_type = df['status_type'].value_counts().to_frame(name='status_type')
df_status_type
"""
Explanation: Bar chart of post counts by hour of day
Looking at this curve, being the page admin is tough: posts still go out in the evening, after midnight, and early in the morning.
End of explanation
"""
sns.barplot(x='index', y='status_type', data = df_status_type.reset_index())
"""
Explanation: Bar chart of post counts by post type
Photos are the favorite post format, followed by shared links and videos.
End of explanation
"""
sns.stripplot(x="status_type", y="num_reactions", data=df, jitter=True)
"""
Explanation: Scatter plot of reactions by post type
Each point shows one post's type and how many reactions it received; photo posts are densest in the 0-50000 reaction range.
End of explanation
"""
sns.stripplot(x="weekday", y="num_reactions", data=df, jitter=True)
"""
Explanation: Scatter plot of reactions by weekday
Each point shows which weekday a post went out and how many reactions it received.
End of explanation
"""
sns.stripplot(x="hour", y="num_reactions", data=df, jitter=True)
"""
Explanation: Scatter plot of reactions by hour of day
Each point shows which hour a post went out and how many reactions it received.
End of explanation
"""
g = sns.FacetGrid(df, col="status_type")
g.map(plt.hist, "num_reactions")
"""
Explanation: Histograms of reactions for each post type
End of explanation
"""
df_reaction = df[['num_likes', 'num_loves', 'num_wows', 'num_hahas', 'num_sads', 'num_angrys']]
colormap = plt.cm.viridis
plt.title('Pearson Correlation of Features', y=1.05, size=15)
sns.heatmap(df_reaction.astype(float).corr(),linewidths=0.1,vmax=1.0, square=True, cmap=colormap, linecolor='white', annot=True)
"""
Explanation: If we want to examine the relationships between the different reaction types we need the Pearson correlation, which measures the strength of the linear relationship between two variables.
From this heatmap we can see that the correlations between the reaction types are all low.
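As a reminder of what the numbers in the heatmap mean, the Pearson correlation can be computed by hand and checked against NumPy. This is a standalone sketch with made-up data, not the notebook's dataset:

```python
import numpy as np

# Pearson r = cov(x, y) / (std(x) * std(y))
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])  # roughly 2*x, so r should be near 1

r_manual = np.cov(x, y, bias=True)[0, 1] / (x.std() * y.std())
r_numpy = np.corrcoef(x, y)[0, 1]
print(r_manual, r_numpy)
```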
End of explanation
"""
df_tmp = df[['num_reactions', 'num_comments', 'num_shares']]
colormap = plt.cm.viridis
plt.title('Pearson Correlation of Features', y=1.05, size=15)
sns.heatmap(df_tmp.astype(float).corr(),linewidths=0.1,vmax=1.0, square=True, cmap=colormap, linecolor='white', annot=True)
"""
Explanation: Shares and comments are more strongly correlated: the number of people commenting correlates more with the number sharing than with the number reacting.
End of explanation
"""
import jieba
import jieba.analyse
import operator
from wordcloud import WordCloud
# the traditional-Chinese dictionary ships with the jieba package
jieba.set_dictionary('/home/wy/anaconda3/envs/python3/lib/python3.6/site-packages/jieba/extra_dict/dict.txt.big')
"""
Explanation: Analyzing the text of the posts
We need jieba!
jieba is a powerful Chinese word-segmentation library that helps you analyze whatever text you are interested in.
wordcloud is a package for drawing word clouds.
End of explanation
"""
list(df['status_message'])[99]
for one in jieba.cut(list(df['status_message'])[99]):
print (one)
jieba.analyse.extract_tags(list(df['status_message'])[99], topK=120)
"""
Explanation: First, a quick introduction to using jieba.
Ordinary segmentation calls the jieba.cut API, but this article uses jieba.analyse.extract_tags (TF-IDF based keyword extraction) instead.
The reason is that TF-IDF scores how important a word is to a document within a collection or corpus, which lets us filter out the unimportant words. See the example below:
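To make the TF-IDF idea concrete, here is a by-hand version of the score. This is a toy sketch with made-up English tokens; jieba's actual implementation differs in details such as IDF smoothing:

```python
import math

# tf-idf(term, doc) = term frequency in doc * log(N / number of docs containing term)
docs = [["apple", "news", "today"],
        ["apple", "daily", "news"],
        ["weather", "today"]]

def tf_idf(term, doc, docs):
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in docs if term in d)
    idf = math.log(len(docs) / df)
    return tf * idf

score_common = tf_idf("news", docs[0], docs)    # in 2 of 3 docs -> low idf
score_rare = tf_idf("weather", docs[2], docs)   # in 1 of 3 docs -> high idf
print(score_common, score_rare)
```

A rare word outscores a common one at similar term frequency, which is how TF-IDF filters down boilerplate that appears everywhere.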
End of explanation
"""
def jieba_extract(message_list):
word_count = {}
for message in message_list:
# keyword extraction can fail on some messages; collecting the failing ones first helps to see what is going on
seg_list = jieba.analyse.extract_tags(message, topK=120)
for seg in seg_list:
if not seg in word_count:
word_count[seg] = 1
else:
word_count[seg] += 1
sorted_word_count = sorted(word_count.items(), key=operator.itemgetter(1))
sorted_word_count.reverse()
return sorted_word_count
sorted_word_count = jieba_extract(list(df['status_message']))
"""
Explanation: So we can run every post's text through jieba's keyword extraction to pull out the important words,
then count how often each word appears and draw them as a word cloud with WordCloud.
End of explanation
"""
print (sorted_word_count[:10])
"""
Explanation: Let's look at the ten words that appear most often in the posts
End of explanation
"""
tpath = '/home/wy/font/NotoSansCJKtc-Black.otf'
wordcloud = WordCloud(max_font_size=120, relative_scaling=.1, width=900, height=600, font_path=tpath).fit_words(sorted_word_count)
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
"""
Explanation: A word cloud appears! There is actually a lot you can tweak in a word cloud; you can even fill an image shape with words. See the wordcloud docs.
http, com, and www show up because every post contains a link, so the link URLs get tokenized as well.
End of explanation
"""
tpath = '/home/wy/font/NotoSansCJKtc-Black.otf'
wordcloud = WordCloud(max_font_size=120, relative_scaling=.1, width=900, height=600, font_path=tpath).fit_words(sorted_word_count[30:])
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
"""
Explanation: So we drop the 30 most frequent words and draw the word cloud again:
sorted_word_count[30:]
End of explanation
"""
# load the comment csv
c_path = 'comment/'+page_id+'_comment.csv'
c_df = pd.read_csv(c_path)
c_df.head()
c_df = c_df[c_df['comment_message'].notnull()].reindex()
sorted_comment_message = jieba_extract(list(c_df['comment_message']))
print (sorted_comment_message[:10])
tpath = '/home/wy/font/NotoSansCJKtc-Black.otf'
wordcloud = WordCloud(max_font_size=120, relative_scaling=.1, width=900, height=600, font_path=tpath).fit_words(sorted_comment_message)
plt.figure()
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
c_df = c_df[c_df['comment_author'].notnull()].reindex()
def word_count(data_list):
d = {}
for one in data_list:
if one not in d:
d[one] = 1
else:
d[one] += 1
return d
d = word_count(list(c_df['comment_author']))
comment_authors = [(k, d[k]) for k in sorted(d, key=d.get, reverse=True)]
print (comment_authors[:10])
tpath = '/home/wy/font/NotoSansCJKtc-Black.otf'
wordcloud = WordCloud(max_font_size=120, relative_scaling=.1, width=900, height=600, font_path=tpath).fit_words(comment_authors)
plt.figure()
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
"""
Explanation: comments
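Incidentally, the hand-rolled word_count above can be replaced by the standard library's collections.Counter, which also gives a sorted top-N for free. A sketch with made-up author names:

```python
from collections import Counter

authors = ["amy", "bob", "amy", "cat", "amy", "bob"]
counts = Counter(authors)          # same tally as word_count(authors)
top = counts.most_common(2)        # highest counts first
print(top)
```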
End of explanation
"""
import xlsxwriter
"""
Explanation: Exporting an Excel report
Having done all that analysis above, can we export it all into an Excel report?
For writing Excel files from Python, xlsxwriter is the tool!
End of explanation
"""
df_num_reactions = df['num_reactions'].describe().to_frame(name='reactions')
df_num_reactions
df_num_comments = df['num_comments'].describe().to_frame(name='comments')
df_num_comments
df_num_shares = df['num_shares'].describe().to_frame(name='shares')
df_num_shares
"""
Explanation: First we use pandas describe (generate various summary statistics) to get each column's mean, median, standard deviation, quartiles, and so on.
End of explanation
"""
# set the output path
excel_path = 'excel/'+page_id+'_analysis.xlsx'
writer = pd.ExcelWriter(excel_path, engine='xlsxwriter')
# write the DataFrames into the xlsx
df_num_reactions.to_excel(writer, sheet_name=page_id, startcol=0, startrow=0)
df_num_comments.to_excel(writer, sheet_name=page_id, startcol=3, startrow=0)
df_num_shares.to_excel(writer, sheet_name=page_id, startcol=6, startrow=0)
delta_datetime_df.to_excel(writer, sheet_name=page_id, startcol=9, startrow=0)
df_status_type.to_excel(writer, sheet_name=page_id, startcol=0, startrow=11)
df_weekday.set_index('index').to_excel(writer, sheet_name=page_id, startcol=0, startrow=25)
df_hour.set_index('index').to_excel(writer, sheet_name=page_id, startcol=0, startrow=39)
# draw Excel's native bar charts
workbook = writer.book
# bar chart of post counts by type
chart1 = workbook.add_chart({'type': 'column'})
chart1.add_series({
'categories': '='+page_id+'!$A$13:$A$18',
'values': '='+page_id+'!$B$13:$B$18',
})
chart1.set_title ({'name': 'Posts by type'})
chart1.set_x_axis({'name': 'status_type'})
chart1.set_y_axis({'name': 'count'})
worksheet = writer.sheets[page_id]
worksheet.insert_chart('D12', chart1)
# bar chart of post counts by weekday
chart2 = workbook.add_chart({'type': 'column'})
chart2.add_series({
'categories': '='+page_id+'!$A$27:$A$33',
'values': '='+page_id+'!$B$27:$B$33',
})
chart2.set_title ({'name': 'Posts by weekday'})
chart2.set_x_axis({'name': 'weekday'})
chart2.set_y_axis({'name': 'count'})
worksheet = writer.sheets[page_id]
worksheet.insert_chart('D26', chart2)
# bar chart of post counts by hour of day
chart3 = workbook.add_chart({'type': 'column'})
chart3.add_series({
'categories': '='+page_id+'!$A$41:$A$64',
'values': '='+page_id+'!$B$41:$B$64',
})
chart3.set_title ({'name': 'Posts by hour of day'})
chart3.set_x_axis({'name': 'hour'})
chart3.set_y_axis({'name': 'count'})
worksheet = writer.sheets[page_id]
worksheet.insert_chart('D40', chart3)
# demo of inserting an image: after drawing the figure above, save it first, then insert it into the xlsx
df.plot(x='datetime', y=['num_likes', 'num_loves', 'num_wows', 'num_hahas', 'num_sads', 'num_angrys'])
plt.savefig('image/image1.png')
worksheet.insert_image('L12', 'image/image1.png')
"""
Explanation: Then write these DataFrames into the xlsx.
Official docs : Working with Python Pandas and XlsxWriter
Positioning with xlsxwriter
When writing to Excel you need to specify where to write :
First, writing a DataFrame into the xlsx
df_num_reactions.to_excel(writer, sheet_name=page_id, startcol=0, startrow=0)
startcol, startrow : a coordinate-style position
sheet_name : the worksheet
Second, drawing a native chart :
chart1.add_series({
'categories': '='+page_id+'!$A$13:$A$18',
'values': '='+page_id+'!$B$13:$B$18',
})
categories : the names come from A13-A18
values : the values come from B13-B18
Third, inserting an image :
worksheet.insert_image('L12', 'image/image1.png')
'L12' : the insertion position
'image/image1.png' : the image path
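The cell addresses used above ('D12', '$A$13:$A$18') are in A1 notation. Here is a small helper for converting zero-based (row, col) indices, in case you prefer computing positions. This is a sketch; xlsxwriter also ships a utility, xlsxwriter.utility.xl_rowcol_to_cell, for the same job:

```python
def rowcol_to_a1(row, col):
    # zero-based (row, col) -> A1-style address, e.g. (11, 3) -> 'D12'
    letters = ""
    col += 1
    while col:
        col, rem = divmod(col - 1, 26)
        letters = chr(ord("A") + rem) + letters
    return "{}{}".format(letters, row + 1)

cell = rowcol_to_a1(11, 3)  # the position used for chart1 above
print(cell)
```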
End of explanation
"""
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'fgoals-f3-l', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CAS
Source ID: FGOALS-F3-L
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:44
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and any possible conflicts with parameterization-level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step of the sea ice model thermodynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step of the sea ice model dynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
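As a sketch of how one of these TODO cells gets completed: the commented header says `DOC.set_value("value")`, so filling this ENUM property just means passing one of the valid choices listed above. The `_Doc` class below is a minimal stand-in for the notebook's real `DOC` object (whose implementation is not shown here), used only to make the fill-in pattern concrete:

```python
# Minimal stand-in for the notebook's DOC object; the real pyesdoc object has
# more behaviour (e.g. validation against the controlled vocabulary).
class _Doc:
    def __init__(self):
        self.values = []
    def set_id(self, identifier):
        self.identifier = identifier
    def set_value(self, value):
        self.values.append(value)

DOC = _Doc()
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
DOC.set_value("Multi-layers")  # one of the valid choices listed above
print(DOC.values)  # ['Multi-layers']
```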
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but an ITD is assumed and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the ice deformation formulation (rheology)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
quasars100/Resonance_testing_scripts | python_tutorials/Checkpoints.ipynb | gpl-3.0 | import rebound
rebound.add(m=1.)
rebound.add(m=1e-6, a=1.)
rebound.add(a=2.)
rebound.save("checkpoint.bin")
"""
Explanation: Checkpoints
You can easily save and load particle positions to a binary file with REBOUND. The binary file includes the masses, positions and velocities of all particles, as well as the current simulation time (but nothing else!).
Let's add three particles to REBOUND and save them to a file.
End of explanation
"""
rebound.reset()
rebound.load("checkpoint.bin")
rebound.status()
"""
Explanation: The binary files are small in size and store every floating point number exactly, so you don't have to worry about efficiency or losing precision. You can make lots of checkpoints if you want!
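As an aside, the "no precision lost" point is a general property of binary serialization rather than anything REBOUND-specific: a double written out as its raw 8 bytes reads back bit-for-bit identical. A standalone sketch (the file name is arbitrary):

```python
import struct

x = 0.1 + 0.2  # a double with no short exact decimal representation
with open("value.bin", "wb") as fh:
    fh.write(struct.pack("d", x))         # the 8 raw bytes of the IEEE-754 double
with open("value.bin", "rb") as fh:
    (y,) = struct.unpack("d", fh.read(8))
print(x == y)  # True: the round-trip is exact
```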
Let's reset REBOUND (that deletes the particles from memory) and then read the binary file we just saved.
End of explanation
"""
|
microsoft/dowhy | docs/source/example_notebooks/dowhy_mediation_analysis.ipynb | mit | import numpy as np
import pandas as pd
from dowhy import CausalModel
import dowhy.datasets
# Warnings and logging
import warnings
warnings.filterwarnings('ignore')
"""
Explanation: Mediation analysis with DoWhy: Direct and Indirect Effects
End of explanation
"""
# Creating a dataset with a single confounder and a single mediator (num_frontdoor_variables)
data = dowhy.datasets.linear_dataset(10, num_common_causes=1, num_samples=10000,
num_instruments=0, num_effect_modifiers=0,
num_treatments=1,
num_frontdoor_variables=1,
treatment_is_binary=False,
outcome_is_binary=False)
df = data['df']
print(df.head())
"""
Explanation: Creating a dataset
End of explanation
"""
model = CausalModel(df,
data["treatment_name"],data["outcome_name"],
data["gml_graph"],
missing_nodes_as_confounders=True)
model.view_model()
from IPython.display import Image, display
display(Image(filename="causal_model.png"))
"""
Explanation: Step 1: Modeling the causal mechanism
We create a dataset following a causal graph based on the frontdoor criterion. That is, there is no direct effect of the treatment on outcome; all effect is mediated through the frontdoor variable FD0.
End of explanation
"""
# Natural direct effect (nde)
identified_estimand_nde = model.identify_effect(estimand_type="nonparametric-nde",
proceed_when_unidentifiable=True)
print(identified_estimand_nde)
# Natural indirect effect (nie)
identified_estimand_nie = model.identify_effect(estimand_type="nonparametric-nie",
proceed_when_unidentifiable=True)
print(identified_estimand_nie)
"""
Explanation: Step 2: Identifying the natural direct and indirect effects
We use the estimand_type argument to specify that the target estimand should be for a natural direct effect or the natural indirect effect. For definitions, see Interpretation and Identification of Causal Mediation by Judea Pearl.
Natural direct effect: Effect due to the path v0->y
Natural indirect effect: Effect due to the path v0->FD0->y (mediated by FD0).
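In the linear model simulated here these quantities have a simple closed form: the NIE is the product of the two path coefficients (v0 -> FD0 and FD0 -> y), and the total effect decomposes as NDE + NIE. The following standalone sketch mimics the two-stage idea with plain NumPy — the coefficient names and values are illustrative assumptions, not taken from the DoWhy dataset:

```python
import numpy as np

# Linear mediation model: m = a*t + noise, y = b*m + c*t + noise.
# NIE = a*b (mediated path), NDE = c (direct path); here c = 0, mirroring the
# notebook's simulated dataset, which has no direct effect.
rng = np.random.default_rng(0)
n = 100_000
a, b, c = 2.0, 3.0, 0.0
t = rng.normal(size=n)
m = a * t + 0.1 * rng.normal(size=n)
y = b * m + c * t + 0.1 * rng.normal(size=n)

a_hat = np.polyfit(t, m, 1)[0]  # first stage: mediator regressed on treatment
b_hat = np.polyfit(m, y, 1)[0]  # second stage: outcome regressed on mediator
print(a_hat * b_hat)            # close to a*b = 6.0, the natural indirect effect
```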
End of explanation
"""
import dowhy.causal_estimators.linear_regression_estimator
causal_estimate_nie = model.estimate_effect(identified_estimand_nie,
                                        method_name="mediation.two_stage_regression",
                                        confidence_intervals=False,
                                        test_significance=False,
                                        method_params = {
                                            'first_stage_model': dowhy.causal_estimators.linear_regression_estimator.LinearRegressionEstimator,
                                            'second_stage_model': dowhy.causal_estimators.linear_regression_estimator.LinearRegressionEstimator
                                        }
                                       )
print(causal_estimate_nie)
"""
Explanation: Step 3: Estimation of the effect
Currently only two-stage linear regression is supported for estimation. We plan to add a non-parametric Monte Carlo method soon as described in Imai, Keele and Yamamoto (2010).
Natural Indirect Effect
The estimator converts the mediation effect estimation to a series of backdoor effect estimations.
1. The first-stage model estimates the effect from treatment (v0) to the mediator (FD0).
2. The second-stage model estimates the effect from mediator (FD0) to the outcome (Y).
End of explanation
"""
print(causal_estimate_nie.value, data["ate"])
"""
Explanation: Note that the value equals the true value of the natural indirect effect (up to random noise).
End of explanation
"""
causal_estimate_nde = model.estimate_effect(identified_estimand_nde,
                                        method_name="mediation.two_stage_regression",
                                        confidence_intervals=False,
                                        test_significance=False,
                                        method_params = {
                                            'first_stage_model': dowhy.causal_estimators.linear_regression_estimator.LinearRegressionEstimator,
                                            'second_stage_model': dowhy.causal_estimators.linear_regression_estimator.LinearRegressionEstimator
                                        }
                                       )
print(causal_estimate_nde)
"""
Explanation: The parameter is called ate because in the simulated dataset, the direct effect is set to be zero.
Natural Direct Effect
Now let us check whether the direct effect estimator returns the (correct) estimate of zero.
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/exams/interro_rapide_20_minutes_2014_12.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 1A.e - Corrected answers for the written quiz of November 14, 2014
dictionaries
End of explanation
"""
def make_squares(n):
squares = [i**2 for i in range(n)]
"""
Explanation: Exercise 1
Q1
The following code produces an error. Which one?
End of explanation
"""
def make_squares(n):
squares = [i**2 for i in range(n)]
print ( make_squares(2) )
"""
Explanation: Since there is no return statement, the function always returns None, regardless of what it computes.
End of explanation
"""
s = 1
a = 0
for i in range(4):
a += s
s += 2
a
"""
Explanation: Q2
What is the value of a?
End of explanation
"""
s = 1
a = 0
for i in range(4):
print(a,s)
a += s
s += 2
a
"""
Explanation: If we display the intermediate results:
End of explanation
"""
d = {i:chr(i+97) for i in range(10)}
x = d[4]
x
"""
Explanation: Q3
Recall that ord('a')=97. What is the value of x?
End of explanation
"""
notes = { "Alice": 17, "Bob": 18, "Jean-Ma": 17 }
notes['Claire'] = 18
def mystere(d):
a = 0
b = []
for k,v in d.items():
if v >= a:
a = v
b.append(k)
return (b,a)
print(mystere(notes))
notes
"""
Explanation: Simply replace i with 4: x is chr(97+4), which moves 4 letters forward in the alphabet, giving e.
Q4
What does the following program do?
End of explanation
"""
notes = { "Alice": 17, "Bob": 18, "Jean-Ma": 17 }
notes['Claire'] = 18
def mystere(d):
a = 0
b = []
for k,v in d.items():
if v == a:
b.append(k)
elif v > a:
a = v
b = [ k ]
return (b,a)
print(mystere(notes))
"""
Explanation: The program starts by adding the key Claire to the dictionary. The variable a keeps track of the largest numeric value. As written, the program's result is rather unpredictable, since it depends on the order in which the elements are traversed. I think the function should collect, in a list, all the first names associated with this maximum value, which it would do if it were written like this:
End of explanation
"""
def f(n):
while n != 1:
if n%2 == 0:
n = n/2
else:
n = 3*n + 1
return n
f(3)
f(4)
"""
Explanation: Q5
Que renvoie la fonction suivante en fonction de n ?
End of explanation
"""
|
srikarpv/CV_PA1 | PA1-Q3-1.ipynb | mit | import math
from scipy import ndimage
from PIL import Image
from numpy import *
from matplotlib import pyplot as plt
from pylab import *
import cv2
import time
# input image
# x vertex of corner
# y vertex of corner
def plott (I,x,y):
plt.figure()
plt.imshow(I,cmap = cm.gray) # plots the image in greyscale
plot(x,y,'r.') # mark the corners in red
plt.axis([0,len(I[0,:]),len(I[:,0]),0])
return show()
# gaussian filter func
def gfilter (x,y,s):
gfilter = (1/(math.sqrt(2*(math.pi))*s))*exp(-((x**2) + (y**2))/2/s**2)
return gfilter
#gaussian filter first derivative func
def gfilter1 (x,y,s,z):
if(z =='x'):
gfilter1 = gfilter(x,y,s)*(-x/(s**2))
elif(z=='y'):
gfilter1 = gfilter(x,y,s)*(-y/(s**2))
return gfilter1
#gaussian filter second derivative func
def gfilter2 (x,y,s,z):
if(z =='x'):
gfilter2 = gfilter(x,y,s)*(((x**2)/(s**2))-1)/s**2
    elif(z=='y'):
        gfilter2 = gfilter(x,y,s)*(((y**2)/(s**2))-1)/s**2
return gfilter2
"""
Explanation: Question 3: Corner Detection [2 pts]<br/>
In this question, you will implement three different versions of the corner detection algorithm for three given input
images (input1.png, input2.png, and input3.png).<br/>
[0.5 pts] Implement a corner detection algorithm based on Hessian matrix (H) computation. The Hessian matrix for a given image I at a pixel p is the matrix of second-order partial derivatives, H = [[Ixx, Ixy], [Ixy, Iyy]], evaluated at p; eigen-decomposition (spectral decomposition) of this matrix yields two eigenvalues λ1 and λ2. If both
λ1 and λ2 are large, we are at a corner. Provide the detected corners in the resulting output images in color.
End of explanation
"""
# inputt -the input image name
# s -Standard Deviation value
# t -The threshold of eigen value to be considered as edge
def hessian(inputt,s,t):
start = time.time()
I = array(Image.open(inputt).convert('L')) # reads the input image into I
    size = 2 # half-width of the 1D filter window (was undefined below)
    G = []
    for i in range(-size,size+1):
        G.append(gfilter(i,0,s)) # equating y to 0 for a 1D matrix
Gx = [] #gaussian in x direction
for i in range(-size,size+1):
Gx.append(gfilter1(i,0,s,'x'))
Gy = [] #gaussian in y direction
for i in range(-size,size+1):
Gy.append([gfilter1(0,i,s,'y')])
Gx2 = []
for i in range(-size,size+1):
Gx2.append(gfilter2(i,0,s,'x'))
Gy2 = []
for i in range(-size,size+1):
Gy2.append([gfilter2(0,i,s,'y')])
Ix = []
for i in range(len(I[:,0])):
Ix.extend([convolve(I[i,:],Gx)]) # I*G in x direction
Ix = array(matrix(Ix))
Iy = []
for i in range(len(I[0,:])):
Iy.extend([convolve(I[:,i],Gx)]) # I*G in y direction
Iy = array(matrix(transpose(Iy)))
Ixx = []
for i in range(len(Ix[:,0])):
Ixx.extend([convolve(Ix[i,:],Gx2)]) # Ix * Gx in x direction
Ixx = array(matrix(Ixx))
    Iyy = []
    for i in range(len(Iy[0,:])):
        Iyy.extend([convolve(Iy[:,i],Gx2)]) # Iy * Gy2 in y direction (Gy2 values equal Gx2 by symmetry)
Iyy = array(matrix(transpose(Iyy)))
    Ixy = []
    for i in range(len(Iy[0,:])):
        Ixy.extend([convolve(Ix[:,i],Gx)]) # d/dy of Ix gives the mixed derivative Ixy
    Ixy = array(matrix(transpose(Ixy)))
#store values in x,y to plot the corners
x = [] # array x[] stores x coordinates of the corner
y = [] # array y[] stores y coordinates of the corner
for i in range(len(I[:,0])):
for j in range(len(I[0,:])):
H1 = linalg.eigvals(([Ixx[i,j],Ixy[i,j]],[Ixy[i,j],Iyy[i,j]]))
if((abs(H1[0])>t) & (abs(H1[1])>t)): # if corner
y.append(i-2) # appending y index to mark corners
x.append(j-2) # appending y index to mark corners
plott(I,x,y)
return time.time() - start
s = 1.5 #input standard deviation aka sigma value
inp1 = hessian('/home/srikar/CVPA1/CVV/input1.png',s,3.95695) # eigenvalue threshold 3.95695
inp2 = hessian('/home/srikar/CVPA1/CVV/input2.png',s,5)
inp3 = hessian('/home/srikar/CVPA1/CVV/input3.png',s,5)
print ('The time for execution is:\nInput Image 1: %.2fseconds\nInput Image 2: %.2fseconds\nInput Image 3: %.2fseconds'%(inp1,inp2,inp3))
"""
Explanation: Hessian function for marking corners
End of explanation
"""
|
chengsoonong/didbits | Estimation/SVM_rbf_gamma.ipynb | apache-2.0 | import itertools
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn import datasets
from sklearn.model_selection import train_test_split
%matplotlib inline
"""
Explanation: Picking the gamma value for a SVM with a Radial Basis Function kernel
End of explanation
"""
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features.
y = iris.target
def make_meshgrid(x, y, h=.02):
"""Create a mesh of points to plot in
Parameters
----------
x: data to base x-axis meshgrid on
y: data to base y-axis meshgrid on
h: stepsize for meshgrid, optional
Returns
-------
xx, yy : ndarray
"""
x_min, x_max = x.min() - 1, x.max() + 1
y_min, y_max = y.min() - 1, y.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return xx, yy
def plot_contours(ax, clf, xx, yy, **params):
"""Plot the decision boundaries for a classifier.
Parameters
----------
ax: matplotlib axes object
clf: a classifier
xx: meshgrid ndarray
yy: meshgrid ndarray
params: dictionary of params to pass to contourf, optional
"""
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
out = ax.contourf(xx, yy, Z, **params)
return out
def plot_data(ax, X0, X1, y, xx, yy, title):
"""Plot the data
Parameters
----------
ax: matplotlib axes object
X0: first (horizontal) dimension
X1: second (vertical) dimension
y: label
xx: meshgrid ndarray
yy: meshgrid ndarray
title: text to display above figure
"""
ax.scatter(X0, X1, c=y, cmap=plt.cm.bwr, s=50, edgecolors='k')
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xlabel('first feature')
ax.set_ylabel('second feature')
ax.set_xticks(())
ax.set_yticks(())
ax.set_title(title)
"""
Explanation: When we train a support vector machine (SVM) with a radial basis function (rbf) kernel, we pick two main parameters.
The C parameter, common to all kernels, determines how much we maximise the SVM's margin vs correctly classifying more of the training data. This is a regularising parameter.
The gamma parameter appears as follows.
Using a rbf kernel, we measure similarity between datapoints by $k(x, y) = \exp(-\gamma||x-y||^2)$.
If $\gamma||x-y||^2$ is very small -- i.e., if $\frac{1}{\gamma}$ is much larger than $||x-y||^2$ -- $k(x, y)$ will be close to 1.
So a small $\gamma$ value, relative to the distances between datapoints, means the influence of a single datapoint reaches over most of the dataset.
If $\gamma||x-y||^2$ is very large, then $k(x, y)$ will be close to 0.
That is, for a large $\gamma$ value, a given datapoint has influence only on datapoints very close to it.
See https://scikit-learn.org/stable/auto_examples/svm/plot_rbf_parameters.html for an exploration of how these parameters influence the decision boundary. We use code from https://scikit-learn.org/stable/auto_examples/svm/plot_iris.html for plotting.
First, we load the dataset. We will use the first two dimensions of the iris dataset.
End of explanation
"""
pairs = itertools.combinations(X, r=2)
distances = [np.linalg.norm(a-b)**2 for a,b in pairs]
distances.sort()
gamma_values = [distances[int(len(distances)*frac)] for frac in [0.1, 0.3, 0.5, 0.7, 0.9]]
print(gamma_values)
"""
Explanation: Picking a good gamma value
To pick a good gamma value, we would use cross validation. To do cross validation, we need to know what range sensible gamma values lie in.
To find this range, we use a heuristic.
We want the influence of each datapoint to extend over some but not all of the dataset -- that is, we want gamma to be some value such that $\gamma||x-y||^2$ is neither very large nor very small.
We thus want $\frac{1}{\gamma}$ to be of similar magnitude to "typical" values of $||x-y||^2$.
End of explanation
"""
# Split the dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, stratify=y)
gamma_values.extend([0.001, 0.01, 40, 400])
# we create an instance of SVM and fit our data. We do not scale our
# data since we want to plot the support vectors
C = 1.0 # SVM regularization parameter
fig, sub = plt.subplots(3, 3, figsize=(21, 15))
plt.subplots_adjust(wspace=0.2, hspace=0.2)
X0, X1 = X[:, 0], X[:, 1]
xx, yy = make_meshgrid(X0, X1)
for gamma, ax in zip(gamma_values, sub.flatten()[:len(gamma_values)]):
clf = svm.SVC(kernel='rbf', gamma=gamma, C=C)
clf.fit(X_train, y_train)
plot_contours(ax, clf, xx, yy)
plot_data(ax, X0, X1, y, xx, yy, 'gamma: %.3f\n Score: %.4f' % (gamma, clf.score(X_test, y_test)))
"""
Explanation: We see that the distances between points in the dataset are largely between 0.13 and 4.04.
We plot the SVM decision boundaries with these gamma values.
We also plot the SVM decision boundaries with gamma values of 0.001, 0.01, 40 and 400, to show values outside this range.
We split the dataset to use a third for validation.
Looking at the scores on the validation set, we get a better model for gamma values in the range between 0.13 and 4.04
End of explanation
"""
|
UoS-SNe/LSST_tools | opsimout/notebooks/OpSim_basics_notebook.ipynb | gpl-3.0 | from __future__ import print_function ## Force python3-like printing
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import time
import sqlite3
from sqlalchemy import create_engine
opsimdbpath = os.environ.get('OPSIMDBPATH')
print(opsimdbpath)
engine = create_engine('sqlite:///' + opsimdbpath)
conn = sqlite3.connect(opsimdbpath)
cursor = conn.cursor()
query = 'SELECT COUNT(*) FROM Summary'
cursor.execute(query)
cursor.fetchall()
# opsimdf = pd.read_sql_query('SELECT * FROM Summary WHERE night < 1000', engine)
## Look at the first year - to get a feel
# opsimdf = pd.read_sql_query('SELECT * FROM Summary WHERE night < 366', engine)
## Just one night
then = time.time()
opsimdf = pd.read_sql_query('SELECT * FROM Summary WHERE night = 1000', engine)
now = time.time()
print (now - then)
opsimdf.head()
# Definitions of the columns are
opsimdf[['obsHistID', 'filter', 'night', 'expMJD',
         'fieldID', 'fieldRA', 'fieldDec', 'ditheredRA', 'ditheredDec',
         'propID', 'fiveSigmaDepth']].head()
opsimdf.propID.unique()
ddf = opsimdf.query('propID == 56') ## 56 is DDF
print(len(ddf))
filters = np.unique(ddf["filter"])
print(filters)
"""
Explanation: OpSim Basics Notebook
opsim : https://www.lsst.org/scientists/simulations/opsim
The Operations Simulator (OpSim) is an application that simulates the field selection and image acquisition process of the LSST over the 10-year life of the planned survey. Each visit or image of a field in a particular filter is selected by combining science program requirements, the mechanics of the telescope design, and the modelled environmental conditions. The output of the simulator is a detailed record of the telescope movements and a complete description of the observing conditions as well as the characteristics of each image. OpSim is capable of balancing cadence goals from multiple science programs, and attempts to minimize time spent slewing as it carries out these goals. LSST operations can be simulated using realistic seeing distributions, historical weather data, scheduled engineering downtime and current telescope and camera parameters.
The Simulator has a sophisticated model of the telescope and dome to properly constrain potential observing cadences. This model has also proven useful for investigating various engineering issues ranging from sizing of slew motors, to design of cryogen lines to the camera. The LSST Project developed the Operations Simulator to verify that the LSST Science Requirements could be met with the telescope design. It was used to demonstrate the capability of the LSST to deliver a 26,000 square degree survey probing the time domain and with 18,000 square degrees for the Wide-Fast-Deep survey to the design specifications of the Science Requirements Document, while effectively surveying for NEOs over the same area. Currently, the Operations Simulation Team is investigating how to optimally observe the sky to obtain a single 10-year dataset that can be used to accomplish multiple science goals.
Outputs
|Column Name | Type | Units | Description |
|------------|-------|------- |------------------------------------------------------------|
|obsHistID |integer|- |Unique visit identifier (same as ObsHistory.obsHistID). |
|sessionID |integer|- |Session identifier which is unique for simulated surveys created on a particular machine or hostname. Simulated surveys are uniquely named using the form hostname_sessionID.|
|propID |integer|- |Unique (on each machine) identifier for every proposal (observing mode) specified in a simulated survey. Note that a single visit can satisfy multiple proposals, and so duplicate rows (except for the propID) can exist in the Summary table (same as Proposal.propID).|
|fieldID |integer|- |Unique field (or target on the sky) identifier (same as Field.fieldID). OpSim uses a set of 5292 fields (targets) obtained from a fixed tessellation of the sky.|
|fieldRA |real |radians |Right Ascension (J2000) of the field center for this visit (same as Field.fieldRA).|
|fieldDec |real |radians |Declination (J2000) of the field center for this visit (same as Field.fieldDec).|
|filter |text |- |Filter used during the visit; one of u, g, r, i, z, or y.|
|expDate |integer|seconds |Time of the visit relative to 0 sec at the start of a simulated survey.
|expMJD |real |days |Modified Julian Date at the start of a visit.|
|night |integer|none |The integer number of nights since the start (expDate = 0 sec) of the survey. The first night is night = 0.
|visitTime |real |seconds |Currently, a visit comprises two 15-second exposures and each exposure needs 1 sec for the shutter action and 2 sec for the CCD readout. The second readout is assumed to occur while moving to the next field (see slewTime), so the length of each visit for the WFD observing mode is 34 sec.
|visitExpTime|real |seconds |Total integration time on the sky during a visit, which for current observing modes is 30 sec (see visitTime).|
|finRank |real |- |Target rank among all proposals including all priorities and penalties (generally used for diagnostic purposes).|
|FWHMgeom |real |arcseconds |"Geometrical" full-width at half maximum. The actual width at half the maximum brightness. Use FWHMgeom to represent the FWHM of a double-gaussian representing the physical width of a PSF.|
|FWHMeff |real |arcseconds |"Effective" full-width at half maximum, typically ~15% larger than FWHMgeom. Use FWHMeff to calculate SNR for point sources, using FWHMeff as the FWHM of a single gaussian describing the PSF.|
|transparency|real |- |The value (in 8ths) from the Cloud table closest in time to this visit.|
|airmass |real |- |Airmass at the field center of the visit.
|vSkyBright |real |mag/arcsec2|The sky brightness in the Johnson V band calculated from a Krisciunas and Schaeffer model with a few modifications. This model uses the Moon phase, angular distance between the field and the Moon and the field’s airmass to calculate added brightness to the zero-Moon, zenith sky brightness (e.g. Krisciunas 1997, PASP, 209, 1181; Krisciunas and Schaefer 1991, PASP, 103, 1033; Benn and Ellison 1998, La Palma Technical Note 115).|
|filtSkyBrightness|real|mag/arcsec2|Measurements of the color of the sky as a function of lunar phase are used to correctvSkyBright to the sky brightness in the filter used during this visit.|
|rotSkyPos|real|radians|The orientation of the sky in the focal plane measured as the angle between North on the skyand the "up" direction in the focal plane.
|rotTelPos|real|radians|The physical angle of the rotator with respect to the mount. rotSkyPos = rotTelPos - ParallacticAngle|
|lst|real|radians|Local SiderealTime at the start of the visit.|
|altitude|real|radians|Altitude of the field center at the start of the visit.|
|azimuth|real|radians|Azimuth of the field center at the start of the visit.|
|dist2Moon|real|radians|Distance from the field center to the moon's center on the sky.|
|solarElong|real|degrees|Solar elongation or the angular distance between the field center and the sun (0 - 180 deg).|
|moonRA|real|radians|Right Ascension of the Moon.|
|moonDec|real|radians|Declination of the Moon.|
|moonAlt|real|radians|Altitude of the Moon taking into account the elevation of the site.|
|moonAZ|real|radians|Azimuth of the Moon|
|moonPhase|real|%|Percent illumination of the Moon (0=new, 100=full)|
|sunAlt|real|radians|Altitude of the Sun taking into account the elevation of the site, but with no correction for atmospheric refraction.|
|sunAz|real|radians|Azimuth of the Sun with no correction for atmospheric refraction.|
|phaseAngle|real|-|Intermediate values in the calculation of vSkyBright using the Krisciunas and Schaeffer models.|
|rScatter|real|-|" "|
|mieScatter|real|-|" "|
|moonBright|real|-|" "|
|darkBright|real|-|" "|
|rawSeeing|real|arcseconds|The seeing as taken from the Seeing table which is an ideal seeing at zenith and at 500 nm.|
|wind|real|-|A placeholder for real telemetry.|
|humidity|real|-|A placeholder for real telemetry.|
|slewDist|real|radians|Distance on the sky between the target field center and the field center of the previous visit.|
|slewTime|real|seconds|The time between the end of the second exposure in the previous visit and the beginning of the first exposure in the current visit.|
|fiveSigmaDepth|real|magnitudes|The magnitude of a point source that would be a 5-sigma detection (see Z. Ivezic et al, http://arxiv.org/pdf/0805.2366.pdf (link is external)).|
|ditheredRA|real|radians|The offset from the Right Ascension of the field center representing a "hex-dithered" pattern.|
|ditheredDec|real|radians|The offset from the Declination of the field center representing a "hex-dithered" pattern.|
End of explanation
"""
# `rfc` is not defined anywhere in this notebook; provide stand-in filter
# colours here so the plotting cells below run (hex values are my guesses)
from types import SimpleNamespace
rfc = SimpleNamespace(hex={'u': '#9467bd', 'g': '#2ca02c', 'r': '#d62728',
                           'i': '#ff7f0e', 'z': '#8c564b', 'y': '#7f7f7f'})
fig = plt.figure(figsize=[8, 4])
fig.subplots_adjust(left = 0.09, bottom = 0.13, top = 0.99,
right = 0.99, hspace=0, wspace = 0)
ax1 = fig.add_subplot(111)
histstruct = {}
bins = np.arange(0, 2.0, 0.1)
for i, f in enumerate(filters):
seeing_dist = ddf.query("filter == u'" + f + "'")["rawSeeing"]
histstruct[f] = ax1.hist(seeing_dist, color = rfc.hex[f], histtype = "step",
lw = 2, bins = bins)
"""
Explanation: Looking at outputs
The Raw Seeing is the ideal seeing at zenith
End of explanation
"""
fig = plt.figure(figsize=[8, 4])
fig.subplots_adjust(left = 0.09, bottom = 0.13, top = 0.99,
right = 0.99, hspace=0, wspace = 0)
ax1 = fig.add_subplot(111)
histstruct = {}
bins = np.arange(0, 2.0, 0.1)
for i, f in enumerate(filters):
seeing_dist = ddf.query("filter == u'" + f + "'")["FWHMeff"]
histstruct[f] = ax1.hist(seeing_dist, color = rfc.hex[f], histtype = "step",
lw = 2, bins = bins)
fig = plt.figure(figsize=[8, 4])
fig.subplots_adjust(left = 0.09, bottom = 0.13, top = 0.99,
right = 0.99, hspace=0, wspace = 0)
ax1 = fig.add_subplot(111)
histstruct = {}
bins = np.arange(20, 26, 0.1)
for i, f in enumerate(filters):
depth_dist = ddf.query("filter == u'" + f + "'")["fiveSigmaDepth"]
histstruct[f] = ax1.hist(depth_dist, color = rfc.hex[f], histtype = "step",
lw = 2, bins= bins)
for i, f in enumerate(filters):
print(i, f, len(ddf.query("filter == u'" + f + "'")))
xx = opsimdf.query('fieldID == 316')
xx.head()
"""
Explanation: A better value to use is FWHMeff
End of explanation
"""
xx.query('propID == 54')
"""
Explanation: Some unexpected issues
End of explanation
"""
test = opsimdf.drop_duplicates()
all(test == opsimdf)
test = opsimdf.drop_duplicates(subset='obsHistID')
len(test) == len(opsimdf)
opsimdf.obsHistID.size
opsimdf.obsHistID.unique().size
test.obsHistID.size
"""
Explanation: How to read the table:
obsHistID indexes a pointing ('fieldRA', 'fieldDec', 'ditheredRA', 'ditheredDec')
Additionally a pointing may be assigned a propID to describe what a pointing achieves
The meaning of the propID is given in the Proposal Table. For minion_1016_sqlite.db, the WFD is 54, and the DDF is 56, but this coding might change.
If a pointing achieves the task of succeeding in two different proposals, this is represented by having two records with the same pointing and different propID
End of explanation
"""
|
bpgc-cte/python2017 | Week 4/Lecture_9_Inheritance_Overloading_Overidding.ipynb | mit | class Student():
def __init__(self, name, id_no=None):
self.name = name
self.id_no = id_no if id_no is not None else "Not Allocated"
def __str__(self):
s = self.name
return s + "\n" + "Name : " + self.name + " , ID : " + self.id_no
def __add__(self, a):
return self.name + a.name
def __eq__(self, a):
return self.id_no == a.id_no
A = Student("Sebastin", "2015B4A70370G")
B = Student("Mayank", "2015B4A70370G")
print(A)
print(B)
print(A + B)
print(A.__add__(B))
print(A == B)
"""
Explanation: Object Oriented Programming - Inheritance, Overloading and Overriding
Constructor Overloading
End of explanation
"""
# BITSian class
class BITSian():
def __init__(self, name, id_no, hostel):
self.name = name
self.id_no = id_no
self.hostel = hostel
def get_name(self):
return self.name
def get_id(self):
return self.id_no
def get_hostel(self):
return self.hostel
# IITian class
class IITian():
def __init__(self, name, id_no, hall):
self.name = name
self.id_no = id_no
self.hall = hall
def get_name(self):
return self.name
def get_id(self):
return self.id_no
def get_hall(self):
return self.hall
"""
Explanation: Inheritance
Inheritance is an OOP practice where a certain class(called subclass/child class) inherits the properties namely data and behaviour of another class(called superclass/parent class). Let us see through an example.
End of explanation
"""
class CollegeStudent():
def __init__(self, name, id_no):
self.name = name
self.id_no = id_no
def get_name(self):
return self.name
def get_id(self):
return self.id_no
# BITSian class
class BITSian(CollegeStudent):
def __init__(self, name, id_no, hostel):
self.name = name
self.id_no = id_no
self.hostel = hostel
def get_hostel(self):
return self.hostel
# IITian class
class IITian(CollegeStudent):
def __init__(self, name, id_no, hall):
self.name = name
self.id_no = id_no
self.hall = hall
def get_hall(self):
return self.hall
a = BITSian("Arif", "2015B4A70370G", "AH-5")
b = IITian("Abhishek", "2213civil32K", "Hall-10")
print(a.get_name())
print(b.get_name())
print(a.get_hostel())
print(b.get_hall())
"""
Explanation: While writing code you must always make sure that you keep it as concise as possible and avoid any sort of repetition. Now, we can clearly see the commonalities between the BITSian and IITian classes.
Such a degree of commonality means that there could be a higher level of abstraction to describe both BITSian and IITian to a decent extent.
End of explanation
"""
class Student():
def __init__(self, name):
self.name = name
def get_name(self):
return self.name
class CollegeStudent(Student):
def __init__(self, name, id_no):
super().__init__(name)
self.id_no = id_no
def get_id(self):
return self.id_no
# BITSian class
class BITSian(CollegeStudent):
def __init__(self, name, id_no, hostel):
super().__init__(name, id_no)
self.hostel = hostel
def get_hostel(self):
return self.hostel
# IITian class
class IITian(CollegeStudent):
def __init__(self, name, id_no, hall):
super().__init__(name, id_no)
self.hall = hall
def get_hall(self):
return self.hall
a = BITSian("Arif", "2015B4A70370G", "AH-5")
b = IITian("Abhishek", "2213civil32K", "Hall-10")
print(a.get_name())
print(b.get_name())
print(a.get_hostel())
print(b.get_hall())
"""
Explanation: So, the class definition is as such : class SubClassName(SuperClassName):
Using super()
The main usage of super() in Python is to refer to parent classes without naming them explicitly. This becomes really useful in multiple inheritance, where you won't have to worry about the parent class name.
End of explanation
"""
class Student():
def __init__(self, name):
self.name = name
def get_name(self):
return "Student : " + self.name
class CollegeStudent(Student):
def __init__(self, name, id_no):
super().__init__(name)
self.id_no = id_no
def get_id(self):
return self.id_no
def get_name(self):
return "College Student : " + self.name
class BITSian(CollegeStudent):
def __init__(self, name, id_no, hostel):
super().__init__(name, id_no)
self.hostel = hostel
def get_hostel(self):
return self.hostel
def get_name(self):
return "Gen BITSian --> " + self.name
class IITian(CollegeStudent):
def __init__(self, name, id_no, hall):
super().__init__(name, id_no)
self.hall = hall
def get_hall(self):
return self.hall
def get_name(self):
return "IITian --> " + self.name
a = BITSian("Arif", "2015B4A70370G", "AH-5")
b = IITian("Abhishek", "2213civil32K", "Hall-10")
print(a.get_name())
print(b.get_name())
print()
print(super(BITSian, a).get_name())
print(super(IITian, b).get_name())
print(super(CollegeStudent, a).get_name())
"""
Explanation: You may come across the following constructor call for a superclass on the net : super(self.__class__, self).__init__(). Please do not do this. It can lead to infinite recursion.
Go through this link for more clarification : Understanding Python Super with init methods
Method Overriding
This is a phenomenon where a subclass method with a given name is executed in preference to its superclass method with the same name.
End of explanation
"""
|
dmittov/misc | Kinder Surprise.ipynb | apache-2.0 | def expect_value(k, p):
steps = [k / p / (k - i) for i in range(k)]
return sum(steps)
k = 10
ps = [1., .5, .33, .25, .2, .1]
count = np.vectorize(lambda p: expect_value(k, p), otypes=[float])(ps)
plt.scatter(ps, count)
plt.xlabel('Lion probability')
plt.ylabel('Purchase count')
count
"""
Explanation: The lion cub collection
<img src="http://victoria.tc.ca/~quantum/leo.jpg"/>
Everyone had some, but I have never met anyone who collected them all. It is interesting, though: how much effort would it take? And is it feasible at all?
The model
Suppose we already own part of the collection and I buy another egg. With probability $p$ it contains one of the lion cubs. But with probability $q = 1 - p$ it contains some random assemble-it-yourself toy like this one:
<img src="http://nerdywithchildren.com/wp-content/uploads/2013/08/5875976204_8e2f27a421_z.jpg" width="200px" align="left" margin="50px"/>
That brings us no closer to victory. If we do get a lion cub, I assume every toy in the collection is equally likely. It is clear how to generalize the model to unequal probabilities, but I have no such data, and there would be too many parameters to estimate anything sensibly. In short, I am not expecting a dirty trick like unequal probabilities for the collection items.
Then:
<div border="2px solid black" outline="black solid 5px">
$\mathbb{P}(i, n) = \mathbb{P}(i, n - 1) [q + p \frac{i}{k}] +$
$\mathbb{P}(i - 1, n - 1) [p \frac{k - i + 1}{k}]$,
$\mathbb{P}(0, 1) = 0$,
$\mathbb{P}(0, 0) = 1$
</div>
Here $\mathbb{P}(i, n)$ is the probability of obtaining exactly $0 < i \leq k$ lion cubs in exactly $n > 0$ purchases, and $k$ is the total number of items in the collection.
How many do you need to buy?
The expected value answers this question. But collapsing the sum with the recurrence above into an explicit closed form is problematic, so let's take another route: determine how many eggs must be bought to obtain the next element of the collection. For a fixed $i$ this is a simple Bernoulli experiment: either it works or it doesn't, with constant probability (it only changes at the next step). The expectation of such a quantity is well known: $1/\mathbb{P}$ [if $\mathbb{P} = 1/n$, then on average you need to buy $n$ eggs]. And since the steps are independent, we sum them.
If we already have $i$ lion cubs, the next one arrives with probability $\mathbb{P} = p \frac{k - i}{k}$.
End of explanation
"""
def prob(N, k, p):
q = 1. - p
dynamic_table = np.zeros((N + 1) * (k + 1)).reshape(k + 1, N + 1)
for n in range(N + 1):
dynamic_table[0][n] = q ** n
    for n in range(1, N + 1):
for i in range(1, k + 1):
dynamic_table[i][n] = \
dynamic_table[i][n - 1] * (p * float(i) / k + q) + \
dynamic_table[i - 1][n - 1] * p * float(k - i + 1) / k
return dynamic_table[k]
"""
Explanation: If every egg contained a lion cub, it would take on average 29.29 eggs to complete the collection. But with a lion cub in only every third egg, that is already 88.76 eggs.
So what are my chances?
The expectation is a good reference point, but it does not answer the question well enough. After all, someone who buys 100 eggs may still fail to complete the collection, and then become forever disillusioned with mathematics. Interval estimates are the usual tool in such cases, but the answer "with 95% probability you need between X and Y eggs" is even more puzzling. So how many should you buy?
Clearly you can get very lucky and finish within 10 purchases, or fail to complete the collection even after 10,000 attempts: the probability of such an event is non-zero. So let's plot the number of attempts against the probability of completing the collection. That way you can pick a target probability (say, "I want to complete the collection with 80% probability: how many should I buy?") or fix a budget ("I have $100, what is the probability of completing the collection?"). In other words, let's plot the CDF.
End of explanation
"""
N = 200
k = 10
plt.plot(prob(N, k, 1.), label='p = 1')
plt.plot(prob(N, k, 0.5), label='p = 0.5')
plt.plot(prob(N, k, 0.33), label='p = 0.33')
plt.ylabel('Probability')
plt.xlabel('Kinder surprises')
plt.legend()
"""
Explanation: I have seen the collector's problem for $p = 1$ worked out on Habr, but there everything is magically reduced to Stirling numbers of the second kind with a correction factor. I would rather not compute a deliberately larger number and run into trouble with big floats, and asymptotics do not seem to help here, since we need the exact value of the factorial rather than its approximation. Since we have the nice formula above, simple dynamic programming will do.
End of explanation
"""
purchase_prob = prob(150, 10, 0.33)
count = np.argwhere(purchase_prob >= 0.8).min()
count, purchase_prob[count]
"""
Explanation: To complete the collection with ~80% probability at $p = 0.33$, you need to buy 115 eggs.
End of explanation
"""
def simulation(k, p):
    lion_collection = set()
    toy_type_dist = stats.bernoulli(p)
    lion_dist = stats.randint(0, k)
    purchase_counter = 0
    while len(lion_collection) < k:
        purchase_counter += 1
        if toy_type_dist.rvs() == 1:
            lion_collection.add(lion_dist.rvs())
    return purchase_counter
purchases = np.vectorize(lambda iteration: simulation(10, .33))(np.arange(10000))
plt.plot(sp.diff(prob(250, 10, 0.33)))
sns.distplot(purchases)
"""
Explanation: Numerical experiment
Good, we have the CDF plot. But a distribution is easier to grasp from a density plot, so let's run a numerical experiment to verify the results and draw the PDF along the way.
We desperately keep buying until the whole collection is assembled.
End of explanation
"""
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
import scipy as sp
import scipy.stats as stats
%matplotlib inline
"""
Explanation: Key takeaways
Anyone hit hard by nostalgia can buy the collection on ebay for 500 RUB, for example here.
At 259 RUB per egg at Utkonos, these clearly make mediocre antiques, and there is no money to be made selling completed collections.
Imports
End of explanation
"""
|
mohsinhaider/pythonbootcampacm | Errors and Exceptions/Errors and Exceptions.ipynb | mit | # Producing an Error
print("hey there)
"""
Explanation: Errors and Exceptions
Running into errors and exceptions is inevitable, and debugging is a huge part of modern-day product development. As of now, we've worked with the roots of Python syntax, and have encountered many types of errors that are built-in. Of course, as you proceed, you will pass more complex errors. Let's see how it's done.
Before explaining what an Exception is, let's look at a quick syntax error.
End of explanation
"""
# Catching a ZeroDivisionError
try:
print(8/0)
except ZeroDivisionError:
print("There is no dividing by 0")
"""
Explanation: What if we wanted to prevent this error from stopping our program? For that case, we use what's called try-except statements. Here's the general syntax.
try:
// possible error inducing code
except Error:
// handle the error
except AnotherError:
// you can handle more than 1 error
else:
// if no exception runs
finally:
// run no matter what
This is the full syntax of try-except. We don't even need else or finally statements. However, we will use them after these basic examples.
Example 1 Let's try catching a divide by 0 error.
End of explanation
"""
try:
user_input = float("Is this a float?")
except ValueError:
print("Put a number in there, man!")
"""
Explanation: Example 2 What if we try converting a non-numeric string to a float?
End of explanation
"""
# Mutation of a String?
try:
my_str = "hello"
my_str[1] = "tr"
except TypeError:
print("We've got a TypeError")
else:
print("This shouldn't run!")
finally:
print("Finally, I always print!")
"""
Explanation: Let's quickly go over else and finally. Recall, the else block will run if no exception runs, and the finally will always run.
Example 3 Catching a TypeError (trying to mutate a string type - recall these sequences are immutable)
End of explanation
"""
# Broadening the except statement
try:
str[0] == "h"
except (TypeError, SyntaxError):
print("String mutate")
# Trying to execute code that can raise more than one kind of error
try:
print(4/0)
str[1] = "e"
except (ZeroDivisionError, TypeError):
print("what!")
"""
Explanation: Let's look at some other features.
You can check for more than one exception! Now, it's important to understand that the except block will only handle the first exception raised. Order them in a tuple!
One thing we have not discussed is that as soon as an exception is raised in the try block, execution jumps straight to the matching except block, and any remaining statements in the try block are skipped.
End of explanation
"""
try:
print(4/0)
except:
print("wow")
# Catching the error in its respective statement
try:
print(4/0)
except ZeroDivisionError:
print("ZeroDiv Error")
except:
print("There's an error, I just don't know what!")
else:
print("There wasn't an error!")
# Falling back to the default except when the specific one doesn't match
try:
str[4] = "wow"
except ZeroDivisionError:
print("ZeroDiv Error")
except:
print("There's an error, I just don't know what!")
else:
print("There wasn't an error!")
"""
Explanation: The last except (or the only one, if there's only one) doesn't need an Exception associated with it. It's known as the "default except".
End of explanation
"""
try:
raise TypeError
except TypeError:
print("Error raised")
class MyException(Exception):
def __init__(self, result):
self.result = result
def __str__(self):
return "MyException occured, result that caused it: {}".format(self.result)
"""
Explanation: Raising Errors
Sometimes, you may need to explicitly raise your own errors. This is possible with the raise keyword. It works as expected.
End of explanation
"""
try:
raise MyException("error string")
except MyException as myexcep:
print(myexcep)
"""
Explanation: We catch it and use the as keyword to bind the exception instance to a shorter name, so the handler can print it directly.
End of explanation
"""
|
mattilyra/gensim | docs/notebooks/Poincare Tutorial.ipynb | lgpl-2.1 | % cd ../..
%load_ext autoreload
%autoreload 2
import os
import logging
import numpy as np
from gensim.models.poincare import PoincareModel, PoincareKeyedVectors, PoincareRelations
logging.basicConfig(level=logging.INFO)
poincare_directory = os.path.join(os.getcwd(), 'docs', 'notebooks', 'poincare')
data_directory = os.path.join(poincare_directory, 'data')
wordnet_mammal_file = os.path.join(data_directory, 'wordnet_mammal_hypernyms.tsv')
"""
Explanation: Tutorial on Poincaré Embeddings
This notebook discusses the basic ideas and use-cases for Poincaré embeddings and demonstrates what kind of operations can be done with them. For more comprehensive technical details and results, this blog post may be a more appropriate resource.
1. Introduction
1.1 Concept and use-case
Poincaré embeddings are a method to learn vector representations of nodes in a graph. The input data is of the form of a list of relations (edges) between nodes, and the model tries to learn representations such that the vectors for the nodes accurately represent the distances between them.
The learnt embeddings capture notions of both hierarchy and similarity - similarity by placing connected nodes close to each other and unconnected nodes far from each other; hierarchy by placing nodes lower in the hierarchy farther from the origin, i.e. with higher norms.
The paper uses this model to learn embeddings of nodes in the WordNet noun hierarchy, and evaluates these on 3 tasks - reconstruction, link prediction and lexical entailment, which are described in the section on evaluation. We have compared the results of our Poincaré model implementation on these tasks to other open-source implementations and the results mentioned in the paper.
The paper also describes a variant of the Poincaré model to learn embeddings of nodes in a symmetric graph, unlike the WordNet noun hierarchy, which is directed and asymmetric. The datasets used in the paper for this model are scientific collaboration networks, in which the nodes are researchers and an edge represents that the two researchers have co-authored a paper.
This variant has not been implemented yet, and is therefore not a part of our tutorial and experiments.
1.2 Motivation
The main innovation here is that these embeddings are learnt in hyperbolic space, as opposed to the commonly used Euclidean space. The reason behind this is that hyperbolic space is more suitable for capturing any hierarchical information inherently present in the graph. Embedding nodes into a Euclidean space while preserving the distance between the nodes usually requires a very high number of dimensions. A simple illustration of this can be seen below -
Here, the positions of nodes represent the positions of their vectors in 2-D Euclidean space. Ideally, the distances between the vectors for nodes (A, D) should be the same as that between (D, H) and as that between H and its child nodes. Similarly, all the child nodes of H must be equally far away from node A. It becomes progressively hard to accurately preserve these distances in Euclidean space as the degree and depth of the tree grow larger. Hierarchical structures may also have cross-connections (effectively a directed graph), making this harder.
There is no representation of this simple tree in 2-dimensional Euclidean space which can reflect these distances correctly. This can be solved by adding more dimensions, but this becomes computationally infeasible as the number of required dimensions grows exponentially.
Hyperbolic space is a metric space in which distances aren't straight lines - they are curves, and this allows such tree-like hierarchical structures to have a representation that captures the distances more accurately even in low dimensions.
2. Training the embedding
End of explanation
"""
model = PoincareModel(train_data=[('node.1', 'node.2'), ('node.2', 'node.3')])
"""
Explanation: The model can be initialized using an iterable of relations, where a relation is simply a pair of nodes -
End of explanation
"""
relations = PoincareRelations(file_path=wordnet_mammal_file, delimiter='\t')
model = PoincareModel(train_data=relations)
"""
Explanation: The model can also be initialized from a csv-like file containing one relation per line. The module provides a convenience class PoincareRelations to do so.
End of explanation
"""
model = PoincareModel(train_data=relations, size=2, burn_in=0)
model.train(epochs=1, print_every=500)
"""
Explanation: Note that the above only initializes the model and does not begin training. To train the model -
End of explanation
"""
model.train(epochs=1, print_every=500)
"""
Explanation: The same model can be trained further on more epochs in case the user decides that the model hasn't converged yet.
End of explanation
"""
# Saves the entire PoincareModel instance, the loaded model can be trained further
model.save('/tmp/test_model')
PoincareModel.load('/tmp/test_model')
# Saves only the vectors from the PoincareModel instance, in the commonly used word2vec format
model.kv.save_word2vec_format('/tmp/test_vectors')
PoincareKeyedVectors.load_word2vec_format('/tmp/test_vectors')
"""
Explanation: The model can be saved and loaded using two different methods -
End of explanation
"""
# Load an example model
models_directory = os.path.join(poincare_directory, 'models')
test_model_path = os.path.join(models_directory, 'gensim_model_batch_size_10_burn_in_0_epochs_50_neg_20_dim_50')
model = PoincareModel.load(test_model_path)
"""
Explanation: 3. What the embedding can be used for
End of explanation
"""
# Distance between any two nodes
model.kv.distance('plant.n.02', 'tree.n.01')
model.kv.distance('plant.n.02', 'animal.n.01')
# Nodes most similar to a given input node
model.kv.most_similar('electricity.n.01')
model.kv.most_similar('man.n.01')
# Nodes closer to node 1 than node 2 is from node 1
model.kv.nodes_closer_than('dog.n.01', 'carnivore.n.01')
# Rank of distance of node 2 from node 1 in relation to distances of all nodes from node 1
model.kv.rank('dog.n.01', 'carnivore.n.01')
# Finding Poincare distance between input vectors
vector_1 = np.random.uniform(size=(100,))
vector_2 = np.random.uniform(size=(100,))
vectors_multiple = np.random.uniform(size=(5, 100))
# Distance between vector_1 and vector_2
print(PoincareKeyedVectors.vector_distance(vector_1, vector_2))
# Distance between vector_1 and each vector in vectors_multiple
print(PoincareKeyedVectors.vector_distance_batch(vector_1, vectors_multiple))
"""
Explanation: The learnt representations can be used to perform various kinds of useful operations. This section is split into two - some simple operations that are directly mentioned in the paper, as well as some experimental operations that are hinted at, and might require more work to refine.
The models that are used in this section have been trained on the transitive closure of the WordNet hypernym graph. The transitive closure is the list of all the direct and indirect hypernyms in the WordNet graph. An example of a direct hypernym is (seat.n.03, furniture.n.01) while an example of an indirect hypernym is (seat.n.03, physical_entity.n.01).
3.1 Simple operations
All the following operations are based simply on the notion of distance between two nodes in hyperbolic space.
End of explanation
"""
# Closest child node
model.kv.closest_child('person.n.01')
# Closest parent node
model.kv.closest_parent('person.n.01')
# Position in hierarchy - lower values represent that the node is higher in the hierarchy
print(model.kv.norm('person.n.01'))
print(model.kv.norm('teacher.n.01'))
# Difference in hierarchy between the first node and the second node
# Positive values indicate the first node is higher in the hierarchy
print(model.kv.difference_in_hierarchy('person.n.01', 'teacher.n.01'))
# One possible descendant chain
model.kv.descendants('mammal.n.01')
# One possible ancestor chain
model.kv.ancestors('dog.n.01')
"""
Explanation: 3.2 Experimental operations
These operations are based on the notion that the norm of a vector represents its hierarchical position. Leaf nodes typically tend to have the highest norms, and as we move up the hierarchy, the norm decreases, with the root node being close to the center (or origin).
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/mohc/cmip6/models/sandbox-2/atmos.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-2', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: MOHC
Source ID: SANDBOX-2
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Whether the boundary layer turbulence scheme uses a counter-gradient term
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are different cloud schemes used for the different types of clouds (convective, stratiform and boundary layer)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
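The `Is Required` / `Cardinality` annotations repeated throughout this section follow a simple `min.max` notation, where `N` means unbounded. A sketch (an assumption, not part of the notebook tooling) of parsing it:

```python
def parse_cardinality(card):
    """Parse a cardinality string such as '1.1', '0.1' or '1.N'.

    Returns (min_count, max_count), with None meaning unbounded (sketch).
    """
    lo, hi = card.split(".")
    return int(lo), (None if hi == "N" else int(hi))
```

For example, this property's `1.N` parses to a minimum of one entry with no upper bound, so at least one process must be listed.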
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
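The frequency is requested in Hz, while cloud radar frequencies are usually quoted in GHz, so a conversion like the following may help (the 94 GHz figure is illustrative only, not a recommended value):

```python
def ghz_to_hz(frequency_ghz):
    """Convert a radar frequency quoted in GHz to the Hz value requested here."""
    return frequency_ghz * 1.0e9

# e.g. a W-band cloud radar quoted at 94 GHz (illustrative)
frequency_hz = ghz_to_hz(94.0)
```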
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
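For orientation only, a commonly quoted modern value of the solar constant is about 1361 W m-2; the number to enter here is whatever value the documented model actually uses, so treat the figure below as an illustrative placeholder:

```python
# Illustrative only -- a commonly quoted modern solar constant value.
# Enter the value actually used by the model being documented.
solar_constant_w_m2 = 1361.0
```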
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
physion/ovation-python | examples/file-upload.ipynb | gpl-3.0 | import ovation.core as core
from ovation.session import connect
from ovation.upload import upload_revision, upload_file, upload_folder
from ovation.download import download_revision
from pprint import pprint
from getpass import getpass
from tqdm import tqdm_notebook as tqdm
"""
Explanation: File (Revision) upload example
To run this example, you'll need the Ovation Python API. Install with pip:
pip install ovation
End of explanation
"""
session = connect(input('Ovation email: '), org=input('Organization (enter for default): ') or 0)
"""
Explanation: Connection
You use a connection.Session to interact with the Ovation REST API. Use the connect method to create an authenticated Session.
End of explanation
"""
project_id = input('Project UUID: ')
# Get a project by ID
proj = session.get(session.path('project', project_id))
"""
Explanation: Upload a file (revision)
The Python API wraps the Ovation REST API, using the awesome requests library. The Session provides some convenient additions to make working with Ovation's API a little easier. For example, it automatically sets the content type to JSON and handles URL creation from path and host.
The example below shows retrieving a project by ID, adding a new File and uploading a new Revision (a version) of that file using the ovation.upload.upload_revision convenience method.
End of explanation
"""
folder = upload_folder(session, proj, '/path/to/project_fastq_folder')
"""
Explanation: You can upload an entire folder or individual files to the Project. First, let's upload a folder:
End of explanation
"""
import glob

folder = core.create_folder(session, proj, 'FASTQ')
for f in glob.glob('/path/to/project_fastq_folder/*.fastq'):
    # upload_file was imported above; uploads each FASTQ file into the folder
    upload_file(session, folder, f)
"""
Explanation: Alternatively, we can create a folder and upload individual files:
End of explanation
"""
# Create a new File
r = session.post(session.path('project', project_id),
data={'entities': [{'type': 'File',
'attributes': {'name': 'example.vcf'}}]})
file = r[0]
pprint(file)
# Create a new Revision (version) of the new File by uploading a local file
revision = upload_revision(session, file, '/Users/barry/Desktop/example.vcf')
pprint(revision)
"""
Explanation: For advanced users, we can create a File and then upload a Revision.
End of explanation
"""
file_path = download_revision(session, revision._id)
"""
Explanation: upload_revision is also how you can upload a new Revision to an existing file.
Download a revision
The Ovation API generates a temporary authenticated URL for downloading a Revision. This example uses the ovation.download.download_revision function to get this authenticated URL and then to download it to the local file system, returning the downloaded file's path:
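After the download returns a local path, a quick sanity check can confirm the file actually landed on disk. This helper is a sketch, not part of the Ovation API:

```python
import os

def verify_download(path):
    """Sanity-check a downloaded file: it exists and is non-empty (sketch)."""
    return os.path.isfile(path) and os.path.getsize(path) > 0
```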
End of explanation
"""
|
GoogleCloudPlatform/bigquery-notebooks | notebooks/official/template_notebooks/visualizing_bigquery_public_data.ipynb | apache-2.0 | %%bigquery
SELECT
source_year AS year,
COUNT(is_male) AS birth_count
FROM `bigquery-public-data.samples.natality`
GROUP BY year
ORDER BY year DESC
LIMIT 15
"""
Explanation: Visualizing BigQuery data in a Jupyter notebook
BigQuery is a petabyte-scale analytics data warehouse that you can use to run SQL queries over vast amounts of data in near realtime.
Data visualization tools can help you make sense of your BigQuery data and help you analyze the data interactively. You can use visualization tools to help you identify trends, respond to them, and make predictions using your data. In this tutorial, you use the BigQuery Python client library and pandas in a Jupyter notebook to visualize data in the BigQuery natality sample table.
Using Jupyter magics to query BigQuery data
The BigQuery Python client library provides a magic command that allows you to run queries with minimal code.
The BigQuery client library provides a cell magic, %%bigquery. The %%bigquery magic runs a SQL query and returns the results as a pandas DataFrame. The following cell executes a query of the BigQuery natality public dataset and returns the total births by year.
End of explanation
"""
%%bigquery total_births
SELECT
source_year AS year,
COUNT(is_male) AS birth_count
FROM `bigquery-public-data.samples.natality`
GROUP BY year
ORDER BY year DESC
LIMIT 15
"""
Explanation: The following command runs the same query, but this time the results are saved to a variable. The variable name, total_births, is given as an argument to the %%bigquery magic. The results can then be used for further analysis and visualization.
End of explanation
"""
total_births.plot(kind="bar", x="year", y="birth_count");
"""
Explanation: The next cell uses the pandas DataFrame.plot method to visualize the query results as a bar chart. See the pandas documentation to learn more about data visualization with pandas.
End of explanation
"""
%%bigquery births_by_weekday
SELECT
wday,
SUM(CASE WHEN is_male THEN 1 ELSE 0 END) AS male_births,
SUM(CASE WHEN is_male THEN 0 ELSE 1 END) AS female_births
FROM `bigquery-public-data.samples.natality`
WHERE wday IS NOT NULL
GROUP BY wday
ORDER BY wday ASC
"""
Explanation: Run the following query to retrieve the number of births by weekday. Because the wday (weekday) field allows null values, the query excludes records where wday is null.
End of explanation
"""
births_by_weekday.plot(x="wday");
"""
Explanation: Visualize the query results using a line chart.
End of explanation
"""
from google.cloud import bigquery
client = bigquery.Client()
"""
Explanation: Using Python to query BigQuery data
Magic commands allow you to use minimal syntax to interact with BigQuery. Behind the scenes, %%bigquery uses the BigQuery Python client library to run the given query, convert the results to a pandas Dataframe, optionally save the results to a variable, and finally display the results. Using the BigQuery Python client library directly instead of through magic commands gives you more control over your queries and allows for more complex configurations. The library's integrations with pandas enable you to combine the power of declarative SQL with imperative code (Python) to perform interesting data analysis, visualization, and transformation tasks.
To use the BigQuery Python client library, start by importing the library and initializing a client. The BigQuery client is used to send and receive messages from the BigQuery API.
End of explanation
"""
sql = """
SELECT
plurality,
COUNT(1) AS count,
year
FROM
`bigquery-public-data.samples.natality`
WHERE
NOT IS_NAN(plurality) AND plurality > 1
GROUP BY
plurality, year
ORDER BY
count DESC
"""
df = client.query(sql).to_dataframe()
df.head()
"""
Explanation: Use the Client.query method to run a query. Execute the following cell to run a query to retrieve the annual count of plural births by plurality (2 for twins, 3 for triplets, etc.).
End of explanation
"""
pivot_table = df.pivot(index="year", columns="plurality", values="count")
pivot_table.plot(kind="bar", stacked=True, figsize=(15, 7));
"""
Explanation: To chart the query results in your DataFrame, run the following cell to pivot the data and create a stacked bar chart of the count of plural births over time.
End of explanation
"""
sql = """
SELECT
gestation_weeks,
COUNT(1) AS count
FROM
`bigquery-public-data.samples.natality`
WHERE
NOT IS_NAN(gestation_weeks) AND gestation_weeks <> 99
GROUP BY
gestation_weeks
ORDER BY
gestation_weeks
"""
df = client.query(sql).to_dataframe()
"""
Explanation: Run the following query to retrieve the count of births by the number of gestation weeks.
End of explanation
"""
ax = df.plot(kind="bar", x="gestation_weeks", y="count", figsize=(15, 7))
ax.set_title("Count of Births by Gestation Weeks")
ax.set_xlabel("Gestation Weeks")
ax.set_ylabel("Count");
"""
Explanation: Finally, chart the query results in your DataFrame.
End of explanation
"""
|
jamesjia94/BIDMach | tutorials/NVIDIA/BIDMat_Scala_Features.ipynb | bsd-3-clause | import BIDMat.{CMat,CSMat,DMat,Dict,IDict,FMat,FND,GMat,GDMat,GIMat,GLMat,GSMat,GSDMat,
HMat,IMat,Image,LMat,Mat,ND,SMat,SBMat,SDMat}
import BIDMat.MatFunctions._
import BIDMat.SciFunctions._
import BIDMat.Solvers._
import BIDMat.JPlotting._
Mat.checkMKL
Mat.checkCUDA
Mat.setInline
if (Mat.hasCUDA > 0) GPUmem
"""
Explanation: Features of BIDMat and Scala
BIDMat is a multi-platform matrix library similar to R, Matlab, Julia or Numpy/Scipy. It takes full advantage of the very powerful Scala language. It's intended primarily for machine learning, but it has a broad set of operations and datatypes and should be suitable for many other applications. BIDMat has several unique features:
Built from the ground up with GPU + CPU backends. BIDMat code is implementation independent.
GPU memory management uses caching, designed to support iterative algorithms.
Natural and extensible syntax (thanks to scala). Math operators include +,-,*,/,⊗,∙,∘
Probably the most complete support for matrix types: dense matrices of float32, double, int and long. Sparse matrices with single or double elements. All are available on CPU or GPU.
Highest performance sparse matrix operations on power-law data.
BIDMat has several other state-of-the-art features:
* Interactivity. Thanks to the Scala language, BIDMat is interactive and scriptable.
* Massive code base thanks to Java.
* Easy-to-use Parallelism, thanks to Scala's actor framework and parallel collection classes.
* Runs on JVM, extremely portable. Runs on Mac, Linux, Windows, Android.
* Cluster-ready, leverages Hadoop, Yarn, Spark etc.
BIDMat is a library that is loaded by a startup script, and a set of imports that include the default classes and functions. We include them explicitly in this notebook.
End of explanation
"""
val n = 4096 // "val" designates a constant. n is statically typed (as in Int here), but its type is inferred.
val a = rand(n,n) // Create an nxn matrix (on the CPU)
%type a // Most scientific functions in BIDMat return single-precision results by default.
"""
Explanation: These calls check that CPU and GPU native libs loaded correctly, and what GPUs are accessible.
If you have a GPU and CUDA installed, GPUmem will print out the fraction of free memory, the absolute free memory, and the total memory for the default GPU.
CPU and GPU matrices
BIDMat's matrix types are given in the table below. All are children of the "Mat" parent class, which allows code to be written generically. Many of BIDMach's learning algorithms will run with either single or double precision, dense or sparse input data.
<table style="width:4in" align="left">
<tr><td/><td colspan="2"><b>CPU Matrices</b></td><td colspan="2"><b>GPU Matrices</b></td></tr>
<tr><td></td><td><b>Dense</b></td><td><b>Sparse</b></td><td><b>Dense</b></td><td><b>Sparse</b></td></tr>
<tr><td><b>Float32</b></td><td>FMat</td><td>SMat</td><td>GMat</td><td>GSMat</td></tr>
<tr><td><b>Float64</b></td><td>DMat</td><td>SDMat</td><td>GDMat</td><td>GSDMat</td></tr>
<tr><td><b>Int32</b></td><td>IMat</td><td></td><td>GIMat</td><td></td></tr>
<tr><td><b>Int64</b></td><td>LMat</td><td></td><td>GLMat</td><td></td></tr>
</table>
End of explanation
"""
flip; val b = a * a; val gf=gflop
print("The product took %4.2f seconds at %3.0f gflops" format (gf._2, gf._1))
gf
"""
Explanation: CPU matrix operations use Intel MKL acceleration for linear algebra, scientific and statistical functions. BIDMat includes "tic" and "toc" for timing, and "flip" and "gflop" for measuring floating-point performance.
End of explanation
"""
val ga = grand(n,n) // Another nxn random matrix
flip; val gb = ga * ga; val gf=gflop
print("The product took %4.2f seconds at %3.0f gflops" format (gf._2, gf._1))
gf
%type ga
"""
Explanation: GPU matrices behave very similarly.
End of explanation
"""
def SVD(M:Mat, ndims:Int, niter:Int) = {
var Q = M.zeros(M.nrows, ndims) // A block of ndims column vectors
normrnd(0, 1, Q) // randomly initialize the vectors
Mat.useCache = true // Turn matrix caching on
for (i <- 0 until niter) { // Perform subspace iteration
val P = (Q.t * M *^ M).t // Compute P = M * M^t * Q efficiently
QRdecompt(P, Q, null) // QR-decomposition of P, saving Q
}
Mat.useCache = false // Turn caching off after the iteration
val P = (Q.t * M *^ M).t // Compute P again.
(Q, P ∙ Q) // Return Left singular vectors and singular values
}
"""
Explanation: But much of the power of BIDMat is that we don't have to worry about matrix types. Let's explore that with an example.
SVD (Singular Value Decomposition) on a Budget
Now let's try solving a real problem with this infrastructure: an approximate Singular Value Decomposition (SVD), or PCA, of a matrix $M$. We'll do this by computing the leading eigenvalues and eigenvectors of $MM^T$. The method we use is subspace iteration, which generalizes the power method for computing the largest-magnitude eigenvalue. An eigenvector is a vector $v$ such that
$$Mv =\lambda v$$
where $\lambda$ is a scalar called the eigenvalue.
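For readers without a Scala environment, the same subspace iteration can be sketched in a few lines of NumPy. This is an illustration of the algorithm, not BIDMat's implementation; `svd_subspace` and its test matrix are made up here:

```python
import numpy as np

def svd_subspace(M, ndims, niter, seed=0):
    """Approximate the top-ndims singular pairs of M by subspace iteration."""
    rng = np.random.default_rng(seed)
    Q = rng.standard_normal((M.shape[0], ndims))  # random starting subspace
    for _ in range(niter):
        P = M @ (M.T @ Q)           # P = M M^T Q
        Q, _ = np.linalg.qr(P)      # re-orthonormalize the block
    P = M @ (M.T @ Q)
    sigma2 = np.sum(P * Q, axis=0)  # Rayleigh quotients: eigenvalues of M M^T
    return Q, np.sqrt(sigma2)       # singular values are their square roots

M = np.random.default_rng(1).standard_normal((50, 30))
Q, svals = svd_subspace(M, ndims=5, niter=200)
```

Each iteration multiplies the current block by $MM^T$ and re-orthonormalizes it with a QR decomposition, just as the BIDMat code below does with `QRdecompt`.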
End of explanation
"""
val ndims = 32 // Number of PCA dimensions
val niter = 128 // Number of iterations to do
val S = loadSMat("../data/movielens/train.smat.lz4")(0->10000,0->4000)
val M = full(S) // Put in a dense matrix
flip;
val (svecs, svals) = SVD(M, ndims, niter); // Compute the singular vectors and values
val gf=gflop
print("The calculation took %4.2f seconds at %2.1f gflops" format (gf._2, gf._1))
svals.t
"""
Explanation: Notice that the code above used only the "Mat" matrix type. If you examine the variables Q and P in a Scala IDE (Eclipse has one) you will find that they both also have type "Mat". Let's try it with an FMat (CPU single precision, dense matrix).
Movie Data Example
We load some data from the MovieLens project.
End of explanation
"""
S.nnz
plot(svals)
"""
Explanation: Let's take a peek at the singular values on a plot
End of explanation
"""
loglog(row(1 to svals.length), svals)
"""
Explanation: Which shrinks a little too fast. Let's look at it on a log-log plot instead:
End of explanation
"""
val G = GMat(M) // Try a dense GPU matrix
flip;
val (svecs, svals) = SVD(G, ndims, niter); // Compute the singular vectors and values
val gf=gflop
print("The calculation took %4.2f seconds at %2.1f gflops" format (gf._2, gf._1))
svals.t
"""
Explanation: Now let's try it with a GPU, single-precision, dense matrix.
End of explanation
"""
flip; // Try a sparse CPU matrix
val (svecs, svals) = SVD(S, ndims, niter); // Compute the singular vectors and values
val gf=gflop
print("The calculation took %4.2f seconds at %2.1f gflops" format (gf._2, gf._1))
svals.t
"""
Explanation: That's not bad: the GPU version was nearly 4x faster. Now let's try a sparse, single-precision CPU matrix. Note that by construction our matrix was only 10% dense anyway.
Sparse SVD
End of explanation
"""
val GS = GSMat(S) // Try a sparse GPU matrix
flip;
val (svecs, svals) = SVD(GS, ndims, niter); // Compute the singular vectors and values
val gf=gflop
print("The calculation took %4.2f seconds at %2.1f gflops" format (gf._2, gf._1))
svals.t
"""
Explanation: This next one is important. Dense matrix operations are the bread-and-butter of scientific computing, and now of most deep learning. But other machine learning tasks (logistic regression, SVMs, k-means, topic models, etc.) most commonly take sparse input data such as text, URLs, and cookies, so performance on sparse matrix operations is critical.
GPU performance on sparse data, especially power-law data - which covers most of the commercially important cases - has historically been poor. But in fact GPU hardware supports extremely fast sparse operations when the kernels are carefully designed. Such kernels are only available in BIDMat right now. NVIDIA's sparse matrix kernels, which have been tuned for sparse scientific data, do not work well on power-law data.
In any case, let's try BIDMat's GPU sparse matrix type:
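To make the point concrete, this is the work a sparse kernel actually does: a minimal CSR (compressed sparse row) matrix-vector product in plain NumPy. On power-law data the row lengths vary wildly, which is exactly what makes load-balancing such kernels on a GPU hard. This is a sketch for illustration, not BIDMat's kernel:

```python
import numpy as np

def csr_matvec(indptr, indices, data, x):
    """y = A @ x for A in CSR form: only the stored nonzeros are touched."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        lo, hi = indptr[row], indptr[row + 1]            # this row's nonzeros
        y[row] = np.dot(data[lo:hi], x[indices[lo:hi]])
    return y

# The matrix [[1,0,2],[0,0,3],[4,5,0]] in CSR form
indptr  = np.array([0, 2, 3, 5])
indices = np.array([0, 2, 2, 0, 1])
data    = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = csr_matvec(indptr, indices, data, np.array([1.0, 1.0, 1.0]))
```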
End of explanation
"""
val GSD = GSDMat(GS) // Try a sparse, double GPU matrix
flip;
val (svecs, svals) = SVD(GSD, ndims, niter); // Compute the singular vectors and values
val gf=gflop
print("The calculation took %4.2f seconds at %2.1f gflops" format (gf._2, gf._1))
svals.t
"""
Explanation: That's a 10x improvement end-to-end, which is similar to the GPU's advantage on dense matrices. This result is certainly not specific to SVD, and is reproduced in most ML algorithms. So GPUs have a key role to play in general machine learning, and it's likely that at some point they will assume the same central role they currently enjoy in scientific computing and deep learning.
GPU Double Precision
One last performance issue: GPU hardware normally prioritizes single-precision floating point over double-precision, and there is a big gap on dense matrix operations. But calculations on sparse data are memory-limited and this largely masks the difference in arithmetic. Let's try a sparse, double-precision matrix, which will force all the calculations to double precision.
End of explanation
"""
def SVD(M:Mat, ndims:Int, niter:Int) = {
var Q = M.zeros(M.nrows, ndims)
normrnd(0, 1, Q)
Mat.useCache = true
for (i <- 0 until niter) { // Perform subspace iteration
val P = M * (M ^* Q) // Compute P = M * M^t * Q with cusparse
QRdecompt(P, Q, null)
}
Mat.useCache = false
val P = M * (M ^* Q) // Compute P again.
(Q, getdiag(P ^* Q)) // Left singular vectors and singular values
}
// Try sparse GPU matrix
flip;
val (svecs, svals) = SVD(GS, ndims, niter);
val gf=gflop
print("The calculation took %4.2f seconds at %2.1f gflops" format (gf._2, gf._1))
svals.t
"""
Explanation: Which is noticeably slower, but still 3x faster than the CPU version running in single precision.
Using Cusparse
NVIDIA's cusparse library, which is optimized for scientific data, doesn't perform as well on power-law data.
End of explanation
"""
val a = ones(4,1) * row(1->5)
val b = col(1->5) * ones(1,4)
"""
Explanation: Unicode Math Operators, Functions and Variables
As well as the standard operators +,-,*,/, BIDMat includes several other important operators with their standard unicode representation. They have an ASCII alias in case unicode input is difficult. Here they are:
<pre>
Unicode operator ASCII alias Operation
================ =========== =========
∘ *@ Element-wise (Hadamard) product
∙ dot Column-wise dot product
∙→ dotr Row-wise dot product
⊗ kron Kronecker (Cartesian) product
</pre>
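For NumPy users, the four operators map roughly onto the following. This is a sketch of the correspondences, not BIDMat code, and `a`/`b` mirror the matrices built in the cell below:

```python
import numpy as np

a = np.ones((4, 1)) * np.arange(1, 5)                # like ones(4,1) * row(1->5)
b = np.arange(1, 5).reshape(4, 1) * np.ones((1, 4))  # like col(1->5) * ones(1,4)

hadamard = b * a                   # ∘  : element-wise product
col_dot  = np.sum(b * a, axis=0)   # ∙  : column-wise dot product
row_dot  = np.sum(b * a, axis=1)   # ∙→ : row-wise dot product
kron     = np.kron(b, a)           # ⊗  : Kronecker product
```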
End of explanation
"""
b ∘ a
"""
Explanation: Hadamard (element-wise) multiply
End of explanation
"""
b ∙ a
"""
Explanation: Dot product, by default along columns
End of explanation
"""
b ∙→ a
"""
Explanation: Dot product along rows
End of explanation
"""
b ⊗ a
"""
Explanation: Kronecker product
End of explanation
"""
val ii = row(1->10)
ii on Γ(ii) // Stack this row on the results of a Gamma function applied to it
"""
Explanation: As well as operators, functions in BIDMat can use unicode characters, e.g.
End of explanation
"""
def √(x:Mat) = sqrt(x)
def √(x:Double) = math.sqrt(x)
√(ii)
"""
Explanation: You can certainly define new unicode operators:
End of explanation
"""
val α = row(1->10)
val β = α + 2
val γ = β on Γ(β)
"""
Explanation: and use as much Greek as you want:
End of explanation
"""
class NewMat(nr:Int, nc:Int, data0:Array[Float]) extends FMat(nr,nc,data0) {
def quick(a:FMat) = this * a;
def fox(a:FMat) = this + a;
def over(a:FMat) = this - a;
def lazzy(a:FMat) = this / a ;
}
implicit def convNew(a:FMat):NewMat = new NewMat(a.nrows, a.ncols, a.data)
val n = 2;
val the = rand(n,n);
val brown = rand(n,n);
val jumps = rand(n,n);
val dog = rand(n,n);
the quick brown fox jumps over the lazzy dog
"""
Explanation: or English:
End of explanation
"""
a ^* b
a.t * b
a *^ b
a * b.t
"""
Explanation: Transposed Multiplies
Matrix multiply is the most expensive step in many calculations, and often involves transposed matrices. To speed up those calculations, we expose two operators that combine the transpose and multiply operations:
<pre>
^* - transpose the first argument, so a ^* b is equivalent to a.t * b
*^ - transpose the second argument, so a *^ b is equivalent to a * b.t
</pre>
These operators are implemented natively, i.e. they do not actually perform transposes, but implement the effective calculation. This is particularly important for sparse matrices, since a transpose would involve an index sort.
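The same identities can be checked in NumPy (where `.T` is only a view, so the dense transpose is also free there); this just pins down what `^*` and `*^` compute:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 3))
b = rng.standard_normal((4, 3))

tn = a.T @ b   # like a ^* b, shape (3, 3)
nt = a @ b.T   # like a *^ b, shape (4, 4)

# Both can be written with no transpose at all, only re-indexed sums
tn2 = np.einsum('ki,kj->ij', a, b)
nt2 = np.einsum('ik,jk->ij', a, b)
```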
End of explanation
"""
import java.util.Random
val random = new Random()
def rwalk(m:FMat) = {
val n = m.length
m(0) = random.nextFloat
var i = 1
while (i < n) {
m(i) = m(i-1) + random.nextFloat - 0.5f
i += 1
}
}
val n = 100000000
val a = zeros(n, 1)
tic; val x = rwalk(a); val t=toc
print("computed %2.1f million steps per second in %2.1f seconds" format (n/t/1e6f, t))
"""
Explanation: Highlights of the Scala Language
Scala is a remarkable language. It is an object-oriented language with similar semantics to Java, which it effectively extends. But it also has a particularly clean functional syntax for anonymous functions and closures.
It has a REPL (Read-Eval-Print-Loop) like Python, and can be used interactively or it can run scripts in or outside an interactive session.
Like Python, types are determined by assignments, but they are static rather than dynamic. So the language has the economy of Python, but the type-safety of a static language.
Scala includes a tuple type for multiple-value returns, and on-the-fly data structuring.
Finally it has outstanding support for concurrency with parallel classes and an actor system called Akka.
Performance
First we examine the performance of Scala as a scientific language. Let's implement an example that has been widely used to illustrate the performance of the Julia language. It's a random walk, i.e. a 1D array with random steps from one element to the next.
End of explanation
"""
tic; rand(a); val b=cumsum(a-0.5f); val t=toc
print("computed %2.1f million steps per second in %2.1f seconds" format (n/t/1e6f, t))
"""
Explanation: If we try the same calculation in the Julia language (a new language designed for scientific computing) and in Python we find that:
<table style="width:4in" align="left">
<tr><td></td><td><b>Scala</b></td><td><b>Julia</b></td><td><b>Python</b></td></tr>
<tr><td><b>with rand</b></td><td>1.0s</td><td>0.43s</td><td>147s</td></tr>
<tr><td><b>without rand</b></td><td>0.1s</td><td>0.26s</td><td>100s</td></tr>
</table>
Vectorized Operations
But does this matter? A random walk can be computed efficiently with vector operations: vector random numbers and a cumulative sum. And in general, most ML algorithms can be implemented efficiently with vector and matrix operations. Let's try in BIDMat:
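The equivalent vectorized one-liner in NumPy, for comparison (a sketch of the idea; `rwalk_vectorized` is a name made up here):

```python
import numpy as np

def rwalk_vectorized(n, seed=0):
    """Random walk as two vector ops: uniform steps in [-0.5, 0.5), then a cumulative sum."""
    rng = np.random.default_rng(seed)
    return np.cumsum(rng.random(n) - 0.5)

walk = rwalk_vectorized(1_000_000)
```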
End of explanation
"""
val ga = GMat(a)
tic; rand(ga); val gb=cumsum(ga-0.5f); val t=toc
print("computed %2.1f million steps per second in %2.1f seconds" format (n/t/1e6f, t))
"""
Explanation: Which is better, due to the faster random number generation in the vectorized rand function. But More interesting is the GPU running time:
End of explanation
"""
<img style="width:4in" alt="NGC 4414 (NASA-med).jpg" src="https://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/NGC_4414_%28NASA-med%29.jpg/1200px-NGC_4414_%28NASA-med%29.jpg"/>
"""
Explanation: If we run similar operators in Julia and Python we find:
<table style="width:5in" align="left">
<tr><td></td><td><b>BIDMach(CPU)</b></td><td><b>BIDMach(GPU)</b></td><td><b>Julia</b></td><td><b>Python</b></td></tr>
<tr><td><b>with rand</b></td><td>0.6s</td><td>0.1s</td><td>0.44s</td><td>1.4s</td></tr>
<tr><td><b>without rand</b></td><td>0.3s</td><td>0.05s</td><td>0.26s</td><td>0.5s</td></tr>
</table>
Vectorized operations level the playing field and bring Python up to speed with the other systems. On the other hand, GPU hardware maintains a near-order-of-magnitude advantage for vector operations.
GPU Performance Summary
GPU-acceleration gives an order-of-magnitude speedup (or more) for the following operations:
* Dense matrix multiply
* Sparse matrix multiply
* Vector operations and reductions
* Random numbers and transcendental function evaluation
* Sorting
So it's not just for scientific computing or deep learning, but for a much wider gamut of data processing and ML.
Tapping the Java Universe
End of explanation
"""
import org.apache.commons.math3.stat.inference.TestUtils._
"""
Explanation: Almost every piece of Java code can be used in Scala. And therefore any piece of Java code can be used interactively.
There's very little work to do. You find a package and add it to your dependencies and then import as you would in Java.
End of explanation
"""
val x = normrnd(0,1,1,40)
val y = normrnd(0,1,1,40) + 0.5
"""
Explanation: Apache Commons Math includes a Statistics package with many useful functions and tests. Let's create two arrays of random data and compare them.
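For intuition about what a two-sample t-test returns, here is Welch's t statistic written out in NumPy. This illustrates the statistic itself, not Apache Commons Math's code, and the sample sizes are made up:

```python
import numpy as np

def welch_t(x, y):
    """Welch's t statistic for two independent samples with unequal variances."""
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    return (x.mean() - y.mean()) / np.sqrt(vx / len(x) + vy / len(y))

rng = np.random.default_rng(0)
x = rng.standard_normal(400)
y = rng.standard_normal(400) + 0.5   # shifted mean, as in the cells above
t = welch_t(x, y)                    # strongly negative: x's mean is below y's
```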
End of explanation
"""
val dx = DMat(x)
val dy = DMat(y)
tTest(dx.data, dy.data)
"""
Explanation: BIDMat has enriched matrix types like FMat, SMat etc., while Apache Commons Math expects Java arrays of double-precision floats. To get these, we can convert FMat to DMat (double) and extract the data field, which contains the matrix's data.
End of explanation
"""
implicit def fMatToDarray(a:FMat):Array[Double] = DMat(a).data
"""
Explanation: But rather than doing this conversion every time we want to use some BIDMat matrices, we can instruct Scala to do the work for us. We do this with an implicit conversion from FMat to Array[Double]. Simply defining this function will cause a coercion whenever we supply an FMat argument to a function that expects Array[Double].
End of explanation
"""
tTest(x, y)
"""
Explanation: And magically we can perform t-Tests on BIDMat matrices as though they had known each other all along.
End of explanation
"""
import org.apache.commons.math3.distribution._
val betadist = new BetaDistribution(2,5)
val n = 100000
val x = new DMat(1, n, (0 until n).map(x => betadist.sample).toArray); null
hist(x, 100)
"""
Explanation: And it's important to get your daily dose of beta:
End of explanation
"""
<image src="https://sketchesfromthealbum.files.wordpress.com/2015/01/jacquesderrida.jpg" style="width:4in"/>
"""
Explanation: Deconstruction
End of explanation
"""
val i = row(0->10).data
"""
Explanation: Let's make a raw Java array of floats.
End of explanation
"""
val j = i.map(x => (x, x*x))
"""
Explanation: First of all, Scala supports Tuple types for ad-hoc data structuring.
End of explanation
"""
j.map{case (x,y) => (y,x)}
"""
Explanation: We can also deconstruct tuples using Scala Pattern matching:
End of explanation
"""
val k = j.reduce((ab,cd) => {val (a,b) = ab; val (c,d) = cd; (a+c, b+d)})
"""
Explanation: And reduce operations can use deconstruction as well:
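Python supports the same pattern with tuples and functools.reduce; here is a rough analogue of the Scala above, not a translation of BIDMat code:

```python
from functools import reduce

i = list(range(10))
j = [(x, x * x) for x in i]                  # pair each value with its square
swapped = [(y, x) for (x, y) in j]           # deconstruct each tuple and swap it
k = reduce(lambda ab, cd: (ab[0] + cd[0], ab[1] + cd[1]), j)  # element-wise pair sum
```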
End of explanation
"""
bonadio/bike-share-rnn | .ipynb_checkpoints/original-nn-checkpoint.ipynb | mit
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
"""
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
"""
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
"""
rides[:24*10].plot(x='dteday', y='cnt')
"""
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
"""
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
"""
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
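As a tiny illustration of what the loop above produces, on a made-up column rather than the real dataset:

```python
import pandas as pd

# A toy categorical column; season 3 happens not to occur in this sample
df = pd.DataFrame({'season': [1, 2, 2, 4]})
dummies = pd.get_dummies(df['season'], prefix='season')
df = pd.concat([df, dummies], axis=1).drop('season', axis=1)
# df now has one binary column per observed category
```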
End of explanation
"""
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
"""
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
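The same shift-and-scale round trip on a toy array (a sketch of the idea, with made-up values):

```python
import numpy as np

values = np.array([10.0, 20.0, 30.0, 40.0])
mean, std = values.mean(), values.std()   # save the scaling factors
scaled = (values - mean) / std            # standardized: zero mean, unit std
restored = scaled * std + mean            # invert later, e.g. for predictions
```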
End of explanation
"""
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
"""
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
"""
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
"""
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
"""
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
# self.activation_function = lambda x : 0 # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
def sigmoid(x):
return 1 / (1 + np.exp(-x))
self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot( X, self.weights_input_to_hidden )
#(2,)
hidden_outputs = self.activation_function( hidden_inputs ) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot( hidden_outputs , self.weights_hidden_to_output ) # signals into final output layer
# derivative from f(x)=x is 1
#(1,)
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs
# TODO: Backpropagated error terms - Replace these values with your calculations.
# output_error_term = error * f'(x) = error * 1
output_error_term = error
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot( self.weights_hidden_to_output, output_error_term)
hidden_error_term = hidden_error * hidden_outputs * ( 1- hidden_outputs )
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:,None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term[:,None] * hidden_outputs[:,None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
hidden_inputs = np.dot(features,self.weights_input_to_hidden)# signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot( hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
"""
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
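A minimal sanity check before you start: a toy forward pass with made-up weights, plus a numerical check that $\sigma'(x) = \sigma(x)(1 - \sigma(x))$, the derivative the backward pass needs. This is a sketch, not the project solution; the weights here are arbitrary:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

X = np.array([0.5, -0.2, 0.1])         # one record with 3 features
w_ih = np.full((3, 2), 0.1)            # hypothetical input-to-hidden weights
w_ho = np.full((2, 1), 0.1)            # hypothetical hidden-to-output weights

hidden_outputs = sigmoid(X @ w_ih)     # sigmoid activation on the hidden layer
final_outputs = hidden_outputs @ w_ho  # output activation is f(x) = x

# Check the sigmoid derivative against a central finite difference
x, eps = 0.3, 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
analytic = sigmoid(x) * (1 - sigmoid(x))
```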
End of explanation
"""
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
"""
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
"""
import sys
### Set the hyperparameters here ###
# 1- 10000 0.01 5 = Training loss: 0.305 ... Validation loss: 0.485
# 2- 10000 0.45 2 = Training loss: 0.172 ... Validation loss: 0.328
# 4- 15000 0.45 2 = Training loss: 0.156 ... Validation loss: 0.256
# 5- 12000 0.45 2 = Training loss: 0.099 ... Validation loss: 0.179
# 3- 3000 0.45 2 = Training loss: 0.217 ... Validation loss: 0.393
# Increase the number of hidden nodes
# 6 12000 0.45 5 = Training loss: 0.072 ... Validation loss: 0.180 (predictions overshot the data in the plot)
# 7 12000 0.45 4 = Training loss: 0.065 ... Validation loss: 0.201 (validation got worse)
# 8 12000 0.45 3 = Training loss: 0.099 ... Validation loss: 0.173 (best validation so far)
# Change the learning rate
# 9 12000 0.44 3 = Training loss: 0.065 ... Validation loss: 0.140 (validation improved)
# 10 12000 0.43 3 = Training loss: 0.078 ... Validation loss: 0.152 (validation and training got worse)
# 11 12000 0.435 3 = Training loss: 0.074 ... Validation loss: 0.165 (worse)
# 12 12000 0.445 3 = Training loss: 0.080 ... Validation loss: 0.164 (worse)
# 13 12000 0.439 3 = Training loss: 0.066 ... Validation loss: 0.138 (best)
# 14 6000 0.439 3 = Training loss: 0.074 ... Validation loss: 0.186 (worse)
# 15 12000 0.439 3 = Training loss: 0.099 ... Validation loss: 0.185
# 16 12000 0.439 3 = Training loss: 0.081 ... Validation loss: 0.178
# As the reviewer suggests, let's add hidden nodes
# 4000 0.1 15 Training loss: 0.247 ... Validation loss: 0.434 (loss curve shows a high-learning-rate shape)
# 4000 0.01 15 Training loss: 0.499 ... Validation loss: 0.835 (loss curve shows a low-learning-rate shape)
# 4000 0.05 15 Training loss: 0.283 ... Validation loss: 0.452 (loss curve shows a high-learning-rate shape)
# 40000 0.027 15 Training loss: 0.143 ... Validation loss: 0.291
# 40000 0.015 15 Training loss: 0.229 ... Validation loss: 0.390 (got worse)
# 40000 0.03 15 Training loss: 0.112 ... Validation loss: 0.241
# 40000 0.035 15 Training loss: 0.101 ... Validation loss: 0.231
# 40000 0.04 15 Training loss: 0.080 ... Validation loss: 0.183
# 40000 0.045 15 Training loss: 0.071 ... Validation loss: 0.166
# 40000 0.05 15 Training loss: 0.061 ... Validation loss: 0.139 (best)
# Training loss: 0.072 ... Validation loss: 0.167
# 40000 0.06 15 Training loss: 0.057 ... Validation loss: 0.147
# 40000 0.05 20 Training loss: 0.069 ... Validation loss: 0.160
iterations = 40000
learning_rate = 0.05
hidden_nodes = 15
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
"""
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
More hidden nodes give the model more capacity to make accurate predictions. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, the model won't have enough capacity to learn; if it is too high, there are too many directions the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
"""
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
"""
tensorflow/federated | docs/openmined2020/openmined_conference_2020.ipynb | apache-2.0 | #@title Upgrade tensorflow_federated and load TensorBoard
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
%load_ext tensorboard
import sys
if not sys.warnoptions:
import warnings
warnings.simplefilter("ignore")
#@title
import collections
from matplotlib import pyplot as plt
from IPython.display import display, HTML, IFrame
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
np.random.seed(0)
def greetings():
display(HTML('<b><font size="6" color="#ff00f4">Greetings, virtual tutorial participants!</font></b>'))
return True
l = tff.federated_computation(greetings)()
"""
Explanation: Copyright 2020 The TensorFlow Authors.
Before we start
To edit the colab notebook, please go to "File" -> "Save a copy in Drive" and make any edits on your copy.
Before we start, please run the following to make sure that your environment is
correctly setup. If you don't see a greeting, please refer to the
Installation guide for instructions.
End of explanation
"""
# Code for loading federated data from TFF repository
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()
"""
Explanation: TensorFlow Federated for Image Classification
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/v0.14.0/docs/tutorials/federated_learning_for_image_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/federated/blob/v0.14.0/docs/tutorials/federated_learning_for_image_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Let's experiment with federated learning in simulation. In this tutorial, we use the classic MNIST training example to introduce the
Federated Learning (FL) API layer of TFF, tff.learning - a set of
higher-level interfaces that can be used to perform common types of federated
learning tasks, such as federated training, against user-supplied models
implemented in TensorFlow.
Tutorial Outline
We'll be training a model to perform image classification using the classic MNIST dataset, with the neural net learning to classify digits from images. In this case, we'll be simulating federated learning, with the training data distributed on different devices.
<p><b>Sections</b></p>
Load TFF Libraries.
Explore/preprocess federated EMNIST dataset.
Create a model.
Set up federated averaging process for training.
Analyze training metrics.
Set up federated evaluation computation.
Analyze evaluation metrics.
Preparing the input data
Let's start with the data. Federated learning requires a federated data set,
i.e., a collection of data from multiple users. Federated data is typically
non-i.i.d.,
which poses a unique set of challenges. Users typically have different distributions of data depending on usage patterns.
In order to facilitate experimentation, we seeded the TFF repository with a few
datasets.
Here's how we can load our sample dataset.
End of explanation
"""
len(emnist_train.client_ids)
# Let's look at the shape of our data
example_dataset = emnist_train.create_tf_dataset_for_client(
emnist_train.client_ids[0])
example_dataset.element_spec
# Let's select an example dataset from one of our simulated clients
example_dataset = emnist_train.create_tf_dataset_for_client(
emnist_train.client_ids[0])
# Your code to get an example element from one client:
example_element = next(iter(example_dataset))
example_element['label'].numpy()
plt.imshow(example_element['pixels'].numpy(), cmap='gray', aspect='equal')
plt.grid(False)
_ = plt.show()
"""
Explanation: The data sets returned by load_data() are instances of
tff.simulation.datasets.ClientData, an interface that allows you to enumerate the set
of users, to construct a tf.data.Dataset that represents the data of a
particular user, and to query the structure of individual elements.
Let's explore the dataset.
End of explanation
"""
## Example MNIST digits for one client
f = plt.figure(figsize=(20,4))
j = 0
for e in example_dataset.take(40):
plt.subplot(4, 10, j+1)
plt.imshow(e['pixels'].numpy(), cmap='gray', aspect='equal')
plt.axis('off')
j += 1
# Number of examples per label for a sample of clients
f = plt.figure(figsize=(12,7))
f.suptitle("Label Counts for a Sample of Clients")
for i in range(6):
ds = emnist_train.create_tf_dataset_for_client(emnist_train.client_ids[i])
k = collections.defaultdict(list)
for e in ds:
k[e['label'].numpy()].append(e['label'].numpy())
plt.subplot(2, 3, i+1)
plt.title("Client {}".format(i))
for j in range(10):
plt.hist(k[j], density=False, bins=[0,1,2,3,4,5,6,7,8,9,10])
# Let's play around with the emnist_train dataset.
# Let's explore the non-iid characteristic of the example data.
for i in range(5):
ds = emnist_train.create_tf_dataset_for_client(emnist_train.client_ids[i])
k = collections.defaultdict(list)
for e in ds:
k[e['label'].numpy()].append(e['pixels'].numpy())
f = plt.figure(i, figsize=(12,5))
f.suptitle("Client #{}'s Mean Image Per Label".format(i))
for j in range(10):
mn_img = np.mean(k[j],0)
plt.subplot(2, 5, j+1)
plt.imshow(mn_img.reshape((28,28)))#,cmap='gray')
plt.axis('off')
# Each client has different mean images -- each client will be nudging the model
# in their own directions.
"""
Explanation: Exploring non-iid data
End of explanation
"""
NUM_CLIENTS = 10
NUM_EPOCHS = 5
BATCH_SIZE = 20
SHUFFLE_BUFFER = 100
PREFETCH_BUFFER=10
def preprocess(dataset):
def batch_format_fn(element):
"""Flatten a batch `pixels` and return the features as an `OrderedDict`."""
return collections.OrderedDict(
x=tf.reshape(element['pixels'], [-1, 784]),
y=tf.reshape(element['label'], [-1, 1]))
return dataset.repeat(NUM_EPOCHS).shuffle(SHUFFLE_BUFFER).batch(
BATCH_SIZE).map(batch_format_fn).prefetch(PREFETCH_BUFFER)
"""
Explanation: Preprocessing the data
Since the data is already a tf.data.Dataset, preprocessing can be accomplished using Dataset transformations. See here for more detail on these transformations.
End of explanation
"""
preprocessed_example_dataset = preprocess(example_dataset)
sample_batch = tf.nest.map_structure(lambda x: x.numpy(),
next(iter(preprocessed_example_dataset)))
sample_batch
"""
Explanation: Let's verify this worked.
End of explanation
"""
def make_federated_data(client_data, client_ids):
return [
preprocess(client_data.create_tf_dataset_for_client(x))
for x in client_ids
]
"""
Explanation: Here's a simple helper function that will construct a list of datasets from the
given set of users as an input to a round of training or evaluation.
End of explanation
"""
sample_clients = emnist_train.client_ids[0:NUM_CLIENTS]
# Your code to get the federated dataset here for the sampled clients:
federated_train_data = make_federated_data(emnist_train, sample_clients)
print('Number of client datasets: {l}'.format(l=len(federated_train_data)))
print('First dataset: {d}'.format(d=federated_train_data[0]))
"""
Explanation: Now, how do we choose clients?
End of explanation
"""
def create_keras_model():
return tf.keras.models.Sequential([
tf.keras.layers.InputLayer(input_shape=(784,)),
tf.keras.layers.Dense(10, kernel_initializer='zeros'),
tf.keras.layers.Softmax(),
])
"""
Explanation: Creating a model with Keras
If you are using Keras, you likely already have code that constructs a Keras
model. Here's an example of a simple model that will suffice for our needs.
End of explanation
"""
## Centralized training with keras ---------------------------------------------
# This is separate from the TFF tutorial, and demonstrates how to train a
# Keras model in a centralized fashion (contrasting training in a federated env)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Preprocess the data (these are NumPy arrays)
x_train = x_train.reshape(60000, 784).astype("float32") / 255
y_train = y_train.astype("float32")
mod = create_keras_model()
mod.compile(
optimizer=tf.keras.optimizers.RMSprop(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]
)
h = mod.fit(
x_train,
y_train,
batch_size=64,
epochs=2
)
# ------------------------------------------------------------------------------
"""
Explanation: Centralized training with Keras
End of explanation
"""
def model_fn():
# We _must_ create a new model here, and _not_ capture it from an external
# scope. TFF will call this within different graph contexts.
keras_model = create_keras_model()
return tff.learning.from_keras_model(
keras_model,
input_spec=preprocessed_example_dataset.element_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
"""
Explanation: Federated training using a Keras model
In order to use any model with TFF, it needs to be wrapped in an instance of the
tff.learning.Model interface.
More keras metrics you can add are found here.
End of explanation
"""
iterative_process = tff.learning.build_federated_averaging_process(
model_fn,
client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
# Add server optimizer here!
server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))
"""
Explanation: Training the model on federated data
Now that we have a model wrapped as tff.learning.Model for use with TFF, we
can let TFF construct a Federated Averaging algorithm by invoking the helper
function tff.learning.build_federated_averaging_process, as follows.
End of explanation
"""
state = iterative_process.initialize()
"""
Explanation: What just happened? TFF has constructed a pair of federated computations and
packaged them into a tff.templates.IterativeProcess in which these computations
are available as a pair of properties initialize and next.
An iterative process will usually be driven by a control loop like:
```
def initialize():
...
def next(state):
...
iterative_process = IterativeProcess(initialize, next)
state = iterative_process.initialize()
for round in range(num_rounds):
state = iterative_process.next(state)
```
Let's invoke the initialize computation to construct the server state.
End of explanation
"""
# Run one single round of training.
state, metrics = iterative_process.next(state, federated_train_data)
print('round 1, metrics={}'.format(metrics['train']))
"""
Explanation: The second of the pair of federated computations, next, represents a single
round of Federated Averaging, which consists of pushing the server state
(including the model parameters) to the clients, on-device training on their
local data, collecting and averaging model updates, and producing a new updated
model at the server.
Let's run a single round of training and visualize the results. We can use the
federated data we've already generated above for a sample of users.
End of explanation
"""
NUM_ROUNDS = 11
for round_num in range(2, NUM_ROUNDS):
state, metrics = iterative_process.next(state, federated_train_data)
print('round {:2d}, metrics={}'.format(round_num, metrics['train']))
"""
Explanation: Let's run a few more rounds. As noted earlier, typically at this point you would
pick a subset of your simulation data from a new randomly selected sample of
users for each round in order to simulate a realistic deployment in which users
continuously come and go, but in this interactive notebook, for the sake of
demonstration we'll just reuse the same users, so that the system converges
quickly.
End of explanation
"""
#@test {"skip": true}
import os
import shutil
logdir = "/tmp/logs/scalars/training/"
if os.path.exists(logdir):
shutil.rmtree(logdir)
# Your code to create a summary writer:
summary_writer = tf.summary.create_file_writer(logdir)
state = iterative_process.initialize()
"""
Explanation: Training loss is decreasing after each round of federated training, indicating
the model is converging. There are some important caveats with these training
metrics, however, see the section on Evaluation later in this tutorial.
Displaying model metrics in TensorBoard
Next, let's visualize the metrics from these federated computations using Tensorboard.
Let's start by creating the directory and the corresponding summary writer to write the metrics to.
End of explanation
"""
#@test {"skip": true}
with summary_writer.as_default():
for round_num in range(1, NUM_ROUNDS):
state, metrics = iterative_process.next(state, federated_train_data)
for name, value in metrics['train'].items():
tf.summary.scalar(name, value, step=round_num)
"""
Explanation: Plot the relevant scalar metrics with the same summary writer.
End of explanation
"""
#@test {"skip": true}
%tensorboard --logdir /tmp/logs/scalars/ --port=0
"""
Explanation: Start TensorBoard with the root log directory specified above. It can take a few seconds for the data to load.
End of explanation
"""
# Construct federated evaluation computation here:
evaluation = tff.learning.build_federated_evaluation(model_fn)
"""
Explanation: In order to view evaluation metrics the same way, you can create a separate eval folder, like "logs/scalars/eval", to write to TensorBoard.
Evaluation
To perform evaluation on federated data, you can construct another federated
computation designed for just this purpose, using the
tff.learning.build_federated_evaluation function, and passing in your model
constructor as an argument.
End of explanation
"""
import random
shuffled_ids = emnist_test.client_ids.copy()
random.shuffle(shuffled_ids)
sample_clients = shuffled_ids[0:NUM_CLIENTS]
federated_test_data = make_federated_data(emnist_test, sample_clients)
len(federated_test_data), federated_test_data[0]
# Run evaluation on the test data here, using the federated model produced from
# training:
test_metrics = evaluation(state.model, federated_test_data)
str(test_metrics)
"""
Explanation: Now, let's compile a test sample of federated data and rerun evaluation on the
test data. The data will come from a different sample of users, drawn from a
distinct held-out data set.
End of explanation
"""
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()
NUM_CLIENTS = 10
BATCH_SIZE = 20
def preprocess(dataset):
def batch_format_fn(element):
"""Flatten a batch of EMNIST data and return a (features, label) tuple."""
return (tf.reshape(element['pixels'], [-1, 784]),
tf.reshape(element['label'], [-1, 1]))
return dataset.batch(BATCH_SIZE).map(batch_format_fn)
client_ids = np.random.choice(emnist_train.client_ids, size=NUM_CLIENTS, replace=False)
federated_train_data = [preprocess(emnist_train.create_tf_dataset_for_client(x))
for x in client_ids
]
"""
Explanation: This concludes the tutorial. We encourage you to play with the
parameters (e.g., batch sizes, number of users, epochs, learning rates, etc.), to modify the code above to simulate training on random samples of users in
each round, and to explore the other tutorials we've developed.
Build your own FL algorithms
In the previous tutorials, we learned how to set up model and data pipelines, and use these to perform federated training using the tff.learning API.
Of course, this is only the tip of the iceberg when it comes to FL research. In this tutorial, we are going to discuss how to implement federated learning algorithms without deferring to the tff.learning API. We aim to accomplish the following:
Goals:
Understand the general structure of federated learning algorithms.
Explore the Federated Core of TFF.
Use the Federated Core to implement Federated Averaging directly.
Preparing the input data
We first load and preprocess the EMNIST dataset included in TFF. We essentially use the same code as in the first tutorial.
End of explanation
"""
def create_keras_model():
return tf.keras.models.Sequential([
tf.keras.layers.InputLayer(input_shape=(784,)),
tf.keras.layers.Dense(10, kernel_initializer='zeros'),
tf.keras.layers.Softmax(),
])
"""
Explanation: Preparing the model
We use the same model as the first tutorial, which has a single hidden layer, followed by a softmax layer.
End of explanation
"""
def model_fn():
keras_model = create_keras_model()
return tff.learning.from_keras_model(
keras_model,
input_spec=federated_train_data[0].element_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
"""
Explanation: We wrap this Keras model as a tff.learning.Model.
End of explanation
"""
def initialize_fn():
model = model_fn()
return model.weights.trainable
"""
Explanation: Customizing the FL Algorithm
While the tff.learning API encompasses many variants of Federated Averaging, there are many other algorithms that do not fit neatly into this framework. For example, you may want to add regularization or clipping, or experiment with more complicated algorithms such as federated GAN training. You may instead be interested in federated analytics.
For these more advanced algorithms, we'll have to write our own custom FL algorithm.
In general, FL algorithms have 4 main components:
A server-to-client broadcast step.
A local client update step.
A client-to-server upload step.
A server update step.
In TFF, we generally represent federated algorithms as an IterativeProcess. This is simply a class that contains an initialize_fn and a next_fn. The initialize_fn will be used to initialize the server, and the next_fn will perform one communication round of federated averaging. Let's write a skeleton of what our iterative process for FedAvg should look like.
First, we have an initialize function that simply creates a tff.learning.Model, and returns its trainable weights.
End of explanation
"""
def next_fn(server_weights, federated_dataset):
# Broadcast the server weights to the clients.
server_weights_at_client = broadcast(server_weights)
# Each client computes their updated weights.
client_weights = client_update(federated_dataset, server_weights_at_client)
# The server averages these updates.
mean_client_weights = mean(client_weights)
# The server updates its model.
server_weights = server_update(mean_client_weights)
return server_weights
"""
Explanation: This function looks good, but as we will see later, we will need to make a small modification to make it a TFF computation.
We also want to sketch the next_fn.
End of explanation
"""
@tf.function
def client_update(model, dataset, server_weights, client_optimizer):
"""Performs training (using the server model weights) on the client's dataset."""
# Initialize the client model with the current server weights.
client_weights = model.weights.trainable
# Assign the server weights to the client model.
tf.nest.map_structure(lambda x, y: x.assign(y),
client_weights, server_weights)
# Use the client_optimizer to update the local model.
for batch in dataset:
with tf.GradientTape() as tape:
# Compute a forward pass on the batch of data
outputs = model.forward_pass(batch)
# Compute the corresponding gradient
grads = tape.gradient(outputs.loss, client_weights)
grads_and_vars = zip(grads, client_weights)
# Apply the gradient using a client optimizer.
client_optimizer.apply_gradients(grads_and_vars)
return client_weights
"""
Explanation: We'll focus on implementing these four components separately. We'll first focus on the parts that can be implemented in pure TensorFlow, namely the client and server update steps.
TensorFlow Blocks
Client update
We will use our tff.learning.Model to do client training in essentially the same way you would train a TF model. In particular, we will use tf.GradientTape to compute the gradient on batches of data, then apply these gradient using a client_optimizer.
Note that each tff.learning.Model instance has a weights attribute with two sub-attributes:
trainable: A list of the tensors corresponding to trainable layers.
non_trainable: A list of the tensors corresponding to non-trainable layers.
For our purposes, we will only use the trainable weights (as our model only has those!).
End of explanation
"""
@tf.function
def server_update(model, mean_client_weights):
"""Updates the server model weights as the average of the client model weights."""
model_weights = model.weights.trainable
# Assign the mean client weights to the server model.
tf.nest.map_structure(lambda x, y: x.assign(y),
model_weights, mean_client_weights)
return model_weights
"""
Explanation: Server Update
The server update will require even less effort. We will implement vanilla federated averaging, in which we simply replace the server model weights by the average of the client model weights. Again, we will only focus on the trainable weights.
End of explanation
"""
federated_float_on_clients = tff.type_at_clients(tf.float32)
"""
Explanation: Note that the code snippet above is clearly overkill, as we could simply return mean_client_weights. However, more advanced implementations of Federated Averaging could use mean_client_weights with more sophisticated techniques, such as momentum or adaptivity.
So far, we've only written pure TensorFlow code. This is by design, as TFF allows you to use much of the TensorFlow code you're already familiar with. However, now we have to specify the orchestration logic, that is, the logic that dictates what the server broadcasts to the client, and what the client uploads to the server.
This will require the "Federated Core" of TFF.
Introduction to the Federated Core
The Federated Core (FC) is a set of lower-level interfaces that serve as the foundation for the tff.learning API. However, these interfaces are not limited to learning. In fact, they can be used for analytics and many other computations over distributed data.
At a high-level, the federated core is a development environment that enables compactly expressed program logic to combine TensorFlow code with distributed communication operators (such as distributed sums and broadcasts). The goal is to give researchers and practitioners explicit control over the distributed communication in their systems, without requiring system implementation details (such as specifying point-to-point network message exchanges).
One key point is that TFF is designed for privacy-preservation. Therefore, it allows explicit control over where data resides, to prevent unwanted accumulation of data at the centralized server location.
Federated data
Similar to "Tensor" concept in TensorFlow, which is one of the fundamental concepts, a key concept in TFF is "federated data", which refers to a collection of data items hosted across a group of devices in a distributed system (eg. client datasets, or the server model weights). We model the entire collection of data items across all devices as a single federated value.
For example, suppose we have client devices that each have a float representing the temperature of a sensor. We could represent it as a federated float by
End of explanation
"""
str(federated_float_on_clients)
"""
Explanation: Federated types are specified by a type T of its member constituents (eg. tf.float32) and a group G of devices. We will focus on the cases where G is either tff.CLIENTS or tff.SERVER. Such a federated type is represented as {T}@G, as shown below.
End of explanation
"""
@tff.federated_computation(tff.type_at_clients(tf.float32))
def get_average_temperature(client_temperatures):
return tff.federated_mean(client_temperatures)
"""
Explanation: Why do we care so much about placements? A key goal of TFF is to enable writing code that could be deployed on a real distributed system. This means that it is vital to reason about which subsets of devices execute which code, and where different pieces of data reside.
TFF focuses on three things: data, where the data is placed, and how the data is being transformed. The first two are encapsulated in federated types, while the last is encapsulated in federated computations.
Federated computations
TFF is a strongly-typed functional programming environment whose basic units are federated computations. These are pieces of logic that accept federated values as input, and return federated values as output.
For example, suppose we wanted to average the temperatures on our client sensors. We could define the following (using our federated float):
End of explanation
"""
str(get_average_temperature.type_signature)
"""
Explanation: You might ask, how is this different from the tf.function decorator in TensorFlow? The key answer is that the code generated by tff.federated_computation is neither TensorFlow nor Python code; it is a specification of a distributed system in an internal platform-independent glue language.
While this may sound complicated, you can think of TFF computations as functions with well-defined type signatures. These type signatures can be directly queried.
End of explanation
"""
get_average_temperature([68.5, 70.3, 69.8])
"""
Explanation: This tff.federated_computation accepts arguments of federated type {float32}@CLIENTS, and returns values of federated type {float32}@SERVER. Federated computations may also go from server to client, from client to client, or from server to server. Federated computations can also be composed like normal functions, as long as their type signatures match up.
To support development, TFF allows you to invoke a tff.federated_computation as a Python function. For example, we can call
End of explanation
"""
@tff.tf_computation(tf.float32)
def add_half(x):
return tf.add(x, 0.5)
"""
Explanation: Non-eager computations and TensorFlow
There are two key restrictions to be aware of. First, when the Python interpreter encounters a tff.federated_computation decorator, the function is traced once and serialized for future use. Therefore, TFF computations are fundamentally non-eager. This behavior is somewhat analogous to that of the tf.function decorator in TensorFlow.
Second, a federated computation can only consist of federated operators (such as tff.federated_mean); it cannot contain TensorFlow operations directly. TensorFlow code must be confined to blocks decorated with tff.tf_computation. Most ordinary TensorFlow code can be directly decorated, such as the following function that takes a number and adds 0.5 to it.
End of explanation
"""
str(add_half.type_signature)
"""
Explanation: These also have type signatures, but without placements. For example, we can call
End of explanation
"""
@tff.federated_computation(tff.type_at_clients(tf.float32))
def add_half_on_clients(x):
return tff.federated_map(add_half, x)
"""
Explanation: Here we see an important difference between tff.federated_computation and tff.tf_computation. The former has explicit placements, while the latter does not.
We can use tff.tf_computation blocks in federated computations by specifying placements. Let's create a function that adds half, but only to federated floats at the clients. We can do this by using tff.federated_map, which applies a given tff.tf_computation, while preserving the placement.
End of explanation
"""
str(add_half_on_clients.type_signature)
"""
Explanation: This function is almost identical to add_half, except that it only accepts values with placement at tff.CLIENTS, and returns values with the same placement. We can see this in its type signature:
End of explanation
"""
@tff.tf_computation
def server_init():
model = model_fn()
return model.weights.trainable
"""
Explanation: In summary:
TFF operates on federated values.
Each federated value has a federated type, with a type (eg. tf.float32) and a placement (eg. tff.CLIENTS).
Federated values can be transformed using federated computations, which must be decorated with tff.federated_computation and a federated type signature.
TensorFlow code must be contained in blocks with tff.tf_computation decorators.
These blocks can then be incorporated into federated computations.
Building your own FL Algorithm (Part 2)
Now that we've peeked at the Federated Core, we can build our own federated learning algorithm. Remember that above, we defined an initialize_fn and next_fn for our algorithm. The next_fn will make use of the client_update and server_update we defined using pure TensorFlow code.
However, in order to make our algorithm a federated computation, we will need both the next_fn and initialize_fn to be tff.federated_computations.
TensorFlow Federated blocks
Creating the initialization computation
The initialize function will be quite simple: We will create a model using model_fn. However, remember that we must separate out our TensorFlow code using tff.tf_computation.
End of explanation
"""
@tff.federated_computation
def initialize_fn():
return tff.federated_value(server_init(), tff.SERVER)
"""
Explanation: We can then pass this directly into a federated computation using tff.federated_value.
End of explanation
"""
whimsy_model = model_fn()
tf_dataset_type = tff.SequenceType(whimsy_model.input_spec)
"""
Explanation: Creating the next_fn
We now use our client and server update code to write the actual algorithm. We will first turn our client_update into a tff.tf_computation that accepts a client dataset and server weights, and outputs an updated client weights tensor.
We will need the corresponding types to properly decorate our function. Luckily, the type of the server weights can be extracted directly from our model.
End of explanation
"""
str(tf_dataset_type)
"""
Explanation: Let's look at the dataset type signature. Remember that we took 28 by 28 images (with integer labels) and flattened them.
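For reference, the flattening itself is simple; this stdlib sketch (with a fake all-zero image) shows the shape we expect each example to have:

```python
# A 28-by-28 image becomes a single vector of 28 * 28 = 784 pixels,
# paired with its integer label.
image = [[0.0] * 28 for _ in range(28)]   # one fake 28x28 image
label = 7

flat = [pixel for row in image for pixel in row]
example = (flat, label)

assert len(example[0]) == 784
assert example[1] == 7
```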
End of explanation
"""
model_weights_type = server_init.type_signature.result
"""
Explanation: We can also extract the model weights type by using our server_init function above.
End of explanation
"""
str(model_weights_type)
"""
Explanation: Examining the type signature, we'll be able to see the architecture of our model!
End of explanation
"""
@tff.tf_computation(tf_dataset_type, model_weights_type)
def client_update_fn(tf_dataset, server_weights):
model = model_fn()
client_optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
return client_update(model, tf_dataset, server_weights, client_optimizer)
"""
Explanation: We can now create our tff.tf_computation for the client update.
End of explanation
"""
@tff.tf_computation(model_weights_type)
def server_update_fn(mean_client_weights):
model = model_fn()
return server_update(model, mean_client_weights)
"""
Explanation: The tff.tf_computation version of the server update can be defined in a similar way, using types we've already extracted.
End of explanation
"""
federated_server_type = tff.type_at_server(model_weights_type)
federated_dataset_type = tff.type_at_clients(tf_dataset_type)
"""
Explanation: Last, but not least, we need to create the tff.federated_computation that brings this all together. This function will accept two federated values, one corresponding to the server weights (with placement tff.SERVER), and the other corresponding to the client datasets (with placement tff.CLIENTS).
Note that both these types were defined above! We simply need to give them the proper placement using tff.type_at_server and tff.type_at_clients.
End of explanation
"""
@tff.federated_computation(federated_server_type, federated_dataset_type)
def next_fn(server_weights, federated_dataset):
# Broadcast the server weights to the clients.
server_weights_at_client = tff.federated_broadcast(server_weights)
# Each client computes their updated weights.
client_weights = tff.federated_map(
client_update_fn, (federated_dataset, server_weights_at_client))
# The server averages these updates.
mean_client_weights = tff.federated_mean(client_weights)
# The server updates its model.
server_weights = tff.federated_map(server_update_fn, mean_client_weights)
return server_weights
"""
Explanation: Remember the 4 elements of an FL algorithm?
A server-to-client broadcast step.
A local client update step.
A client-to-server upload step.
A server update step.
Now that we've built up the above, each part can be compactly represented as a single line of TFF code. This simplicity is why we had to take extra care to specify things such as federated types!
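Before reading the TFF version, it may help to see the same four steps in a pure-Python cartoon; the helper names and the toy client-update rule below are invented for illustration:

```python
import statistics

# One federated averaging round over scalar "weights", with each of the
# four steps labeled. Real client training is replaced by a toy nudge
# toward the local data mean.
def broadcast(server_weight, n_clients):           # 1. server -> clients
    return [server_weight] * n_clients

def client_update(weight, data):                   # 2. local client update
    return weight + 0.1 * (statistics.mean(data) - weight)

def run_round(server_weight, client_datasets):
    local = broadcast(server_weight, len(client_datasets))
    client_weights = [client_update(w, d)
                      for w, d in zip(local, client_datasets)]
    mean_weight = statistics.mean(client_weights)  # 3. upload and average
    return mean_weight                             # 4. server update

new_weight = run_round(0.0, [[1.0, 3.0], [5.0, 7.0]])
assert abs(new_weight - 0.4) < 1e-9
```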
End of explanation
"""
federated_algorithm = tff.templates.IterativeProcess(
initialize_fn=initialize_fn,
next_fn=next_fn
)
"""
Explanation: We now have a tff.federated_computation for both the algorithm initialization, and for running one step of the algorithm. To finish our algorithm, we pass these into tff.templates.IterativeProcess.
End of explanation
"""
str(federated_algorithm.initialize.type_signature)
"""
Explanation: Let's look at the type signature of the initialize and next functions of our iterative process.
End of explanation
"""
str(federated_algorithm.next.type_signature)
"""
Explanation: This reflects the fact that federated_algorithm.initialize is a no-arg function that returns a single-layer model (with a 784-by-10 weight matrix, and 10 bias units).
End of explanation
"""
central_emnist_test = emnist_test.create_tf_dataset_from_all_clients().take(1000)
central_emnist_test = preprocess(central_emnist_test)
"""
Explanation: Here, we see that federated_algorithm.next accepts a server model and client data, and returns an updated server model.
Evaluating the algorithm
Let's run a few rounds, and see how the loss changes. First, we will define an evaluation function using the centralized approach discussed in the second tutorial.
We first create a centralized evaluation dataset, and then apply the same preprocessing we used for the training data.
Note that we only take the first 1000 elements for reasons of computational efficiency, but typically we'd use the entire test dataset.
End of explanation
"""
def evaluate(server_state):
keras_model = create_keras_model()
keras_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]
)
keras_model.set_weights(server_state)
keras_model.evaluate(central_emnist_test)
"""
Explanation: Next, we write a function that accepts a server state and uses Keras to evaluate on the test dataset. If you're familiar with tf.keras, this will all look familiar, though note the use of set_weights!
End of explanation
"""
server_state = federated_algorithm.initialize()
evaluate(server_state)
"""
Explanation: Now, let's initialize our algorithm and evaluate on the test set.
End of explanation
"""
for round_num in range(15):
server_state = federated_algorithm.next(server_state, federated_train_data)
evaluate(server_state)
"""
Explanation: Let's train for a few rounds and see if anything changes.
End of explanation
"""
|
JeffAbrahamson/MLWeek | practicum/teste_installation.ipynb | gpl-3.0 | import logging
import time
"""
Explanation: Confirming the Python installation
The sole purpose of this notebook is to let you confirm that Python is properly installed. The exercises were tested with Python 2.7.12 and IPython 5.1.0, and there is every reason to believe that any version of Python 2.7 would suffice.
The easiest check is simply to run the whole notebook: "Cell -> Run All" (the menu may appear in French depending on your installation). Then scan this notebook for errors that indicate a missing module.
The following modules should be installed automatically with Python.
End of explanation
"""
import numpy as np
import scipy.stats as ss
import matplotlib.pyplot as plt
import sklearn
import pandas as pd
from sklearn import datasets
from sklearn import svm
import pylab as pl
from matplotlib.colors import ListedColormap
import sklearn as sk
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model.logistic import LogisticRegression
import matplotlib.font_manager
import matplotlib
%matplotlib inline
"""
Explanation: You installed the following modules with pip from requirements.txt. Alternatively, if you installed Anaconda (Mac or Windows), all of these modules are already included.
End of explanation
"""
import tensorflow
"""
Explanation: Installing TensorFlow works differently: follow the instructions here.
End of explanation
"""
x = np.linspace(-100, 100, 201)
plt.plot(x, x * x)
"""
Explanation: Finally, to test the trickiest part, we will draw a parabola. You should see a parabola below.
End of explanation
"""
|
morganics/BayesPy | examples/notebook/iris_anomaly_detection.ipynb | apache-2.0 | %matplotlib notebook
import pandas as pd
import sys
sys.path.append("../../../bayespy")
import bayespy
from bayespy.network import Builder as builder
import logging
import os
import matplotlib.pyplot as plt
from IPython.display import display
logger = logging.getLogger()
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.INFO)
bayespy.jni.attach(logger)
db_folder = bayespy.utils.get_path_to_parent_dir("")
iris = pd.read_csv(os.path.join(db_folder, "data/iris.csv"), index_col=False)
"""
Explanation: Anomaly detection using the Iris dataset
The Iris dataset is not perhaps the most natural dataset for anomaly detection, so I will get round to writing one with a slightly more appropriate dataset at some point.
Regardless, the principles are the same; unsupervised learning of a 'normal' model, e.g. generating a model of normality and using that to identify anomalous data points. It's then possible to use the Log Likelihood score to identify points that aren't 'normal'.
In practical terms there are some hurdles, in that ideally the model of normality should be (in general) normal, and abnormal points in the system that you're monitoring should be removed (it really is quite sensitive to training data), whether manually or automatically. If performing automatic removal you can use a more constrained model rather than using exactly the same structure. On the other hand, there are a host of benefits: it's possible to identify the likely variables that caused the abnormality, and it's also possible to identify what the model is expecting it to be. In that regard (unlike many other approaches), the model can essentially be debugged by identifying which variables are not performing in the way that would be expected. This article won't cover those two additional points, but I will return to it at a later date.
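As a minimal stdlib sketch of the idea (a single Gaussian standing in for the Bayesian network, with invented data), scoring by log-likelihood looks like this; lower (more negative) scores flag anomalies:

```python
import math
import statistics

# Fit a "normal" model to normal-looking data, then score new points by
# their log-likelihood under that model.
def gaussian_loglik(x, mu, sigma):
    return (-math.log(sigma * math.sqrt(2.0 * math.pi))
            - (x - mu) ** 2 / (2.0 * sigma ** 2))

normal_data = [4.9, 5.0, 5.1, 5.0, 4.95, 5.05]
mu = statistics.mean(normal_data)
sigma = statistics.pstdev(normal_data)

typical = gaussian_loglik(5.0, mu, sigma)
outlier = gaussian_loglik(9.0, mu, sigma)
assert outlier < typical  # the far-away point is far less likely
```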
First off, define all the imports.
End of explanation
"""
network = bayespy.network.create_network()
cluster = builder.create_cluster_variable(network, 4)
petal_length = builder.create_continuous_variable(network, "petal_length")
petal_width = builder.create_continuous_variable(network, "petal_width")
sepal_length = builder.create_continuous_variable(network, "sepal_length")
sepal_width = builder.create_continuous_variable(network, "sepal_width")
nodes = [petal_length, petal_width, sepal_length, sepal_width]
for i, node in enumerate(nodes):
builder.create_link(network, cluster, node)
for j in range(i+1, len(nodes)):
builder.create_link(network, node, nodes[j])
layout = bayespy.visual.NetworkLayout(network)
graph = layout.build_graph()
pos = layout.fruchterman_reingold_layout(graph)
layout.visualise(graph, pos)
"""
Explanation: Rather than using a template to build the network, it's fairly easy to define it by hand. The network looks something like the following:
End of explanation
"""
# build the 'normal' model on two of the classes
model = bayespy.model.NetworkModel(network, logger)
with bayespy.data.DataSet(iris.drop('iris_class', axis=1), db_folder, logger) as dataset:
subset = dataset.subset(
iris[(iris.iris_class == "Iris-versicolor") | (iris.iris_class == "Iris-virginica")].index.tolist())
model.train(subset)
"""
Explanation: Using the above network, train the model on only two of the classes (for clarity, I chose the two that are least separated, versicolor and virginica)
End of explanation
"""
with bayespy.data.DataSet(iris.drop('iris_class', axis=1), db_folder, logger) as dataset:
# get the loglikelihood value for the whole model on each individual sample,
# the lower the loglikelihood value the less likely the data point has been
# generated by the model.
results = model.batch_query(dataset, [bayespy.model.QueryModelStatistics()])
display(results)
cmap = plt.cm.get_cmap('Blues_r')
fig = plt.figure(figsize=(10, 10))
k = 1
for i, v in enumerate(nodes):
for j in range(i+1, len(nodes)):
v_name = v.getName()
v1_name = nodes[j].getName()
ax = fig.add_subplot(3,2,k)
ax.set_title("{} vs {}".format(v_name, v1_name))
h = ax.scatter(x=iris[v_name].tolist(), y=iris[v1_name].tolist(), c=results['loglikelihood'].tolist(),
vmin=results.loglikelihood.min(), vmax=results.loglikelihood.max(), cmap=cmap
)
k+=1
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(h, cax=cbar_ax)
plt.show()
"""
Explanation: The network is then ready for anomaly detection; this entails applying the entire dataset to the trained model, and plotting the results. The Log Likelihood will always give a negative value, the closer to 0, the more normal the applied data sample is.
End of explanation
"""
|
ananswam/bioscrape | inference examples/Stochastic Inference.ipynb | mit | %matplotlib inline
%config InlineBackend.figure_format = "retina"
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
rcParams["figure.dpi"] = 100
rcParams["font.size"] = 20
%matplotlib inline
import bioscrape as bs
from bioscrape.types import Model
from bioscrape.simulator import py_simulate_model
import numpy as np
import pylab as plt
import pandas as pd
species = ['I','X']
reactions = [(['X'], [], 'massaction', {'k':'d1'}), ([], ['X'], 'hillpositive', {'s1':'I', 'k':'k1', 'K':'KR', 'n':2})]
k1 = 50.0
d1 = 0.5
params = [('k1', k1), ('d1', d1), ('KR', 20)]
initial_condition = {'X':0, 'I':0}
M = Model(species = species, reactions = reactions, parameters = params,
initial_condition_dict = initial_condition)
"""
Explanation: Parameter identification example
Here is a simple toy model that we use to demonstrate the working of the inference package
$\emptyset \xrightarrow[]{k_1(I)} X \; \; \; \; X \xrightarrow[]{d_1} \emptyset$
$ k_1(I) = \frac{k_1 I^2}{K_R^2 + I^2}$
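For concreteness, the Hill activation rate above can be transcribed directly, using the parameter values set in the model cell (k1 = 50, KR = 20, n = 2):

```python
# Hill-type production rate: k1 * I**n / (KR**n + I**n).
def hill_positive(I, k1=50.0, KR=20.0, n=2):
    return k1 * I ** n / (KR ** n + I ** n)

assert hill_positive(0.0) == 0.0
assert abs(hill_positive(20.0) - 25.0) < 1e-9   # half-maximal at I == KR
assert abs(hill_positive(1e6) - 50.0) < 1e-3    # saturates at k1
```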
End of explanation
"""
num_trajectories = 4 # each with different initial condition
initial_condition_list = [{'I':5},{'I':10},{'I':15},{'I':20}]
timepoints = np.linspace(0,5,100)
result_list = []
for init_cond in initial_condition_list:
M.set_species(init_cond)
result = py_simulate_model(timepoints, Model = M)['X']
result_list.append(result)
plt.plot(timepoints, result, label = 'I =' + str(list(init_cond.values())[0]))
plt.xlabel('Time')
plt.ylabel('[X]')
plt.legend()
plt.show()
exp_data = pd.DataFrame()
exp_data['timepoints'] = timepoints
for i in range(num_trajectories):
exp_data['X' + str(i)] = result_list[i] + np.random.normal(5, 1, size = np.shape(result))
plt.plot(timepoints, exp_data['X' + str(i)], 'r', alpha = 0.3)
plt.plot(timepoints, result_list[i], 'k', linewidth = 3)
plt.xlabel('Time')
plt.ylabel('[X]')
plt.show()
"""
Explanation: Generate experimental data for multiple initial conditions
Simulate bioscrape model
Add Gaussian noise of non-zero mean and non-zero variance to the simulation
Create appropriate Pandas dataframes
Write the data to a CSV file
End of explanation
"""
exp_data.to_csv('birth_death_data_multiple_conditions.csv')
exp_data
"""
Explanation: CSV looks like:
End of explanation
"""
from bioscrape.inference import py_inference
# Import data from CSV
# Import a CSV file for each experiment run
exp_data = []
for i in range(num_trajectories):
df = pd.read_csv('birth_death_data_multiple_conditions.csv', usecols = ['timepoints', 'X'+str(i)])
df.columns = ['timepoints', 'X']
exp_data.append(df)
prior = {'k1' : ['uniform', 0, 100]}
sampler, pid = py_inference(Model = M, exp_data = exp_data, measurements = ['X'], time_column = ['timepoints'],
nwalkers = 15, init_seed = 0.15, nsteps = 5000, sim_type = 'stochastic',
params_to_estimate = ['k1'], prior = prior, plot_show = False, convergence_check = False)
pid.plot_mcmc_results(sampler, convergence_check = False);
"""
Explanation: Run the bioscrape MCMC algorithm to identify parameters from the experimental data
End of explanation
"""
M_fit = M
timepoints = pid.timepoints[0]
flat_samples = sampler.get_chain(discard=200, thin=15, flat=True)
inds = np.random.randint(len(flat_samples), size=200)
for init_cond in initial_condition_list:
for ind in inds:
sample = flat_samples[ind]
for pi, pi_val in zip(pid.params_to_estimate, sample):
M_fit.set_parameter(pi, pi_val)
M_fit.set_species(init_cond)
plt.plot(timepoints, py_simulate_model(timepoints, Model= M_fit)['X'], "C1", alpha=0.6)
# plt.errorbar(, y, yerr=yerr, fmt=".k", capsize=0)
for i in range(num_trajectories):
plt.plot(timepoints, list(pid.exp_data[i]['X']), 'b', alpha = 0.1)
plt.plot(timepoints, result, "k", label="original model")
plt.legend(fontsize=14)
plt.xlabel("Time")
plt.ylabel("[X]");
"""
Explanation: Check mcmc_results.csv for the results of the MCMC procedure and perform your own analysis.
OR
You can also plot the results as follows
End of explanation
"""
# prior = {'d1' : ['gaussian', 0, 10, 1e-3], 'k1' : ['gaussian', 0, 50, 1e-4]}
prior = {'d1' : ['uniform', 0.1, 10],'k1' : ['uniform',0,100],'KR' : ['uniform',0,100]}
sampler, pid = py_inference(Model = M, exp_data = exp_data, measurements = ['X'], time_column = ['timepoints'],
nwalkers = 15, init_seed = 0.15, nsteps = 10000, sim_type = 'stochastic',
params_to_estimate = ['d1','k1','KR'], prior = prior, plot_show = True, convergence_check = False)
M_fit = M
timepoints = pid.timepoints[0]
flat_samples = sampler.get_chain(discard=200, thin=15, flat=True)
inds = np.random.randint(len(flat_samples), size=200)
for init_cond in initial_condition_list:
for ind in inds:
sample = flat_samples[ind]
for pi, pi_val in zip(pid.params_to_estimate, sample):
M_fit.set_parameter(pi, pi_val)
M_fit.set_species(init_cond)
plt.plot(timepoints, py_simulate_model(timepoints, Model= M_fit)['X'], "C1", alpha=0.6)
# plt.errorbar(, y, yerr=yerr, fmt=".k", capsize=0)
for i in range(num_trajectories):
plt.plot(timepoints, list(pid.exp_data[i]['X']), 'b', alpha = 0.2)
plt.plot(timepoints, result_list[i], "k")
# plt.legend(fontsize=14)
plt.xlabel("Time")
plt.ylabel("[X]");
"""
Explanation: Let us now try to fit all three parameters to see if results improve:
End of explanation
"""
|
RedHatInsights/insights-core | docs/notebooks/Insights Core Tutorial.ipynb | apache-2.0 | import sys
sys.path.insert(0, "../..")
from insights.core import dr
# Here's our component type with the clever name "component."
# Insights Core provides several types that we'll come to later.
class component(dr.ComponentType):
pass
"""
Explanation: Red Hat Insights Core
Insights Core is a framework for collecting and processing data about systems. It allows users to write components that collect and transform sets of raw data into typed python objects, which can then be used in rules that encapsulate knowledge about them.
To accomplish this the framework uses an internal dependency engine. Components in the form of class or function definitions declare dependencies on other components with decorators, and the resulting graphs can be executed once all components you care about have been loaded.
This is an introduction to the dependency system followed by a summary of the standard components Insights Core provides.
Components
To make a component, we first have to create a component type, which is a decorator we'll use to declare it.
End of explanation
"""
import random
# Make two components with no dependencies
@component()
def rand():
return random.random()
@component()
def three():
return 3
# Make a component that depends on the other two. Notice that we depend on two
# things, and there are two arguments to the function.
@component(rand, three)
def mul_things(x, y):
return x * y
# Now that we have a few components defined, let's run them.
from pprint import pprint
# If you call run with no arguments, all components of every type (with a few caveats
# I'll address later) are run, and their values or exceptions are collected in an
# object called a broker. The broker is like a fancy dictionary that keeps up with
# the state of an evaluation.
broker = dr.run()
pprint(broker.instances)
"""
Explanation: How do I use it?
End of explanation
"""
class stage(dr.ComponentType):
pass
@stage(mul_things)
def spam(m):
return int(m)
broker = dr.run()
print("All Instances")
pprint(broker.instances)
print()
print("Components")
pprint(broker.get_by_type(component))
print()
print("Stages")
pprint(broker.get_by_type(stage))
"""
Explanation: Component Types
We can define components of different types by creating different decorators.
End of explanation
"""
class thing(dr.ComponentType):
def invoke(self, broker):
return self.component(broker)
@thing(rand, three)
def stuff(broker):
r = broker[rand]
t = broker[three]
return r + t
broker = dr.run()
print(broker[stuff])
"""
Explanation: Component Invocation
You can customize how components of a given type get called by overriding the invoke method of your ComponentType class. For example, if you want your components to receive the broker itself instead of individual arguments, you can do the following.
End of explanation
"""
@stage()
def boom():
raise Exception("Boom!")
broker = dr.run()
e = broker.exceptions[boom][0]
t = broker.tracebacks[e]
pprint(e)
print()
print(t)
"""
Explanation: Notice that broker can be used as a dictionary to get the value of components that have already executed without directly looking at the broker.instances attribute.
Exception Handling
When a component raises an exception, the exception is recorded in a dictionary whose key is the component and whose value is a list of exceptions. The traceback related to each exception is recorded in a dictionary of exceptions to tracebacks. We record exceptions in a list because some components may generate more than one value. We'll come to that later.
End of explanation
"""
@stage("where's my stuff at?")
def missing_stuff(s):
return s
broker = dr.run()
print(broker.missing_requirements[missing_stuff])
@stage("a", "b", [rand, "d"], ["e", "f"])
def missing_more_stuff(a, b, c, d, e, f):
return a + b + c + d + e + f
broker = dr.run()
print(broker.missing_requirements[missing_more_stuff])
"""
Explanation: Missing Dependencies
A component with any missing required dependencies will not be called. Missing dependencies are recorded in the broker in a dictionary whose keys are components and whose values are tuples with two values. The first is a list of all missing required dependencies. The second is a list of all dependencies of which at least one was required.
End of explanation
"""
@stage(rand, optional=['test'])
def is_greater_than_ten(r, t):
return (int(r*10.0) < 5.0, t)
broker = dr.run()
print(broker[is_greater_than_ten])
"""
Explanation: Notice that the first elements in the dependency list after @stage are simply "a" and "b", but the next two elements are themselves lists. This means that at least one element of each list must be present. The first "any" list has [rand, "d"], and rand is available, so it resolves. However, neither "e" nor "f" are available, so the resolution fails. Our missing dependencies list includes the first two standalone elements as well as the second "any" list.
SkipComponent
Components that raise dr.SkipComponent won't have any values or exceptions recorded and will be treated as missing dependencies for components that depend on them.
Optional Dependencies
There's an "optional" keyword that takes a list of components that should be run before the current one. If they throw exceptions or don't run for some other reason, execute the current component anyway and just say they were None.
End of explanation
"""
class mything(dr.ComponentType):
requires = [rand]
@mything()
def dothings(r):
return 4 * r
broker = dr.run(broker=broker)
pprint(broker[dothings])
pprint(dr.get_dependencies(dothings))
"""
Explanation: Automatic Dependencies
The definition of a component type may include requires and optional attributes. Their specifications are the same as the requires and optional portions of the component decorators. Any component decorated with a component type that has requires or optional in the class definition will automatically depend on the specified components, and any additional dependencies on the component itself will just be appended.
This functionality should almost never be used because it makes it impossible to tell that the component has implied dependencies.
End of explanation
"""
class anotherthing(dr.ComponentType):
metadata={"a": 3}
@anotherthing(metadata={"b": 4, "c": 5})
def four():
return 4
dr.get_metadata(four)
"""
Explanation: Metadata
Component types and components can define metadata in their definitions. If a component's type defines metadata, that metadata is inherited by the component, although the component may override it.
End of explanation
"""
class grouped(dr.ComponentType):
group = "grouped"
@grouped()
def five():
return 5
b = dr.Broker()
dr.run(dr.COMPONENTS["grouped"], broker=b)
pprint(b.instances)
"""
Explanation: Component Groups
So far we haven't said how we might group components together outside of defining different component types. But sometimes we might want to specify certain components, even of different component types, to belong together and to only be executed when explicitly asked to do so.
All of our components so far have implicitly belonged to the default group. However, component types and even individual components can be assigned to specific groups, which will run only when specified.
End of explanation
"""
from insights.core import dr
@stage()
def six():
return 6
@stage(six)
def times_two(x):
return x * 2
# If the component's full name was foo.bar.baz.six, this would print "baz"
print("\nModule (times_two):", dr.get_base_module_name(times_two))
print("\nComponent Type (times_two):", dr.get_component_type(times_two))
print("\nDependencies (times_two): ")
pprint(dr.get_dependencies(times_two))
print("\nDependency Graph (stuff): ")
pprint(dr.get_dependency_graph(stuff))
print("\nDependents (rand): ")
pprint(dr.get_dependents(rand))
print("\nGroup (six):", dr.get_group(six))
print("\nMetadata (four): ", end=" ")
pprint(dr.get_metadata(four))
# prints the full module name of the component
print("\nModule Name (times_two):", dr.get_module_name(times_two))
# prints the module name joined to the component name by a "."
print("\nName (times_two):", dr.get_name(times_two))
print("\nSimple Name (times_two):", dr.get_simple_name(times_two))
"""
Explanation: If a group isn't specified in the type definition or in the component decorator, the default group is assumed. Likewise, the default group is assumed when calling run if one isn't provided.
It's also possible to override the group of an individual component by using the group keyword in its decorator.
run_incremental
Since hundreds or even thousands of dependencies can be defined, it's sometimes useful to separate them into graphs that don't share any components and execute those graphs one at a time. In addition to the run function, the dr module provides a run_incremental function that does exactly that. You can give it a starting broker (or none at all), and it will yield a new broker for each distinct graph among all the dependencies.
run_all
The run_all function is similar to run_incremental since it breaks a graph up into independently executable subgraphs before running them. However, it returns a list of the brokers instead of yielding one at a time. It also has a pool keyword argument that accepts a concurrent.futures.ThreadPoolExecutor, which it will use to run the independent subgraphs in parallel. This can provide a significant performance boost in some situations.
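Splitting a graph into independently executable subgraphs amounts to finding connected components. Here is a small union-find sketch (illustrative only, not the dr implementation):

```python
# Group nodes into connected components via union-find; each component
# is a subgraph that can be executed independently of the others.
def connected_components(edges, nodes):
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path compression
            n = parent[n]
        return n

    for a, b in edges:
        parent[find(a)] = find(b)

    groups = {}
    for n in nodes:
        groups.setdefault(find(n), set()).add(n)
    return sorted(groups.values(), key=len)

comps = connected_components([("a", "b"), ("c", "d"), ("d", "e")],
                             ["a", "b", "c", "d", "e", "f"])
assert comps == [{"f"}, {"a", "b"}, {"c", "d", "e"}]
```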
Inspecting Components
The dr module provides several functions for inspecting components. You can get their aliases, dependencies, dependents, groups, type, even their entire dependency trees.
End of explanation
"""
from insights.core import dr
from insights.core.context import HostContext
from insights.core.spec_factory import (simple_file,
glob_file,
simple_command,
listdir,
foreach_execute,
foreach_collect,
first_file,
first_of)
release = simple_file("/etc/redhat-release")
hostname = simple_file("/etc/hostname")
ctx = HostContext()
broker = dr.Broker()
broker[HostContext] = ctx
broker = dr.run(broker=broker)
print(broker[release].path, broker[release].content)
print(broker[hostname].path, broker[hostname].content)
"""
Explanation: Loading Components
If you have components defined in a package and the root of that path is in sys.path, you can load the package and all its subpackages and modules by calling dr.load_components. This way you don't have to load every component module individually.
```python
# recursively load all packages and modules in path.to.package
dr.load_components("path.to.package")

# or load a single module
dr.load_components("path.to.package.module")
```
Now that you know the basics of Insights Core dependency resolution, let's move on to the rest of Core that builds on it.
Standard Component Types
The standard component types provided by Insights Core are datasource, parser, combiner, rule, condition, and incident. They're defined in insights.core.plugins.
Some have specialized interfaces and executors that adapt the dependency specification parts described above to what developers using previous versions of Insights Core have come to expect.
For more information on parser, combiner, and rule development, please see our component developer tutorials.
Datasource
A datasource used to be called a spec. Components of this type collect data and make it available to other components. Since we have several hundred predefined datasources that fall into just a handful of categories, we've streamlined the process of creating them.
Datasources are defined either with the @datasource decorator or with helper functions from insights.core.spec_factory.
The spec_factory module has a handful of functions for defining common datasource types.
- simple_file
- glob_file
- simple_command
- listdir
- foreach_execute
- foreach_collect
- first_file
- first_of
All datasources defined with these helper functions will depend on an ExecutionContext of some kind. Contexts let you activate different datasources for different environments. Most of them provide a root path for file collection and may perform some environment-specific setup for commands, even modifying the command strings if needed.
For now, we'll use a HostContext. This tells datasources to collect files starting at the root of the file system and to execute commands exactly as they are defined. Other contexts are in insights.core.contexts.
All file collection datasources depend on any context that provides a path to use as root unless a particular context is specified. In other words, some datasources will activate for multiple contexts unless told otherwise.
simple_file
simple_file reads a file from the file system and makes it available as a TextFileProvider. A TextFileProvider instance contains the path to the file and its content as a list of lines.
End of explanation
"""
host_stuff = glob_file("/etc/host*", ignore="(allow|deny)")
broker = dr.run(broker=broker)
print(broker[host_stuff])
"""
Explanation: glob_file
glob_file accepts glob patterns and evaluates at runtime to a list of TextFileProvider instances, one for each match. You can pass glob_file a single pattern or a list (or set) of patterns. It also accepts an ignore keyword, which should be a regular expression string matching paths to ignore. The glob and ignore patterns can be used together to match lots of files and then throw out the ones you don't want.
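The glob-plus-ignore behavior can be approximated with the standard library; this is an illustrative sketch, not the spec_factory implementation:

```python
import glob
import os
import re
import tempfile

# Match a glob pattern, then drop any path whose string matches the
# ignore regular expression.
def glob_with_ignore(pattern, ignore=None):
    ignore_re = re.compile(ignore) if ignore else None
    return [p for p in sorted(glob.glob(pattern))
            if not (ignore_re and ignore_re.search(p))]

tmp = tempfile.mkdtemp()
for name in ["hosts", "hostname", "hosts.allow", "hosts.deny"]:
    open(os.path.join(tmp, name), "w").close()

matches = glob_with_ignore(os.path.join(tmp, "host*"), ignore=r"(allow|deny)")
assert [os.path.basename(p) for p in matches] == ["hostname", "hosts"]
```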
End of explanation
"""
uptime = simple_command("/usr/bin/uptime")
broker = dr.run(broker=broker)
print (broker[uptime].cmd, broker[uptime].args, broker[uptime].rc, broker[uptime].content)
"""
Explanation: simple_command
simple_command allows you to get the results of a command that takes no arguments or for which you know all of the arguments up front.
It and other command datasources return a CommandOutputProvider instance, which has the command string, any arguments interpolated into it (more later), the return code if you requested it via the keep_rc=True keyword, and the command output as a list of lines.
simple_command also accepts a timeout keyword, which is the maximum number of seconds the system should attempt to execute the command before a CalledProcessError is raised for the component.
A default timeout for all commands can be set on the initial ExecutionContext instance with the timeout keyword argument.
If a timeout isn't specified in the ExecutionContext or on the command itself, none is used.
End of explanation
"""
interfaces = listdir("/sys/class/net")
broker = dr.run(broker=broker)
pprint(broker[interfaces])
"""
Explanation: listdir
listdir lets you get the contents of a directory.
End of explanation
"""
ethtool = foreach_execute(interfaces, "ethtool %s")
broker = dr.run(broker=broker)
pprint(broker[ethtool])
"""
Explanation: foreach_execute
foreach_execute allows you to use output from one component as input to a datasource command string. For example, using the output of the interfaces datasource above, we can get ethtool information about all of the ethernet devices.
The timeout description provided in the simple_command section applies here to each separate invocation.
End of explanation
"""
from insights.specs.default import format_rpm
from insights.core.context import DockerImageContext
from insights.core.plugins import datasource
from insights.core.spec_factory import CommandOutputProvider
rpm_format = format_rpm()
cmd = "/usr/bin/rpm -qa --qf '%s'" % rpm_format
host_rpms = simple_command(cmd, context=HostContext)
@datasource(DockerImageContext)
def docker_installed_rpms(ctx):
root = ctx.root
cmd = "/usr/bin/rpm -qa --root %s --qf '%s'" % (root, rpm_format)
result = ctx.shell_out(cmd)
return CommandOutputProvider(cmd, ctx, content=result)
installed_rpms = first_of([host_rpms, docker_installed_rpms])
broker = dr.run(broker=broker)
pprint(broker[installed_rpms])
"""
Explanation: Notice each element in the list returned by interfaces is a single string. The system interpolates each element into the ethtool command string and evaluates each result. This produces a list of objects, one for each input element, instead of a single object. If the list created by interfaces contained tuples with n elements, then our command string would have had n substitution parameters.
foreach_collect
foreach_collect works similarly to foreach_execute, but instead of running commands with interpolated arguments, it collects files at paths with interpolated arguments. Also, because it is a file collection, it doesn't have execution-related keyword arguments.
first_file
first_file takes a list of paths and returns a TextFileProvider for the first one it finds. This is useful if you're looking for a single file that might be in different locations.
first_of
first_of is a way to express that you want to use any datasource from a list of datasources you've already defined. This is helpful if the way you collect data differs in different contexts, but the output is the same.
For example, the way you collect installed rpms directly from a machine differs from how you would collect them from a docker image. Ultimately, downstream components don't care: they just want rpm data.
You could do the following. Notice that host_rpms and docker_installed_rpms implement different ways of getting rpm data that depend on different contexts, but the final installed_rpms datasource just references whichever one ran.
End of explanation
"""
from insights.core import Parser
from insights.core.plugins import parser
@parser(hostname)
class HostnameParser(Parser):
def parse_content(self, content):
self.host, _, self.domain = content[0].partition(".")
broker = dr.run(broker=broker)
print "Host:", broker[HostnameParser].host
"""
Explanation: What datasources does Insights Core provide?
To see a list of datasources we already collect, have a look in insights.specs.
Parsers
Parsers are the next major component type Insights Core provides. A Parser depends on a single datasource and is responsible for converting its raw content into a structured object.
Let's build a simple parser.
End of explanation
"""
@parser(ethtool)
class Ethtool(Parser):
def parse_content(self, content):
self.link_detected = None
self.device = None
for line in content:
if "Settings for" in line:
self.device = line.split(" ")[-1].strip(":")
if "Link detected" in line:
self.link_detected = line.split(":")[-1].strip()
broker = dr.run(broker=broker)
for eth in broker[Ethtool]:
print "Device:", eth.device
print "Link? :", eth.link_detected, "\n"
"""
Explanation: Notice that the parser decorator accepts only one argument, the datasource the component needs. Also notice that our parser has a sensible default constructor that accepts a datasource and passes its content into a parse_content function.
Our hostname parser is pretty simple, but it's easy to see how parsing things like rpm data or configuration files could get complicated.
Speaking of rpms, hopefully it's also easy to see that an rpm parser could depend on our installed_rpms definition in the previous section and parse the content regardless of where the content originated.
What about parser dependencies that produce lists of components?
Not only do parsers have a special decorator, they also have a special executor. If the datasource is a list, the executor will attempt to construct a parser object with each element of the list, and the value of the parser in the broker will be the list of parser objects. It's important to keep this in mind when developing components that depend on parsers.
This is also why exceptions raised by components are stored as lists by component instead of single values.
Here's a simple parser that depends on the ethtool datasource.
End of explanation
"""
from insights.core.plugins import rule, make_fail, make_pass
ERROR_KEY = "IS_LOCALHOST"
@rule(HostnameParser)
def report(hn):
return make_pass(ERROR_KEY) if "localhost" in hn.host else make_fail(ERROR_KEY)
brok = dr.Broker()
brok[HostContext] = HostContext()
brok = dr.run(broker=brok)
pprint(brok.get(report))
"""
Explanation: We provide curated parsers for all of our datasources. They're in insights.parsers.
Combiners
Combiners depend on two or more other components. They typically are used to standardize interfaces or to provide a higher-level view of some set of components.
As an example of standardizing interfaces, chkconfig and service commands can be used to retrieve similar data about service status, but the command you run to check that status depends on your operating system version. A datasource would be defined for each command along with a parser to interpret its output. However, a downstream component may just care about a service's status, not about how a particular program exposes it. A combiner can depend on both chkconfig and service parsers (like this, so only one of them is required: @combiner([[chkconfig, service]])) and provide a unified interface to the data.
As an example of a higher level view of several related components, imagine a combiner that depends on various ethtool and other network information gathering parsers. It can compile all of that information behind one view, exposing a range of information about devices, interfaces, iptables, etc. that might otherwise be scattered across a system.
We provide a few common combiners. They're in insights.combiners.
Here's an example combiner that tries a few different ways to determine the Red Hat release information. Notice that its dependency declarations and interface are just like we've discussed before. If this was a class, the __init__ function would be declared like def __init__(self, rh_release, un).
```python
from collections import namedtuple
from insights.core.plugins import combiner
from insights.parsers.redhat_release import RedhatRelease as rht_release
from insights.parsers.uname import Uname
Release = namedtuple("Release", field_names=["major", "minor"])

@combiner([rht_release, Uname])
def redhat_release(rh_release, un):
    if un and un.release_tuple[0] != -1:
        return Release(*un.release_tuple)
    if rh_release:
        return Release(rh_release.major, rh_release.minor)
    raise Exception("Unable to determine release.")
```
Rules
Rules depend on parsers and/or combiners and encapsulate particular policies about their state. For example, a rule might detect whether a defective rpm is installed. It might also inspect the lsof parser to determine if a process is using a file from that defective rpm. It could also check network information to see if the process is a server and whether it's bound to an internal or external IP address. Rules can check for anything you can surface in a parser or a combiner.
Rules use the make_fail, make_pass, or make_info helpers to create their return values. They take one required parameter, which is a key identifying the particular state the rule wants to highlight, and any number of additional keyword arguments that provide context for that state.
End of explanation
"""
def observer(c, broker):
if c not in broker:
return
value = broker[c]
pprint(value)
broker.add_observer(observer, component_type=parser)
broker = dr.run(broker=broker)
"""
Explanation: Conditions and Incidents
Conditions and incidents are optional components that can be used by rules to encapsulate particular pieces of logic.
Conditions are questions with answers that can be interpreted as True or False. For example, a condition might be "Does the kdump configuration contain a 'net' target type?" or "Is the operating system Red Hat Enterprise Linux 7?"
Incidents, on the other hand, typically are specific types of warning or error messages from log type files.
Why would you use conditions or incidents instead of just writing the logic directly into the rule? Future versions of Insights may allow automated analysis of rules and their conditions and incidents. You will be able to tell which conditions, incidents, and rule firings across all rules correspond with each other and how strongly. This feature will become more powerful as conditions and incidents are written independently of explicit rules.
Observers
Insights Core allows you to attach functions to component types, and they'll be called any time a component of that type is encountered. You can attach observer functions globally or to a particular broker.
Observers are called whether a component succeeds or not. They take the component and the broker right after the component is evaluated and so are able to ask the broker about values, exceptions, missing requirements, etc.
End of explanation
"""
|
phasedchirp/Assorted-Data-Analysis | exercises/SlideRule-DS-Intensive/UD120/SVM.ipynb | gpl-2.0 | import sys
from sklearn.svm import SVC
from time import time
sys.path.append("../tools/")
from email_preprocess import preprocess
"""
Explanation: Udacity Machine Learning mini-project 2
Prep stuff
End of explanation
"""
features_train, features_test, labels_train, labels_test = preprocess()
"""
Explanation: Training and Testing data:
End of explanation
"""
clf = SVC(kernel="linear")
clf.fit(features_train,labels_train)
"""
Explanation: Fitting the model:
End of explanation
"""
clf.score(features_test,labels_test)
"""
Explanation: Model accuracy:
End of explanation
"""
%time clf.fit(features_train,labels_train)
"""
Explanation: Timing model training
Not using %timeit because it's really slow to run even once
End of explanation
"""
features_train = features_train[:len(features_train)/100]
labels_train = labels_train[:len(labels_train)/100]
clf.fit(features_train,labels_train)
clf.score(features_test,labels_test)
"""
Explanation: And not surprisingly this is much, much slower than something like naive Bayes.
Accuracy with a reduced training set:
End of explanation
"""
clf_rbf = SVC(kernel="rbf")
clf_rbf.fit(features_train,labels_train)
"""
Explanation: Switching to a radial basis function kernel
End of explanation
"""
clf_rbf.score(features_test,labels_test)
"""
Explanation: Accuracy
End of explanation
"""
clf10 = SVC(C=10.0,kernel="rbf")
clf10.fit(features_train,labels_train)
clf100 = SVC(C=100.0,kernel="rbf")
clf100.fit(features_train,labels_train)
clf1000 = SVC(C=1000.0,kernel="rbf")
clf1000.fit(features_train,labels_train)
clf10000 = SVC(C=10000.0,kernel="rbf")
clf10000.fit(features_train,labels_train)
print "C = 10: ", clf10.score(features_test,labels_test)
print "C = 100: ", clf100.score(features_test,labels_test)
print "C = 1000: ", clf1000.score(features_test,labels_test)
print "C = 10,000: ", clf10000.score(features_test,labels_test)
"""
Explanation: Assessing parameter choices
I had some weirdness with the grid search functions, which are probably a better method of doing this in general.
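For reference, a minimal grid-search sketch over the same C values (synthetic data stands in for the email features, and module paths follow modern scikit-learn, where grid search lives in sklearn.model_selection):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in data, just to show the API shape.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

param_grid = {"C": [10.0, 100.0, 1000.0, 10000.0]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

With the default refit=True, search.best_estimator_ is already refit on the full training data, so it can be used directly for prediction.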
End of explanation
"""
features_train, features_test, labels_train, labels_test = preprocess()
clf = SVC(C=10000,kernel="rbf")
clf.fit(features_train,labels_train)
"""
Explanation: Trying C=10,000 with the full training data
End of explanation
"""
clf.score(features_test,labels_test)
"""
Explanation: Accuracy with the full training set:
End of explanation
"""
pred = clf.predict(features_test)
for i in [10,26,50]:
print 'training point',i,'--predicted:',pred[i],'real value:',labels_test[i]
"""
Explanation: Answering questions about specific data points with the RBF kernel:
End of explanation
"""
# Raw count:
chrisCount = sum(pred)
chrisCount
# Proportion:
chrisCount/float(len(pred))
"""
Explanation: Proportion of emails attributed to Chris (label = 1)
End of explanation
"""
|
machinelearningnanodegree/stanford-cs231 | solutions/vijendra/assignment2/Dropout.ipynb | mit | # As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
"""
Explanation: Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
End of explanation
"""
x = np.random.randn(500, 500) + 10
for p in [0.3, 0.6, 0.75]:
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print 'Running tests with p = ', p
print 'Mean of input: ', x.mean()
print 'Mean of train-time output: ', out.mean()
print 'Mean of test-time output: ', out_test.mean()
print 'Fraction of train-time output set to zero: ', (out == 0).mean()
print 'Fraction of test-time output set to zero: ', (out_test == 0).mean()
print
"""
Explanation: Dropout forward pass
In the file cs231n/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
Once you have done so, run the cell below to test your implementation.
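The graded implementation belongs in cs231n/layers.py, so it isn't reproduced here; as a standalone reference, an inverted-dropout forward pass can be sketched as follows. This sketch treats p as the keep probability — some versions of the assignment define p as the drop probability, in which case the mask comparison flips:

```python
import numpy as np

def dropout_forward_sketch(x, dropout_param):
    """Inverted dropout: rescale at train time so test time is the identity."""
    p, mode = dropout_param['p'], dropout_param['mode']
    if 'seed' in dropout_param:
        np.random.seed(dropout_param['seed'])
    mask = None
    if mode == 'train':
        # Keep each unit with probability p, then divide by p so the
        # expected value of the output matches the input.
        mask = (np.random.rand(*x.shape) < p) / p
        out = x * mask
    else:
        out = x
    return out, (dropout_param, mask)
```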
End of explanation
"""
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
print 'dx relative error: ', rel_error(dx, dx_num)
"""
Explanation: Dropout backward pass
In the file cs231n/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
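Again as a standalone reference (assuming a forward pass that cached the tuple (dropout_param, mask); the course's cache layout may differ), the backward pass just routes the upstream gradient through the same mask:

```python
import numpy as np

def dropout_backward_sketch(dout, cache):
    """Backward pass for inverted dropout, cache = (dropout_param, mask)."""
    dropout_param, mask = cache
    if dropout_param['mode'] == 'train':
        # Gradient flows only through the units the mask kept (the mask
        # already carries the 1/p rescaling from the forward pass).
        return dout * mask
    return dout  # test mode was the identity, so pass dout through
```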
End of explanation
"""
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for dropout in [0, 0.25, 0.5]:
print 'Running check with dropout = ', dropout
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
weight_scale=5e-2, dtype=np.float64,
dropout=dropout, seed=123)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
print
"""
Explanation: Fully-connected nets with Dropout
In the file cs231n/classifiers/fc_net.py, modify your implementation to use dropout. Specificially, if the constructor the the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
End of explanation
"""
# Train identical two-layer nets with a range of dropout settings
num_train = 500
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
# Try several dropout settings
dropout_choices = [0, 0.25,0.50,0.75]
for dropout in dropout_choices:
model = FullyConnectedNet([500], dropout=dropout)
print dropout
solver = Solver(model, small_data,
num_epochs=25, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 5e-4,
},
verbose=True, print_every=100)
solver.train()
solvers[dropout] = solver
# Plot train and validation accuracies of the models
train_accs = []
val_accs = []
for dropout in dropout_choices:
solver = solvers[dropout]
train_accs.append(solver.train_acc_history[-1])
val_accs.append(solver.val_acc_history[-1])
plt.subplot(3, 1, 1)
for dropout in dropout_choices:
plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Regularization experiment
As an experiment, we will train two-layer networks on 500 training examples with several dropout settings (none, 0.25, 0.5, and 0.75). We will then visualize the training and validation accuracies of the networks over time.
End of explanation
"""
|
MBARIMike/oxyfloat | notebooks/explore_cached_oxyfloat_data.ipynb | mit | import sys
sys.path.insert(0, '../')
from oxyfloat import ArgoData
ad = ArgoData()
"""
Explanation: Explore locally cached Argo oxygen float data - second in a series of Notebooks
Use the oxyfloat module to get data and Pandas to operate on it for testing ability to easily perform calibrations
(See build_oxyfloat_cache.ipynb for for the work that leads to this Notebook.)
Add parent directory to the path and get an ArgoData object that uses the default local cache.
End of explanation
"""
wmo_list = ad.get_oxy_floats_from_status()
"""
Explanation: Get the default list of floats that have oxygen data.
End of explanation
"""
sdf = ad._get_df(ad._STATUS)
"""
Explanation: We can explore the distribution of AGEs of the Argo floats by getting the status data in a DataFrame (sdf).
End of explanation
"""
%pylab inline
def dist_plot(df, title):
from datetime import date
ax = df.hist(bins=100)
ax.set_xlabel('AGE (days)')
ax.set_ylabel('Count')
ax.set_title('{} as of {}'.format(title, date.today()))
dist_plot(sdf['AGE'], 'Argo float AGE distribution')
"""
Explanation: Define a function (dist_plot) and plot the distribution of the AGE column.
End of explanation
"""
sdfq = sdf.query('(AGE != 0) & (OXYGEN == 1) & (GREYLIST != 1)')
dist_plot(sdfq['AGE'], title='Argo oxygen float AGE distribution')
print 'Count age_gte 0340:', len(sdfq.query('AGE >= 340'))
print 'Count age_gte 1000:', len(sdfq.query('AGE >= 1000'))
print 'Count age_gte 2000:', len(sdfq.query('AGE >= 2000'))
print 'Count age_gte 2200:', len(sdfq.query('AGE >= 2200'))
print 'Count age_gte 3000:', len(sdfq.query('AGE >= 3000'))
"""
Explanation: There are over 600 floats with an AGE of 0. The .get_oxy_floats_from_status() method does not select these floats as I believe they are 'inactive'. Let's count the number of non-greylisted oxygen floats at various AGEs so that we can build a reasonably sized test cache.
End of explanation
"""
len(ad.get_oxy_floats_from_status(age_gte=2200))
"""
Explanation: Compare the 2200 count with what .get_oxy_floats_from_status(age_gte=2200) returns.
End of explanation
"""
%%time
ad = ArgoData(cache_file='../oxyfloat/oxyfloat_fixed_cache_age2200_profiles2.hdf')
wmo_list = ad.get_oxy_floats_from_status(2200)
df = ad.get_float_dataframe(wmo_list, max_profiles=2)
"""
Explanation: That's reassuring! Now, let's build a custom cache file with the the 19 floats that have an AGE >= 2200 days.
From a shell window execute this script:
bash
scripts/load_cache.py --age 2200 --profiles 2 -v
This will take several minutes to download the data and build the cache. Once it's finished you can execute the cells below (you will need to enter the exact name of the cache_file which the above command displays in its INFO messages).
End of explanation
"""
# Parameter long_name and units copied from attributes in NetCDF files
time_range = '{} to {}'.format(df.index.get_level_values('time').min(),
df.index.get_level_values('time').max())
parms = {'TEMP_ADJUSTED': 'SEA TEMPERATURE IN SITU ITS-90 SCALE (degree_Celsius)',
'PSAL_ADJUSTED': 'PRACTICAL SALINITY (psu)',
'DOXY_ADJUSTED': 'DISSOLVED OXYGEN (micromole/kg)'}
plt.rcParams['figure.figsize'] = (18.0, 8.0)
fig, ax = plt.subplots(1, len(parms), sharey=True)
ax[0].invert_yaxis()
ax[0].set_ylabel('SEA PRESSURE (decibar)')
for i, (p, label) in enumerate(parms.iteritems()):
ax[i].set_xlabel(label)
ax[i].plot(df[p], df.index.get_level_values('pressure'), '.')
plt.suptitle('Float(s) ' + ' '.join(wmo_list) + ' from ' + time_range)
"""
Explanation: Plot the profiles.
End of explanation
"""
import pylab as plt
from mpl_toolkits.basemap import Basemap
plt.rcParams['figure.figsize'] = (18.0, 8.0)
m = Basemap(llcrnrlon=15, llcrnrlat=-90, urcrnrlon=390, urcrnrlat=90, projection='cyl')
m.fillcontinents(color='0.8')
m.scatter(df.index.get_level_values('lon'), df.index.get_level_values('lat'), latlon=True)
"""
Explanation: Plot the profiles on a map.
End of explanation
"""
|
janesjanes/sketchy | code/Retrieval_Example.ipynb | mit | import numpy as np
from pylab import *
%matplotlib inline
import os
import sys
"""
Explanation: This script is for retrieving images based on a sketch query
End of explanation
"""
#TODO: specify your caffe root folder here
caffe_root = "X:\caffe_siggraph/caffe-windows-master"
sys.path.insert(0, caffe_root+'/python')
import caffe
"""
Explanation: caffe
First, we need to import caffe. You'll need to have caffe installed, as well as the Python interface for caffe.
End of explanation
"""
#TODO: change to your own network and deploying file
PRETRAINED_FILE = '../models/triplet_googlenet/triplet_googlenet_finegrain_final.caffemodel'
sketch_model = '../models/triplet_googlenet/googlenet_sketchdeploy.prototxt'
image_model = '../models/triplet_googlenet/googlenet_imagedeploy.prototxt'
caffe.set_mode_gpu()
#caffe.set_mode_cpu()
sketch_net = caffe.Net(sketch_model, PRETRAINED_FILE, caffe.TEST)
img_net = caffe.Net(image_model, PRETRAINED_FILE, caffe.TEST)
sketch_net.blobs.keys()
#TODO: set output layer name. You can use sketch_net.blobs.keys() to list all layer
output_layer_sketch = 'pool5/7x7_s1_s'
output_layer_image = 'pool5/7x7_s1_p'
#set the transformer
transformer = caffe.io.Transformer({'data': np.shape(sketch_net.blobs['data'].data)})
transformer.set_mean('data', np.array([104, 117, 123]))
transformer.set_transpose('data',(2,0,1))
transformer.set_channel_swap('data', (2,1,0))
transformer.set_raw_scale('data', 255.0)
"""
Explanation: Now we can load up the network. You can change the path to your own network here. Make sure to use the matching deploy prototxt files and change the target layer to your layer name.
End of explanation
"""
#TODO: specify photo folder for the retrieval
photo_paths = 'C:\Users\Patsorn\Documents/notebook_backup/SBIR/retrieval/'
#load up images
img_list = os.listdir(photo_paths)
N = np.shape(img_list)[0]
print 'Retrieving from', N,'photos'
#extract feature for all images
feats = []
for i,path in enumerate(img_list):
imgname = path.split('/')[-1]
imgname = imgname.split('.jpg')[0]
imgcat = path.split('/')[0]
print '\r',str(i+1)+'/'+str(N)+ ' '+'Extracting ' +path+'...',
full_path = photo_paths + path
img = (transformer.preprocess('data', caffe.io.load_image(full_path.rstrip())))
img_in = np.reshape([img],np.shape(sketch_net.blobs['data'].data))
out_img = img_net.forward(data=img_in)
out_img = np.copy(out_img[output_layer_image])
feats.append(out_img)
print 'done',
np.shape(feats)
feats = np.resize(feats,[np.shape(feats)[0],np.shape(feats)[2]]) # quick fix for the feature array shape
#build nn pool
from sklearn.neighbors import NearestNeighbors,LSHForest
nbrs = NearestNeighbors(n_neighbors=np.size(feats,0), algorithm='brute',metric='cosine').fit(feats)
"""
Explanation: Retrieving images
The following script shows how to use our network to do the retrieval. The easiest way to use the script is to put all the images you want to retrieve in one folder and modify 'photo_paths' to point to your folder. Then change 'sketch_path' to point to the sketch you want to use as a query.
Extracting image feats
End of explanation
"""
#Load up sketch query
sketch_path = "X:\data_for_research\sketch_dataset\png/giraffe/7366.png"
sketch_in = (transformer.preprocess('data', caffe.io.load_image(sketch_path)))
sketch_in = np.reshape([sketch_in],np.shape(sketch_net.blobs['data'].data))
query = sketch_net.forward(data=sketch_in)
query=np.copy(query[output_layer_sketch])
#get nn
distances, indices = nbrs.kneighbors(np.reshape(query,[np.shape(query)[1]]))
#show query
f = plt.figure(0)
plt.imshow(plt.imread(sketch_path))
plt.axis('off')
#show results
for i in range(1,5,1):
f = plt.figure(i)
img = plt.imread(photo_paths+img_list[indices[0][i-1]])
plt.imshow(img)
plt.axis('off')
plt.show(block=False)
"""
Explanation: Show top 5 retrieval results
End of explanation
"""
|
adukic/nd101 | first-neural-network/dlnd-your-first-neural-network.ipynb | mit | %matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
"""
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
"""
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
"""
rides[:24*10].plot(x='dteday', y='cnt')
"""
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
"""
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
"""
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
"""
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
"""
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
"""
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
"""
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
"""
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
"""
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
"""
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = self.sigmoid
def sigmoid(self, x):
return 1/(1+np.exp(-x))
def sigmoid_prime(self, x):
return self.sigmoid(x)*(1-self.sigmoid(x))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
output_errors = targets - final_outputs
# TODO: Backpropagated error
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)
hidden_grad = self.sigmoid_prime(hidden_inputs)
# TODO: Update the weights
self.weights_hidden_to_output += (self.lr * np.dot(output_errors, hidden_outputs.T))
self.weights_input_to_hidden += (self.lr * np.dot(hidden_errors * hidden_grad, inputs.T))
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
"""
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers: a hidden layer and an output layer. The hidden layer uses the sigmoid function for its activations. The output layer has only one node and is used for the regression; the output of the node is the same as its input. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, taking the threshold into account, is called an activation function. We work through each layer of the network, calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons of the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
"""
import sys
### Set the hyperparameters here ###
epochs = 3000
learning_rate = 0.01
hidden_nodes = 25
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
if (e == epochs /4):
network.lr = 0.005
print()
if (e == epochs /2):
network.lr = 0.001
print()
if (e == 3 * epochs /4):
network.lr = 0.0005
print()
if (e == 9 * epochs /10):
network.lr = 0.00001
print()
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.loc[batch].values,
train_targets.loc[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5] \
+ " ... Learning rate: " + str(network.lr)[:7])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
"""
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole dataset. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
More hidden nodes give the model more capacity to fit the training data, but too many can hurt generalization. Try a few different numbers and see how they affect the performance. You can look at the losses dictionary for a metric of the network's performance. If the number of hidden units is too low, the model won't have enough capacity to learn; if it is too high, there are too many options for the direction the learning can take. The trick is to find the right balance in the number of hidden units you choose.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
"""
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
"""
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
"""
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/fcc5782db3e2930fc79f31bc745495ed/60_ctf_bst_auditory.ipynb | bsd-3-clause | # Authors: Mainak Jas <mainak.jas@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# Jaakko Leppakangas <jaeilepp@student.jyu.fi>
#
# License: BSD-3-Clause
import os.path as op
import pandas as pd
import numpy as np
import mne
from mne import combine_evoked
from mne.minimum_norm import apply_inverse
from mne.datasets.brainstorm import bst_auditory
from mne.io import read_raw_ctf
print(__doc__)
"""
Explanation: Working with CTF data: the Brainstorm auditory dataset
Here we compute the evoked from raw for the auditory Brainstorm
tutorial dataset. For comparison, see :footcite:TadelEtAl2011 and the
associated brainstorm site
<https://neuroimage.usc.edu/brainstorm/Tutorials/Auditory>_.
Experiment:
- One subject, 2 acquisition runs 6 minutes each.
- Each run contains 200 regular beeps and 40 easy deviant beeps.
- Random ISI: between 0.7s and 1.7s seconds, uniformly distributed.
- Button pressed when detecting a deviant with the right index finger.
The specifications of this dataset were discussed initially on the
FieldTrip bug tracker
<http://bugzilla.fieldtriptoolbox.org/show_bug.cgi?id=2300>__.
End of explanation
"""
use_precomputed = True
"""
Explanation: To reduce memory consumption and running time, some of the steps are
precomputed. To run everything from scratch change use_precomputed to
False. With use_precomputed = False, the running time of this script can be several minutes even on a fast computer.
End of explanation
"""
data_path = bst_auditory.data_path()
subject = 'bst_auditory'
subjects_dir = op.join(data_path, 'subjects')
raw_fname1 = op.join(data_path, 'MEG', subject, 'S01_AEF_20131218_01.ds')
raw_fname2 = op.join(data_path, 'MEG', subject, 'S01_AEF_20131218_02.ds')
erm_fname = op.join(data_path, 'MEG', subject, 'S01_Noise_20131218_01.ds')
"""
Explanation: The data was collected with a CTF 275 system at 2400 Hz and low-pass
filtered at 600 Hz. Here the data and empty room data files are read to
construct instances of :class:mne.io.Raw.
End of explanation
"""
raw = read_raw_ctf(raw_fname1)
n_times_run1 = raw.n_times
# Here we ignore that these have different device<->head transforms
mne.io.concatenate_raws(
[raw, read_raw_ctf(raw_fname2)], on_mismatch='ignore')
raw_erm = read_raw_ctf(erm_fname)
"""
Explanation: In memory-saving mode we use preload=False, which enables memory-efficient I/O that loads the data on demand. However, filtering and some other functions require the data to be preloaded into memory.
End of explanation
"""
raw.set_channel_types({'HEOG': 'eog', 'VEOG': 'eog', 'ECG': 'ecg'})
if not use_precomputed:
# Leave out the two EEG channels for easier computation of forward.
raw.pick(['meg', 'stim', 'misc', 'eog', 'ecg']).load_data()
"""
Explanation: The data array consists of 274 MEG axial gradiometers, 26 MEG reference
sensors and 2 EEG electrodes (Cz and Pz). In addition:
1 stim channel for marking presentation times for the stimuli
1 audio channel for the sent signal
1 response channel for recording the button presses
1 ECG bipolar
2 EOG bipolar (vertical and horizontal)
12 head tracking channels
20 unused channels
Notice also that the digitized electrode positions (stored in a .pos file)
were automatically loaded and added to the ~mne.io.Raw object.
The head tracking channels and the unused channels are marked as misc
channels. Here we define the EOG and ECG channels.
End of explanation
"""
annotations_df = pd.DataFrame()
offset = n_times_run1
for idx in [1, 2]:
csv_fname = op.join(data_path, 'MEG', 'bst_auditory',
'events_bad_0%s.csv' % idx)
df = pd.read_csv(csv_fname, header=None,
names=['onset', 'duration', 'id', 'label'])
print('Events from run {0}:'.format(idx))
print(df)
df['onset'] += offset * (idx - 1)
annotations_df = pd.concat([annotations_df, df], axis=0)
saccades_events = df[df['label'] == 'saccade'].values[:, :3].astype(int)
# Conversion from samples to times:
onsets = annotations_df['onset'].values / raw.info['sfreq']
durations = annotations_df['duration'].values / raw.info['sfreq']
descriptions = annotations_df['label'].values
annotations = mne.Annotations(onsets, durations, descriptions)
raw.set_annotations(annotations)
del onsets, durations, descriptions
"""
Explanation: For noise reduction, a set of bad segments has been identified and stored
in csv files. The bad segments are later used to reject epochs that overlap
with them.
The file for the second run also contains some saccades. The saccades are
removed by using SSP. We use pandas to read the data from the csv files. You
can also view the files with your favorite text editor.
End of explanation
"""
saccade_epochs = mne.Epochs(raw, saccades_events, 1, 0., 0.5, preload=True,
baseline=(None, None),
reject_by_annotation=False)
projs_saccade = mne.compute_proj_epochs(saccade_epochs, n_mag=1, n_eeg=0,
desc_prefix='saccade')
if use_precomputed:
proj_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-eog-proj.fif')
projs_eog = mne.read_proj(proj_fname)[0]
else:
projs_eog, _ = mne.preprocessing.compute_proj_eog(raw.load_data(),
n_mag=1, n_eeg=0)
raw.add_proj(projs_saccade)
raw.add_proj(projs_eog)
del saccade_epochs, saccades_events, projs_eog, projs_saccade # To save memory
"""
Explanation: Here we compute the saccade and EOG projectors for magnetometers and add
them to the raw data. The projectors are added to both runs.
End of explanation
"""
raw.plot(block=True)
"""
Explanation: Visually inspect the effects of projections. Click on 'proj' button at the
bottom right corner to toggle the projectors on/off. EOG events can be
plotted by adding the event list as a keyword argument. As the bad segments
and saccades were added as annotations to the raw data, they are plotted as
well.
End of explanation
"""
if not use_precomputed:
raw.plot_psd(tmax=np.inf, picks='meg')
notches = np.arange(60, 181, 60)
raw.notch_filter(notches, phase='zero-double', fir_design='firwin2')
raw.plot_psd(tmax=np.inf, picks='meg')
"""
Explanation: A typical preprocessing step is the removal of the power line artifact (50 Hz or 60 Hz). Here we notch filter the data at 60, 120 and 180 Hz to remove the original 60 Hz artifact and its harmonics. The power spectra are plotted before and after the filtering to show the effect. The drop after 600 Hz appears because the data was low-pass filtered during acquisition. In memory-saving mode we do the filtering at the evoked stage instead, which is not something you would usually do.
End of explanation
"""
if not use_precomputed:
raw.filter(None, 100., h_trans_bandwidth=0.5, filter_length='10s',
phase='zero-double', fir_design='firwin2')
"""
Explanation: We also low-pass filter the data at 100 Hz to remove the high-frequency components.
End of explanation
"""
tmin, tmax = -0.1, 0.5
event_id = dict(standard=1, deviant=2)
reject = dict(mag=4e-12, eog=250e-6)
# find events
events = mne.find_events(raw, stim_channel='UPPT001')
"""
Explanation: Epoching and averaging.
First some parameters are defined and events extracted from the stimulus
channel (UPPT001). The rejection thresholds are defined as peak-to-peak
values and are in T / m for gradiometers, T for magnetometers and
V for EOG and EEG channels.
End of explanation
"""
sound_data = raw[raw.ch_names.index('UADC001-4408')][0][0]
onsets = np.where(np.abs(sound_data) > 2. * np.std(sound_data))[0]
min_diff = int(0.5 * raw.info['sfreq'])
diffs = np.concatenate([[min_diff + 1], np.diff(onsets)])
onsets = onsets[diffs > min_diff]
assert len(onsets) == len(events)
diffs = 1000. * (events[:, 0] - onsets) / raw.info['sfreq']
print('Trigger delay removed (μ ± σ): %0.1f ± %0.1f ms'
% (np.mean(diffs), np.std(diffs)))
events[:, 0] = onsets
del sound_data, diffs
"""
Explanation: The event timing is adjusted by comparing the trigger times with the sound onsets detected on channel UADC001-4408.
End of explanation
"""
raw.info['bads'] = ['MLO52-4408', 'MRT51-4408', 'MLO42-4408', 'MLO43-4408']
"""
Explanation: We mark a set of bad channels that seem noisier than others. This can also
be done interactively with raw.plot by clicking the channel name
(or the line). The marked channels are added as bad when the browser window
is closed.
End of explanation
"""
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=['meg', 'eog'],
baseline=(None, 0), reject=reject, preload=False,
proj=True)
"""
Explanation: The epochs (trials) are created for MEG channels. First we find the picks
for MEG and EOG channels. Then the epochs are constructed using these picks.
The epochs overlapping with annotated bad segments are also rejected by
default. To turn off rejection by bad segments (as was done earlier with
saccades) you can use keyword reject_by_annotation=False.
End of explanation
"""
epochs.drop_bad()
epochs_standard = mne.concatenate_epochs([epochs['standard'][range(40)],
epochs['standard'][182:222]])
epochs_standard.load_data() # Resampling to save memory.
epochs_standard.resample(600, npad='auto')
epochs_deviant = epochs['deviant'].load_data()
epochs_deviant.resample(600, npad='auto')
del epochs
"""
Explanation: We only use the first 40 good epochs from each run. Since we first drop the bad epochs, the indices of the remaining epochs are no longer the same as in the original epochs collection. Investigation of the event timings reveals that the first epoch from the second run corresponds to index 182.
End of explanation
"""
evoked_std = epochs_standard.average()
evoked_dev = epochs_deviant.average()
del epochs_standard, epochs_deviant
"""
Explanation: The averages for each condition are computed.
End of explanation
"""
for evoked in (evoked_std, evoked_dev):
evoked.filter(l_freq=None, h_freq=40., fir_design='firwin')
"""
Explanation: A typical preprocessing step is the removal of the power line artifact (50 Hz or 60 Hz). Here we low-pass filter the data at 40 Hz, which removes all line artifacts (along with other high-frequency information). Normally this would be done on the raw data (with :func:mne.io.Raw.filter), but to reduce the memory consumption of this tutorial, we do it at the evoked stage. (At the raw stage, you could alternatively notch filter with :func:mne.io.Raw.notch_filter.)
End of explanation
"""
evoked_std.plot(window_title='Standard', gfp=True, time_unit='s')
evoked_dev.plot(window_title='Deviant', gfp=True, time_unit='s')
"""
Explanation: Here we plot the ERF of standard and deviant conditions. In both conditions
we can see the P50 and N100 responses. The mismatch negativity is visible
only in the deviant condition around 100-200 ms. P200 is also visible around
170 ms in both conditions but much stronger in the standard condition. P300
is visible in deviant condition only (decision making in preparation of the
button press). You can view the topographies from a certain time span by
painting an area with clicking and holding the left mouse button.
End of explanation
"""
times = np.arange(0.05, 0.301, 0.025)
evoked_std.plot_topomap(times=times, title='Standard', time_unit='s')
evoked_dev.plot_topomap(times=times, title='Deviant', time_unit='s')
"""
Explanation: Show activations as topography figures.
End of explanation
"""
evoked_difference = combine_evoked([evoked_dev, evoked_std], weights=[1, -1])
evoked_difference.plot(window_title='Difference', gfp=True, time_unit='s')
"""
Explanation: We can see the MMN effect more clearly by looking at the difference between
the two conditions. P50 and N100 are no longer visible, but MMN/P200 and
P300 are emphasised.
End of explanation
"""
reject = dict(mag=4e-12)
cov = mne.compute_raw_covariance(raw_erm, reject=reject)
cov.plot(raw_erm.info)
del raw_erm
"""
Explanation: Source estimation.
We compute the noise covariance matrix from the empty room measurement
and use it for the other runs.
End of explanation
"""
trans_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-trans.fif')
trans = mne.read_trans(trans_fname)
"""
Explanation: The transformation is read from a file:
End of explanation
"""
if use_precomputed:
fwd_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-meg-oct-6-fwd.fif')
fwd = mne.read_forward_solution(fwd_fname)
else:
src = mne.setup_source_space(subject, spacing='ico4',
subjects_dir=subjects_dir, overwrite=True)
model = mne.make_bem_model(subject=subject, ico=4, conductivity=[0.3],
subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
fwd = mne.make_forward_solution(evoked_std.info, trans=trans, src=src,
bem=bem)
inv = mne.minimum_norm.make_inverse_operator(evoked_std.info, fwd, cov)
snr = 3.0
lambda2 = 1.0 / snr ** 2
del fwd
"""
Explanation: To save time and memory, the forward solution is read from a file. Set
use_precomputed=False in the beginning of this script to build the
forward solution from scratch. The head surfaces for constructing a BEM
solution are read from a file. Since the data only contains MEG channels, we
only need the inner skull surface for making the forward solution. For more
information: CHDBBCEJ, :func:mne.setup_source_space,
bem-model, :func:mne.bem.make_watershed_bem.
End of explanation
"""
stc_standard = mne.minimum_norm.apply_inverse(evoked_std, inv, lambda2, 'dSPM')
brain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_standard, brain
"""
Explanation: The sources are computed using the dSPM method and plotted on an inflated brain
surface. For interactive controls over the image, use keyword
time_viewer=True.
Standard condition.
End of explanation
"""
stc_deviant = mne.minimum_norm.apply_inverse(evoked_dev, inv, lambda2, 'dSPM')
brain = stc_deviant.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_deviant, brain
"""
Explanation: Deviant condition.
End of explanation
"""
stc_difference = apply_inverse(evoked_difference, inv, lambda2, 'dSPM')
brain = stc_difference.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.15, time_unit='s')
"""
Explanation: Difference.
End of explanation
"""
|
metpy/MetPy | v0.10/_downloads/7dd7941230ab04d65d899c66ed400ef4/xarray_tutorial.ipynb | bsd-3-clause | import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import xarray as xr
# Any import of metpy will activate the accessors
import metpy.calc as mpcalc
from metpy.testing import get_test_data
from metpy.units import units
"""
Explanation: xarray with MetPy Tutorial
xarray <http://xarray.pydata.org/>_ is a powerful Python package that provides N-dimensional
labeled arrays and datasets following the Common Data Model. While the process of integrating
xarray features into MetPy is ongoing, this tutorial demonstrates how xarray can be used
within the current version of MetPy. MetPy's integration primarily works through accessors
which allow simplified projection handling and coordinate identification. Unit and calculation
support is currently available in a limited fashion, but should be improved in future
versions.
End of explanation
"""
# Open the netCDF file as a xarray Dataset
data = xr.open_dataset(get_test_data('irma_gfs_example.nc', False))
# View a summary of the Dataset
print(data)
"""
Explanation: Getting Data
While xarray can handle a wide variety of n-dimensional data (essentially anything that can
be stored in a netCDF file), a common use case is working with model output. Such model
data can be obtained from a THREDDS Data Server using the siphon package, but for this
tutorial, we will use an example subset of GFS data from Hurricane Irma (September 5th, 2017).
End of explanation
"""
# To parse the full dataset, we can call parse_cf without an argument, and assign the returned
# Dataset.
data = data.metpy.parse_cf()
# If we instead want just a single variable, we can pass that variable name to parse_cf and
# it will return just that data variable as a DataArray.
data_var = data.metpy.parse_cf('Temperature_isobaric')
# To rename variables, supply a dictionary between old and new names to the rename method
data.rename({
'Vertical_velocity_pressure_isobaric': 'omega',
'Relative_humidity_isobaric': 'relative_humidity',
'Temperature_isobaric': 'temperature',
'u-component_of_wind_isobaric': 'u',
'v-component_of_wind_isobaric': 'v',
'Geopotential_height_isobaric': 'height'
}, inplace=True)
"""
Explanation: Preparing Data
To make use of the data within MetPy, we need to parse the dataset for projection and
coordinate information following the CF conventions. For this, we use the
data.metpy.parse_cf() method, which will return a new, parsed DataArray or
Dataset.
Additionally, we rename our data variables for easier reference.
End of explanation
"""
data['temperature'].metpy.convert_units('degC')
"""
Explanation: Units
MetPy's DataArray accessor has a unit_array property to obtain a pint.Quantity array
of just the data from the DataArray (metadata is removed) and a convert_units method to
convert the data from one unit to another (keeping it as a DataArray). For now, we'll
just use convert_units to convert our temperature to degC.
End of explanation
"""
# Get multiple coordinates (for example, in just the x and y direction)
x, y = data['temperature'].metpy.coordinates('x', 'y')
# If we want to get just a single coordinate from the coordinates method, we have to use
# tuple unpacking because the coordinates method returns a generator
vertical, = data['temperature'].metpy.coordinates('vertical')
# Or, we can just get a coordinate from the property
time = data['temperature'].metpy.time
# To verify, we can inspect all their names
print([coord.name for coord in (x, y, vertical, time)])
"""
Explanation: Coordinates
You may have noticed how we directly accessed the vertical coordinates above using their
names. However, in general, if we are working with a particular DataArray, we don't have to
worry about that since MetPy is able to parse the coordinates and so obtain a particular
coordinate type directly. There are two ways to do this:
Use the data_var.metpy.coordinates method
Use the data_var.metpy.x, data_var.metpy.y, data_var.metpy.vertical,
data_var.metpy.time properties
The valid coordinate types are:
x
y
vertical
time
(Both approaches and all four types are shown below)
End of explanation
"""
print(data['height'].metpy.sel(vertical=850 * units.hPa))
"""
Explanation: Indexing and Selecting Data
MetPy provides wrappers for the usual xarray indexing and selection routines that can handle
quantities with units. For DataArrays, MetPy also allows using the coordinate axis types
mentioned above as aliases for the coordinates. And so, if we wanted 850 hPa heights,
we would take:
End of explanation
"""
data_crs = data['temperature'].metpy.cartopy_crs
print(data_crs)
"""
Explanation: For full details on xarray indexing/selection, see
xarray's documentation <http://xarray.pydata.org/en/stable/indexing.html>_.
Projections
Getting the cartopy coordinate reference system (CRS) of the projection of a DataArray is as
straightforward as using the data_var.metpy.cartopy_crs property:
End of explanation
"""
data_globe = data['temperature'].metpy.cartopy_globe
print(data_globe)
"""
Explanation: The cartopy Globe can similarly be accessed via the data_var.metpy.cartopy_globe
property:
End of explanation
"""
lat, lon = xr.broadcast(y, x)
f = mpcalc.coriolis_parameter(lat)
dx, dy = mpcalc.lat_lon_grid_deltas(lon, lat, initstring=data_crs.proj4_init)
heights = data['height'].metpy.loc[{'time': time[0], 'vertical': 500. * units.hPa}]
u_geo, v_geo = mpcalc.geostrophic_wind(heights, f, dx, dy)
print(u_geo)
print(v_geo)
"""
Explanation: Calculations
Most of the calculations in metpy.calc will accept DataArrays by converting them
into their corresponding unit arrays. While this may often work without any issues, we must
keep in mind that because the calculations are working with unit arrays and not DataArrays:
The calculations will return unit arrays rather than DataArrays
Broadcasting must be taken care of outside of the calculation, as it would only recognize
dimensions by order, not name
As an example, we calculate geostrophic wind at 500 hPa below:
End of explanation
"""
heights = data['height'].metpy.loc[{'time': time[0], 'vertical': 500. * units.hPa}]
lat, lon = xr.broadcast(y, x)
f = mpcalc.coriolis_parameter(lat)
dx, dy = mpcalc.grid_deltas_from_dataarray(heights)
u_geo, v_geo = mpcalc.geostrophic_wind(heights, f, dx, dy)
print(u_geo)
print(v_geo)
"""
Explanation: Also, a limited number of calculations directly support xarray DataArrays or Datasets (they
can accept and return xarray objects). Right now, this includes
Derivative functions
first_derivative
second_derivative
gradient
laplacian
Cross-section functions
cross_section_components
normal_component
tangential_component
absolute_momentum
More details can be found by looking at the documentation for the specific function of
interest.
There is also the special case of the helper function, grid_deltas_from_dataarray, which
takes a DataArray input, but returns unit arrays for use in other calculations. We could
rewrite the above geostrophic wind example using this helper function as follows:
End of explanation
"""
# A very simple example example of a plot of 500 hPa heights
data['height'].metpy.loc[{'time': time[0], 'vertical': 500. * units.hPa}].plot()
plt.show()
# Let's add a projection and coastlines to it
ax = plt.axes(projection=ccrs.LambertConformal())
data['height'].metpy.loc[{'time': time[0],
'vertical': 500. * units.hPa}].plot(ax=ax, transform=data_crs)
ax.coastlines()
plt.show()
# Or, let's make a full 500 hPa map with heights, temperature, winds, and humidity
# Select the data for this time and level
data_level = data.metpy.loc[{time.name: time[0], vertical.name: 500. * units.hPa}]
# Create the matplotlib figure and axis
fig, ax = plt.subplots(1, 1, figsize=(12, 8), subplot_kw={'projection': data_crs})
# Plot RH as filled contours
rh = ax.contourf(x, y, data_level['relative_humidity'], levels=[70, 80, 90, 100],
colors=['#99ff00', '#00ff00', '#00cc00'])
# Plot wind barbs, but not all of them
wind_slice = slice(5, -5, 5)
ax.barbs(x[wind_slice], y[wind_slice],
data_level['u'].metpy.unit_array[wind_slice, wind_slice].to('knots'),
data_level['v'].metpy.unit_array[wind_slice, wind_slice].to('knots'),
length=6)
# Plot heights and temperature as contours
h_contour = ax.contour(x, y, data_level['height'], colors='k', levels=range(5400, 6000, 60))
h_contour.clabel(fontsize=8, colors='k', inline=1, inline_spacing=8,
fmt='%i', rightside_up=True, use_clabeltext=True)
t_contour = ax.contour(x, y, data_level['temperature'], colors='xkcd:deep blue',
levels=range(-26, 4, 2), alpha=0.8, linestyles='--')
t_contour.clabel(fontsize=8, colors='xkcd:deep blue', inline=1, inline_spacing=8,
fmt='%i', rightside_up=True, use_clabeltext=True)
# Add geographic features
ax.add_feature(cfeature.LAND.with_scale('50m'), facecolor=cfeature.COLORS['land'])
ax.add_feature(cfeature.OCEAN.with_scale('50m'), facecolor=cfeature.COLORS['water'])
ax.add_feature(cfeature.STATES.with_scale('50m'), edgecolor='#c7c783', zorder=0)
ax.add_feature(cfeature.LAKES.with_scale('50m'), facecolor=cfeature.COLORS['water'],
edgecolor='#c7c783', zorder=0)
# Set a title and show the plot
ax.set_title('500 hPa Heights (m), Temperature (\u00B0C), Humidity (%) at '
+ time[0].dt.strftime('%Y-%m-%d %H:%MZ'))
plt.show()
"""
Explanation: Plotting
Like most meteorological data, we want to be able to plot these data. DataArrays can be used
like normal numpy arrays in plotting code, which is currently the recommended approach, or we
can use some of xarray's plotting functionality for quick inspection of the data.
(More detail beyond the following can be found in xarray's plotting reference:
http://xarray.pydata.org/en/stable/plotting.html)
End of explanation
"""
daniel-koehn/DENISE-Black-Edition | par/pythonIO/DENISE-python_IO.ipynb | gpl-2.0 | # Import Python libraries
# ----------------------
import numpy as np # NumPy library
from denise_IO.denise_out import * # "DENISE" library
"""
Explanation: Creating input files for DENISE Black-Edition
Jupyter notebook for the definition of FD model and modelling/FWI/RTM parameters of DENISE Black-Edition.
For a more detailed explanation of the parameters, I refer to the DENISE Black-Edition user manual
Daniel Koehn
Kiel, 24.06.2019
Latest update: 26.08.2020
End of explanation
"""
para["filename"] = "DENISE_marm_OBC.inp"
"""
Explanation: 1. Short description of modelling/FWI problem
Define name of DENISE parameter file
End of explanation
"""
para["descr"] = "Marmousi-II"
"""
Explanation: Give a short description of your modelling/FWI problem
End of explanation
"""
para["PHYSICS"] = 1
"""
Explanation: What kind of PHYSICS do you want to use? (2D-PSV=1; 2D-AC=2; 2D-PSV-VTI=3; 2D-PSV-TTI=4; 2D-SH=5)
End of explanation
"""
para["MODE"] = 0
"""
Explanation: Choose DENISE operation mode (MODE): (forward_modelling_only=0; FWI=1; RTM=2)
End of explanation
"""
para["NX"] = 500 # number of grid points in x-direction
para["NY"] = 174 # number of grid points in y-direction
para["DH"] = 20. # spatial grid point distance [m]
"""
Explanation: 2. Load external 2D elastic model
First define spatial model discretization:
End of explanation
"""
# Define model basename
base_model = "model/marmousi_II_marine"
# Open vp-model and write IEEE-le binary data to vp array
# -------------------------------------------------------
f = open(base_model + ".vp")
data_type = np.dtype ('float32').newbyteorder ('<')
vp = np.fromfile (f, dtype=data_type)
f.close()
# Reshape (1 x nx*ny) vector to (ny x nx) matrix
vp = vp.reshape(para["NX"],para["NY"])
vp = np.transpose(vp)
vp = np.flipud(vp)
# Open vs-model and write IEEE-le binary data to vs array
# -------------------------------------------------------
f = open(base_model + ".vs")
data_type = np.dtype ('float32').newbyteorder ('<')
vs = np.fromfile (f, dtype=data_type)
f.close()
# Reshape (1 x nx*ny) vector to (ny x nx) matrix
vs = vs.reshape(para["NX"],para["NY"])
vs = np.transpose(vs)
vs = np.flipud(vs)
# Open rho-model and write IEEE-le binary data to rho array
# ---------------------------------------------------------
f = open(base_model + ".rho")
data_type = np.dtype ('float32').newbyteorder ('<')
rho = np.fromfile (f, dtype=data_type)
f.close()
# Reshape (1 x nx*ny) vector to (ny x nx) matrix
rho = rho.reshape(para["NX"],para["NY"])
rho = np.transpose(rho)
rho = np.flipud(rho)
"""
Explanation: Load external elastic model:
End of explanation
"""
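The three read blocks above are identical apart from the filename. As a sketch, they could be collapsed into a small helper (`read_bin` is a hypothetical name, not part of the denise_IO library):

```python
import numpy as np

def read_bin(name, nx, ny):
    """Read an IEEE-le float32 model file and return it as an (ny x nx) array."""
    data = np.fromfile(name, dtype=np.dtype('float32').newbyteorder('<'))
    # reshape the (1 x nx*ny) vector to (ny x nx), as done for vp/vs/rho above
    return np.flipud(np.transpose(data.reshape(nx, ny)))
```

With this helper, the model load would reduce to e.g. `vp = read_bin(base_model + ".vp", para["NX"], para["NY"])`.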
x = np.arange(para["DH"], para["DH"] * (para["NX"] + 1), para["DH"])
y = np.arange(para["DH"], para["DH"] * (para["NY"] + 1), para["DH"])
# convert m -> km
x = np.divide(x,1000.0);
y = np.divide(y,1000.0);
"""
Explanation: Define coordinate axis
End of explanation
"""
cmap = "magma" # colormap
# define minimum and maximum material parameter values
vpmin = np.min(vp)
vpmax = np.max(vp)
vsmin = np.min(vs)
vsmax = np.max(vs)
rhomin = np.min(rho)
rhomax = np.max(rho)
# plot elastic model
plot_model(vp,vs,rho,x,y,cmap,vpmin,vpmax,vsmin,vsmax,rhomin,rhomax)
"""
Explanation: Plot external model
End of explanation
"""
# model basename
model_basename = "marmousi_II_marine"
# location of model files during DENISE forward modelling run
para["MFILE"] = "start/" + model_basename
# writing P-wave velocity model to IEEE-le binary file
name_model = model_basename + ".vp"
f = open (name_model, mode='wb')
data_type = np.dtype ('float32').newbyteorder ('<')
vp1 = np.array(vp, dtype=data_type)
vp1 = np.rot90(vp1,3)
vp1.tofile(f)
f.close()
# writing S-wave velocity model to IEEE-le binary file
name_model = model_basename + ".vs"
f = open (name_model, mode='wb')
data_type = np.dtype ('float32').newbyteorder ('<')
vs1 = np.array(vs, dtype=data_type)
vs1 = np.rot90(vs1,3)
vs1.tofile(f)
f.close()
# writing density model to IEEE-le binary file
name_model = model_basename + ".rho"
f = open (name_model, mode='wb')
data_type = np.dtype ('float32').newbyteorder ('<')
rho1 = np.array(rho, dtype=data_type)
rho1 = np.rot90(rho1,3)
rho1.tofile(f)
f.close()
"""
Explanation: Write model to IEEE-le binary file
End of explanation
"""
print("ximage n1=" + str(para["NY"]) + " < " + model_basename + ".vp")
print("ximage n1=" + str(para["NY"]) + " < " + model_basename + ".vs")
print("ximage n1=" + str(para["NY"]) + " < " + model_basename + ".rho")
"""
Explanation: To check if the models are correctly written to the binary files, you can use the Seismic Unix function ximage
End of explanation
"""
# Order of spatial FD operator (2, 4, 6, 8, 10, 12)
para["FD_ORDER"] = 8
# Maximum relative group velocity error E
# (minimum number of grid points per shortest wavelength is defined by FD_ORDER and E)
# values:
# 0 = Taylor coefficients
# 1 = Holberg coeff.: E = 0.1 %
# 2 = E = 0.5 %
# 3 = E = 1.0 %
# 4 = E = 3.0 %
para["max_relative_error"] = 3
"""
Explanation: 3. Define spatial FD operator
Spatial FD operator coefficients are based on Taylor series expansion and optimized according to Holberg (1987)
End of explanation
"""
# maximum modelling frequency based on grid dispersion criterion for spatial FD operator
freqmax = calc_max_freq(vp,vs,para)
"""
Explanation: Estimate the maximum frequency in the source wavelet, which can be modelled by the given FD grid discretization and spatial FD operator, using the grid dispersion citerion
End of explanation
"""
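The idea behind calc_max_freq can be sketched as follows. The number of grid points per minimum wavelength below is an illustrative placeholder; the actual value is derived from FD_ORDER and max_relative_error:

```python
import numpy as np

def max_frequency(vp, vs, dh, n_per_wavelength=8.0):
    """Grid dispersion criterion: the shortest wavelength (slowest non-zero
    velocity) must be sampled by n_per_wavelength grid points per wavelength.
    n_per_wavelength is illustrative; calc_max_freq looks it up from the
    FD operator order and the tolerated group velocity error."""
    v = np.concatenate([np.ravel(vp), np.ravel(vs)])
    vmin = v[v > 0.0].min()   # ignore zero vs, e.g. in the water column
    return vmin / (n_per_wavelength * dh)
```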
para["NPROCX"] = 5 # number of processors in x-direction
para["NPROCY"] = 3 # number of processors in y-direction
"""
Explanation: If you want to model higher frequency wave propagation, you have to decrease the spatial gridpoint distance DH by resampling the model
4. Parallelization by Domain Decomposition
End of explanation
"""
check_domain_decomp(para)
"""
Explanation: Check if the spatial domain decomposition is consistent with the spatial FD grid discretization. The following conditions have to be satisfied
NX % NPROCX = 0
NY % NPROCY = 0
End of explanation
"""
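A minimal sketch of the divisibility test performed by check_domain_decomp:

```python
def domain_decomp_ok(nx, ny, nprocx, nprocy):
    # each MPI rank must own an equal-sized subgrid
    return nx % nprocx == 0 and ny % nprocy == 0
```

For the values above, `domain_decomp_ok(500, 174, 5, 3)` returns True.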
para["DT"] = check_stability(vp,vs,para)
"""
Explanation: If the domain decomposition conditions are not satisfied, you have to add additional gridpoints at the bottom and right model boundary.
5. Time stepping
Calculate maximum time step DT according to the Courant-Friedrichs-Lewy (CFL) criterion
End of explanation
"""
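A sketch of the CFL criterion evaluated by check_stability. The operator constant h below (sum of the absolute FD coefficient values, here for an 8th-order Taylor operator) is illustrative; check_stability uses the exact value for the chosen operator:

```python
import numpy as np

def cfl_timestep(vp, dh, h=1.2257):
    """2D stability limit: DT <= DH / (h * sqrt(2) * vmax)."""
    vmax = float(np.max(vp))
    return dh / (h * np.sqrt(2.0) * vmax)
```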
para["TIME"] = 6.0 # time of wave propagation [s]
para["DT"] = 2.0e-3 # timestep [s]
"""
Explanation: If you want to apply a FWI, keep in mind that the FWI will change the velocity model. Therefore, the maximum seismic velocities in the model will increase and you should choose a smaller time step than the DT derived from the CFL criterion
End of explanation
"""
para["L"] = 0 # number of relaxation mechanisms
para["FL"] = 40. # relaxation frequencies [Hz]
"""
Explanation: 6. Q-approximation
End of explanation
"""
# free surface boundary condition
para["FREE_SURF"] = 1 # activate free surface boundary condition
# PML boundary frame
para["FW"] = 10
para["DAMPING"] = 1500.
para["FPML"] = 10.
para["npower"] = 4.
para["k_max_PML"] = 1.
"""
Explanation: 7. Boundary conditions
Define boundary conditions. FREE_SURF=1 activates a free-surface boundary condition at y = 0 m. PML absorbing boundaries are defined by their width FW, damping velocity DAMPING, damping frequency FPML, degree of damping profile npower and k_max_PML
End of explanation
"""
# receiver x-coordinates
drec = 20. # receiver spacing [m]
xrec1 = 800. # 1st receiver position [m]
xrec2 = 8780. # last receiver position [m]
xrec = np.arange(xrec1, xrec2 + para["DH"], drec) # receiver positions in x-direction [m]
# place receivers at depth yrec [m]
depth_rec = 460. # receiver depth [m]
yrec = depth_rec * xrec/xrec
# assemble vectors into an array
tmp = np.zeros(xrec.size, dtype=[('var1', float), ('var2', float)])
tmp['var1'] = xrec
tmp['var2'] = yrec
"""
Explanation: 8. Define acquisition geometry
a) Receiver properties and positions
Place receivers on FD modelling grid
End of explanation
"""
check_src_rec_pml(xrec,yrec,para,1)
"""
Explanation: Check if receivers are located in computational domain and not the PMLs
End of explanation
"""
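A sketch of what check_src_rec_pml tests: positions must not fall inside the PML frame of width FW grid points. With FREE_SURF=1 there is no PML at the top boundary, only left, right and bottom:

```python
import numpy as np

def inside_pml(xpos, ypos, para):
    """Return True for positions inside the left/right/bottom PML frames
    (sketch of check_src_rec_pml; assumes FREE_SURF=1, i.e. no top PML)."""
    fw = para["FW"] * para["DH"]
    xmax = para["NX"] * para["DH"]
    ymax = para["NY"] * para["DH"]
    x = np.asarray(xpos, dtype=float)
    y = np.asarray(ypos, dtype=float)
    return (x < fw) | (x > xmax - fw) | (y > ymax - fw)
```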
# write receiver positions to file
basename_rec = 'receiver_OBC'
np.savetxt(basename_rec + ".dat", tmp, fmt='%4.3f %4.3f')
"""
Explanation: Write receiver positions to file
End of explanation
"""
# type of seismogram
para["SEISMO"] = 1
"""
Explanation: Define type of seismograms SEISMO:
SEISMO=0: no seismograms
SEISMO=1: particle-velocities
SEISMO=2: pressure (hydrophones)
SEISMO=3: curl and div
SEISMO=4: everything
End of explanation
"""
para["READREC"] = 1
"""
Explanation: How does DENISE read receiver positions from a file? In case of a fixed spread geometry you only need a single receiver file (READREC=1). If you want to model streamer geometry or more generally variable acquisition geometry with changing receiver positions for each shot, you have to define a separate receiver file for each shot (READREC=2)
End of explanation
"""
para["REC_FILE"] = "./receiver/" + basename_rec
"""
Explanation: Define location and basename of receiver file, defined above, without ".dat" extension
End of explanation
"""
para["NDT"] = 1 # seismogram sampling rate in timesteps (has to be set to NDT=1 if you run FWI)
# location and name of seismogram output files in SU format
# particle velocities (if SEISMO=1 or SEISMO=4)
para["SEIS_FILE_VX"] = "su/DENISE_MARMOUSI_x.su" # filename for vx component
para["SEIS_FILE_VY"] = "su/DENISE_MARMOUSI_y.su" # filename for vy component
# curl and div of wavefield (if SEISMO=3 or SEISMO=4)
para["SEIS_FILE_CURL"] = "su/DENISE_MARMOUSI_rot.su" # filename for rot_z component ~ S-wave energy
para["SEIS_FILE_DIV"] = "su/DENISE_MARMOUSI_div.su" # filename for div component ~ P-wave energy
# pressure field (hydrophones) (if SEISMO=2 or SEISMO=4)
para["SEIS_FILE_P"] = "su/DENISE_MARMOUSI_p.su" # filename for pressure component
"""
Explanation: Define the seismogram properties
End of explanation
"""
# source x-coordinates
dsrc = 80. # source spacing [m]
xsrc1 = 800. # 1st source position [m]
xsrc2 = 8780. # last source position [m]
xsrc = np.arange(xsrc1, xsrc2 + para["DH"], dsrc) # source positions in x-direction [m]
# place sources at depth ysrc [m]
depth_src = 40. # source depth [m]
ysrc = depth_src * xsrc/xsrc
# number of source positions
nshot = (int)(len(ysrc))
# z-coordinate = 0 due to 2D code [m]
zsrc = 0.0 * (xsrc / xsrc)
# time delay of source wavelet [s]
td = 0.0 * (xsrc / xsrc)
# center frequency of pre-defined source wavelet [Hz]
fc = 10.0 * (xsrc / xsrc)
# you can also use the maximum frequency computed from the grid dispersion
# criterion in section 3. based on spatial discretization and FD operator
# fc = (freqmax / 2.) * (xsrc / xsrc)
# amplitude of source wavelet [m]
amp = 1.0 * (xsrc / xsrc)
# angle of rotated source [°]
angle = 0.0 * (xsrc / xsrc)
# define source type:
# 2D PSV case
# -----------
# explosive sources (QUELLTYP=1)
# point forces in x- and y-direction (QUELLTYP=2,3)
# 2D SH case
# -----------
# point force in z-direction (QUELLTYP=1)
QUELLTYP = 1
src_type = QUELLTYP * (xsrc / xsrc)
"""
Explanation: b) Source properties and positions
Distribute sources on FD modelling grid and define source properties
End of explanation
"""
check_src_rec_pml(xsrc,ysrc,para,2)
"""
Explanation: Check if sources are located in computational domain and not the PMLs
End of explanation
"""
# write source positions and properties to file
basename_src = "source_OBC_VSP.dat"
# create and open source file
fp = open(basename_src, mode='w')
# write nshot to file header
fp.write(str(nshot) + "\n")
# write source properties to file
for i in range(0, nshot):
    fp.write('{:4.2f}\t{:4.2f}\t{:4.2f}\t{:1.2f}\t{:4.2f}\t{:1.2f}\t{:1.2f}\t{}\t\n'.format(
        xsrc[i], zsrc[i], ysrc[i], td[i], fc[i], amp[i], angle[i], src_type[i]))
# close source file
fp.close()
"""
Explanation: Write source positions to file
End of explanation
"""
para["SOURCE_FILE"] = "./source/" + basename_src
"""
Explanation: Define location of the source file:
End of explanation
"""
para["RUN_MULTIPLE_SHOTS"] = 1
"""
Explanation: Do you want to excite all source positions simultaneously (RUN_MULTIPLE_SHOTS=0) or start a separate modelling run for each shot (RUN_MULTIPLE_SHOTS=1)
End of explanation
"""
para["QUELLART"] = 6
"""
Explanation: Define shape of the source signal (QUELLART)
Ricker wavelet = 1
\begin{equation}
\rm{r(\tau)=\left(1-2\tau^2\right)\exp(-\tau^2)} \quad \mbox{with} \quad \rm{\tau=\frac{\pi(t-1.5/f_c-t_d)}{1.0/f_c}}
\label{eq_ricker}
\end{equation}
Fuchs-Mueller wavelet = 2
\begin{equation}
\rm{f_m(t)=\sin(2\pi(t-t_d)f_c)-0.5\sin(4\pi(t-t_d)f_c)} \quad \mbox{if} \quad \rm{t\in[t_d,t_d+1/f_c]} \quad \mbox{else} \quad \rm{f_m(t)=0}
\label{eq_fm}
\end{equation}
read wavelet from ASCII file = 3
SIN^3 wavelet = 4
\begin{equation}
\rm{s3(t)=0.75 \pi f_c \sin(\pi(t+t_d)f_c)^3}\quad \mbox{if} \quad \rm{t \in[t_d,t_d+1/f_c]} \quad \mbox{else} \quad \rm{s3(t)=0}
\label{eq_s3}
\end{equation}
Gaussian derivative wavelet = 5
\begin{equation}
\rm{gd(t)=-2 \pi^2 f_c^2 (t-t_d) exp(-\pi^2 f_c^2 (t-t_d)^2)}
\label{eq_s4}
\end{equation}
Bandlimited spike wavelet = 6
Klauder wavelet = 7
\begin{equation}
\rm{klau(t) = real\left\{\frac{\sin(\pi k \tau (TS-\tau))}{\pi k \tau}\,\exp(2 \pi i f_0 \tau)\right\}} \quad \mbox{with} \quad \rm{\tau=t-1.5/FC\_SPIKE\_1-t_d}
\label{eq_s5}
\end{equation}
with
$\rm{k=(FC\_SPIKE\_2-FC\_SPIKE\_1)/TS}$ (rate of change of frequency with time)
$\rm{f_0=(FC\_SPIKE\_2+FC\_SPIKE\_1)/2}$ (midfrequency of bandwidth)
$\rm{i^2=-1}$
In these equations, t denotes time and $f_c$ is the center frequency. $t_d$ is a time delay which can be defined for each source position in SOURCE_FILE. Note that the symmetric (zero phase) Ricker signal is always delayed by $1.5/f_c$, so the maximum amplitude is excited at the source location at $t = 1.5/f_c + t_d$.
End of explanation
"""
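As a worked example, the Ricker wavelet (QUELLART=1) defined above can be implemented directly:

```python
import numpy as np

def ricker(t, fc, td=0.0):
    """Ricker wavelet as defined above:
    r(tau) = (1 - 2 tau^2) exp(-tau^2), tau = pi * (t - 1.5/fc - td) * fc."""
    tau = np.pi * (t - 1.5 / fc - td) * fc
    return (1.0 - 2.0 * tau ** 2) * np.exp(-tau ** 2)
```

The maximum amplitude of 1 is excited at t = 1.5/fc + td.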
para["SIGNAL_FILE"] = "./wavelet/wavelet_marmousi"
"""
Explanation: If you read the wavelet from an ASCII file (QUELLART=3), you have to define the location of the signal file (SIGNAL_FILE)
End of explanation
"""
para["FC_SPIKE_1"] = -5.0 # lower corner frequency [Hz]
para["FC_SPIKE_2"] = 15.0 # upper corner frequency [Hz]
# you can also use the maximum frequency computed from the grid dispersion
# criterion in section 3. based on spatial discretization and FD operator
# para["FC_SPIKE_2"] = freqmax # upper corner frequency [Hz]
para["ORDER_SPIKE"] = 5 # order of Butterworth filter
"""
Explanation: In case of the bandlimited spike wavelet you have to define ...
If FC_SPIKE_1 <= 0.0 a low-pass filtered spike with upper corner frequency FC_SPIKE_2 and order ORDER_SPIKE is calculated
If FC_SPIKE_1 > 0.0 a band-pass filtered spike with lower corner frequency FC_SPIKE_1 and upper corner frequency FC_SPIKE_2 with order ORDER_SPIKE is calculated
End of explanation
"""
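A sketch of how such a band-limited spike can be generated, using scipy's Butterworth filter design; the actual DENISE implementation is in C and may differ in filter details:

```python
import numpy as np
from scipy.signal import butter, lfilter

def bandlimited_spike(nt, dt, fc1, fc2, order=5):
    """Unit spike, low-pass filtered with corner fc2 if fc1 <= 0,
    otherwise band-pass filtered between fc1 and fc2 (sketch of QUELLART=6)."""
    spike = np.zeros(nt)
    spike[0] = 1.0
    nyq = 0.5 / dt
    if fc1 <= 0.0:
        b, a = butter(order, fc2 / nyq, btype="low")
    else:
        b, a = butter(order, [fc1 / nyq, fc2 / nyq], btype="band")
    return lfilter(b, a, spike)
```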
para["TS"] = 8.0 # sweep length [s]
"""
Explanation: In case of the Klauder wavelet you have to define the sweep length TS
End of explanation
"""
para["WRITE_STF"] = 1
"""
Explanation: Do you want to write the source wavelet to a SU file for each shot (WRITE_STF=1)?
End of explanation
"""
cmap = "inferno"
plot_acq(vp,xrec/1000,yrec/1000,xsrc/1000,ysrc/1000,x,y,cmap,vpmin,vpmax)
"""
Explanation: Plot acquisition geometry relative to the subsurface model. Red stars denote the source positions and cyan triangles receiver positions
End of explanation
"""
para["SNAP"] = 0
para["SNAP_SHOT"] = 1 # compute and write snapshots for shot no. SNAP_SHOT
para["TSNAP1"] = 0.002 # first snapshot [s] (TSNAP1 has to fulfill the condition TSNAP1 > DT)
para["TSNAP2"] = 3.0 # last snapshot [s]
para["TSNAPINC"] = 0.06 # snapshot increment [s]
para["IDX"] = 1 # write only every IDX spatial grid point in x-direction to snapshot file
para["IDY"] = 1 # write only every IDY spatial grid point in y-direction to snapshot file
para["SNAP_FILE"] = "./snap/waveform_forward" # location and basename of the snapshot files
"""
Explanation: 9. Wavefield snapshots
Output of wavefield snapshots (SNAP>0):
particle velocities: SNAP=1
pressure field: SNAP=2
curl and divergence energy: SNAP=3
both particle velocities and energy : SNAP=4
End of explanation
"""
para["LOG_FILE"] = "log/Marmousi.log" # Log file name
"""
Explanation: 10. Log file name
End of explanation
"""
para["ITERMAX"] = 600 # maximum number of TDFWI iterations at each FWI stage defined in FWI workflow file
para["JACOBIAN"] = "jacobian/gradient_Test" # location and basename of FWI gradients
para["DATA_DIR"] = "su/MARMOUSI_spike/DENISE_MARMOUSI" # location and basename of field data seismograms
para["INVMAT1"] = 1 # material parameterization for FWI (Vp,Vs,rho=1/Zp,Zs,rho=2/lam,mu,rho=3)
# Currently, only the Vp-Vs-rho parametrization (INVMAT1=1) can be used
para["GRAD_FORM"] = 1 # gradient formulation (time integration of adjoint sources = 1, no time integration = 2)
# Adjoint source type
# x-y components = 1; y-comp = 2; x-comp = 3; p-comp = 4; x-p-comp = 5; y-p-comp = 6; x-y-p-comp = 7
para["QUELLTYPB"] = 1
# Optimization method
para["GRAD_METHOD"] = 2 # PCG = 1; LBFGS = 2
# PCG_BETA (Fletcher_Reeves=1/Polak_Ribiere=2/Hestenes_Stiefel=3/Dai_Yuan=4)
para["PCG_BETA"] = 2
# store NLBFGS update during LBFGS optimization
para["NLBFGS"] = 20
# store wavefields only every DTINV time sample for gradient computation
para["DTINV"] = 3
# FWI log file location and name
para["MISFIT_LOG_FILE"] = "Marmousi_fwi_log.dat"
"""
Explanation: FWI parameters
If you only want to run FD forward modelling runs, you can neglect the parameters below, which are fixed FWI parameters
11. General FWI parameters
End of explanation
"""
# gradient taper geometry
para["GRADT1"] = 21
para["GRADT2"] = 25
para["GRADT3"] = 490
para["GRADT4"] = 500
para["TAPERLENGTH"] = (int)(para["GRADT2"]-para["GRADT1"])
# apply vertical taper (SWS_TAPER_GRAD_VERT=1)
para["SWS_TAPER_GRAD_VERT"] = 0
# apply horizontal taper (SWS_TAPER_GRAD_HOR=1)
para["SWS_TAPER_GRAD_HOR"] = 1
# exponent of depth scaling for preconditioning
para["EXP_TAPER_GRAD_HOR"] = 2.0
# Circular taper around all sources (not at receiver positions)
para["SWS_TAPER_GRAD_SOURCES"] = 0
para["SWS_TAPER_CIRCULAR_PER_SHOT"] = 0
para["SRTSHAPE"] = 1 # SRTSHAPE: 1 = error_function; 2 = log_function
para["SRTRADIUS"] = 5. # --> minimum for SRTRADIUS is 5x5 gridpoints
# Read taper file from external file
para["SWS_TAPER_FILE"] = 0
# Location and basename of taper files
para["TFILE"] = "taper/taper"
"""
Explanation: 12. FWI gradient taper functions
End of explanation
"""
# model location and basename
para["INV_MODELFILE"] = "model/modelTest"
# write inverted model after each iteration (yes=1)?
# Warning: Might require a lot of disk space
para["INV_MOD_OUT"] = 0
"""
Explanation: 13. FWI model output
End of explanation
"""
# upper limit for vp
para["VPUPPERLIM"] = 6000.
# lower limit for vp
para["VPLOWERLIM"] = 0.
# upper limit for vs
para["VSUPPERLIM"] = 4000.
# lower limit for vs
para["VSLOWERLIM"] = 0.
# upper limit for density
para["RHOUPPERLIM"] = 3000.
# lower limit for density
para["RHOLOWERLIM"] = 1000.
# upper limit for Qs
para["QSUPPERLIM"] = 100.
# lower limit for Qs
para["QSLOWERLIM"] = 10.
"""
Explanation: 14. Bound constraints
Upper and lower limits for different model parameter classes
End of explanation
"""
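The bound constraints act as a projection of each updated model back into the feasible parameter range, which can be sketched as:

```python
import numpy as np

def apply_bounds(model, lower, upper):
    """Project an updated model onto the feasible range (sketch of how the
    bound constraints act during inversion)."""
    return np.clip(model, lower, upper)

# e.g. apply_bounds(vp_updated, para["VPLOWERLIM"], para["VPUPPERLIM"])
```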
para["EPS_SCALE"] = 0.01 # initial model update during step length estimation
para["STEPMAX"] = 6 # maximum number of attemps to find a step length during line search
para["SCALEFAC"] = 2. # scale step during line search
# evaluate objective function only for a limited number of shots
para["TESTSHOT_START"] = 25
para["TESTSHOT_END"] = 75
para["TESTSHOT_INCR"] = 10
"""
Explanation: 15. Step length estimation
End of explanation
"""
check_steplength(nshot,para)
"""
Explanation: Check step length estimation
End of explanation
"""
# Activate trace muting (yes=1)
para["TRKILL"] = 0
# Location and name of trace mute file containing muting matrix
para["TRKILL_FILE"] = "./trace_kill/trace_kill.dat"
"""
Explanation: 16. Trace muting
End of explanation
"""
# Basename of picked traveltimes for each shot
# Time damping parameters are defined in the DENISE
# workflow file for each FWI stage
para["PICKS_FILE"] = "./picked_times/picks_"
"""
Explanation: 17. Time damping
End of explanation
"""
write_denise_para(para)
"""
Explanation: 18. Create DENISE parameter file
End of explanation
"""
para["filename_workflow"] = "FWI_workflow_marmousi.inp"
"""
Explanation: Define FWI workflow file
If you want to run a FWI, you also have to define a FWI workflow file ...
Define name of DENISE FWI workflow file
End of explanation
"""
write_denise_workflow_header(para)
"""
Explanation: Create Header for DENISE FWI workflow file
End of explanation
"""
# Define FWI parameters for stage 1 ...
# Termination criterion
para["PRO"] = 0.01
# Frequency filtering
# TIME_FILT = 0 (apply no frequency filter to field data and source wavelet)
# TIME_FILT = 1 (apply low-pass filter to field data and source wavelet)
# TIME_FILT = 2 (apply band-pass filter to field data and source wavelet)
para["TIME_FILT"] = 1
# Low- (FC_LOW) and high-pass (FC_HIGH) corner frequencies of Butterworth filter
# of order ORDER
para["FC_LOW"] = 0.0
para["FC_HIGH"] = 2.0
para["ORDER"] = 6
# Time windowing
para["TIME_WIN"] = 0
para["GAMMA"] = 20.0
para["TWIN-"] = 0.0
para["TWIN+"] = 0.0
# Starting FWI of parameter class Vp, Vs, rho, Qs from iteration number
# INV_VP_ITER, INV_VS_ITER, INV_RHO_ITER, INV_QS_ITER
para["INV_VP_ITER"] = 0
para["INV_VS_ITER"] = 0
para["INV_RHO_ITER"] = 0
para["INV_QS_ITER"] = 0
# Apply spatial Gaussian filter to gradients
# SPATFILTER = 0 (apply no filter)
# SPATFILTER = 4 (Anisotropic Gaussian filter with half-width adapted to the local wavelength)
para["SPATFILTER"] = 0
# If Gaussian filter (SPATFILTER=4), define the fraction of the local wavelength in ...
# x-direction WD_DAMP and y-direction WD_DAMP1 used to define the half-width of the
# Gaussian filter
para["WD_DAMP"] = 0.5
para["WD_DAMP1"] = 0.5
# Preconditioning of the gradient directions
# EPRECOND = 0 - no preconditioning
# EPRECOND = 1 - approximation of the Pseudo-Hessian (Shin et al. 2001)
# EPRECOND = 3 - Hessian approximation according to Plessix & Mulder (2004)
para["EPRECOND"] = 3
# Define objective function
# LNORM = 2 - L2 norm
# LNORM = 5 - global correlation norm (Choi & Alkhalifah 2012)
# LNORM = 6 - envelope objective functions after Chi, Dong and Liu (2014) - EXPERIMENTAL
# LNORM = 7 - NIM objective function after Chauris et al. (2012) and Tejero et al. (2015) - EXPERIMENTAL
para["LNORM"] = 2
# Activate Random Objective Waveform Inversion (ROWI, Pan & Gao 2020)
# ROWI = 0 - no ROWI
# ROWI = 1 - 50% GCN l2 norm / 50% AGC l2 norm (AC, PSV, SH modules only)
para["ROWI"] = 0
# Source wavelet inversion
# STF = 0 - no source wavelet inversion
# STF = 1 - estimate source wavelet by stabilized Wiener Deconvolution
para["STF"] = 0
# If OFFSETC_STF > 0, limit source wavelet inversion to maximum offsets OFFSETC_STF
para["OFFSETC_STF"] = -4.0
# Source wavelet inversion stabilization term to avoid division by zero in Wiener deconvolution
para["EPS_STF"] = 1e-1
# Apply Offset mute to field and modelled seismograms
# OFFSET_MUTE = 0 - no offset mute
# OFFSET_MUTE = 1 - mute far-offset data for offset >= OFFSETC
# OFFSET_MUTE = 2 - mute near-offset data for offset <= OFFSETC
para["OFFSET_MUTE"] = 0
para["OFFSETC"] = 10
# Scale density and Qs updates during multiparameter FWI by factors
# SCALERHO and SCALEQS, respectively
para["SCALERHO"] = 0.5
para["SCALEQS"] = 1.0
# If LNORM = 6, define type of envelope objective function (EXPERIMENTAL)
# ENV = 1 - L2 envelope objective function
# ENV = 2 - Log L2 envelope objective function
para["ENV"] = 1
# Integrate synthetic and modelled data NORDER times (EXPERIMENTAL)
para["N_ORDER"] = 0
# Write parameters to DENISE workflow file
write_denise_workflow(para)
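Stages 2-4 below repeat this parameter block verbatim, changing only FC_HIGH (5, 10 and 20 Hz). As a hypothetical refactoring, the repetition could be generated in a loop; `write_stage` stands in for write_denise_workflow:

```python
def write_multiscale_stages(para, write_stage, fc_highs=(2.0, 5.0, 10.0, 20.0)):
    """Emit one FWI workflow stage per upper corner frequency (sketch)."""
    for fc_high in fc_highs:
        para["FC_HIGH"] = fc_high
        write_stage(dict(para))
```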
# Define FWI parameters for stage 2 ...
# Termination criterion
para["PRO"] = 0.01
# Frequency filtering
# TIME_FILT = 0 (apply no frequency filter to field data and source wavelet)
# TIME_FILT = 1 (apply low-pass filter to field data and source wavelet)
# TIME_FILT = 2 (apply band-pass filter to field data and source wavelet)
para["TIME_FILT"] = 1
# Low- (FC_LOW) and high-pass (FC_HIGH) corner frequencies of Butterworth filter
# of order ORDER
para["FC_LOW"] = 0.0
para["FC_HIGH"] = 5.0
para["ORDER"] = 6
# Time windowing
para["TIME_WIN"] = 0
para["GAMMA"] = 20.0
para["TWIN-"] = 0.0
para["TWIN+"] = 0.0
# Starting FWI of parameter class Vp, Vs, rho, Qs from iteration number
# INV_VP_ITER, INV_VS_ITER, INV_RHO_ITER, INV_QS_ITER
para["INV_VP_ITER"] = 0
para["INV_VS_ITER"] = 0
para["INV_RHO_ITER"] = 0
para["INV_QS_ITER"] = 0
# Apply spatial Gaussian filter to gradients
# SPATFILTER = 0 (apply no filter)
# SPATFILTER = 4 (Anisotropic Gaussian filter with half-width adapted to the local wavelength)
para["SPATFILTER"] = 0
# If Gaussian filter (SPATFILTER=4), define the fraction of the local wavelength in ...
# x-direction WD_DAMP and y-direction WD_DAMP1 used to define the half-width of the
# Gaussian filter
para["WD_DAMP"] = 0.5
para["WD_DAMP1"] = 0.5
# Preconditioning of the gradient directions
# EPRECOND = 0 - no preconditioning
# EPRECOND = 1 - approximation of the Pseudo-Hessian (Shin et al. 2001)
# EPRECOND = 3 - Hessian approximation according to Plessix & Mulder (2004)
para["EPRECOND"] = 3
# Define objective function
# LNORM = 2 - L2 norm
# LNORM = 5 - global correlation norm (Choi & Alkhalifah 2012)
# LNORM = 6 - envelope objective functions after Chi, Dong and Liu (2014) - EXPERIMENTAL
# LNORM = 7 - NIM objective function after Chauris et al. (2012) and Tejero et al. (2015) - EXPERIMENTAL
para["LNORM"] = 2
# Activate Random Objective Waveform Inversion (ROWI, Pan & Gao 2020)
# ROWI = 0 - no ROWI
# ROWI = 1 - 50% GCN l2 norm / 50% AGC l2 norm (AC, PSV, SH modules only)
para["ROWI"] = 0
# Source wavelet inversion
# STF = 0 - no source wavelet inversion
# STF = 1 - estimate source wavelet by stabilized Wiener Deconvolution
para["STF"] = 0
# If OFFSETC_STF > 0, limit source wavelet inversion to maximum offsets OFFSETC_STF
para["OFFSETC_STF"] = -4.0
# Source wavelet inversion stabilization term to avoid division by zero in Wiener deconvolution
para["EPS_STF"] = 1e-1
# Apply Offset mute to field and modelled seismograms
# OFFSET_MUTE = 0 - no offset mute
# OFFSET_MUTE = 1 - mute far-offset data for offset >= OFFSETC
# OFFSET_MUTE = 2 - mute near-offset data for offset <= OFFSETC
para["OFFSET_MUTE"] = 0
para["OFFSETC"] = 10
# Scale density and Qs updates during multiparameter FWI by factors
# SCALERHO and SCALEQS, respectively
para["SCALERHO"] = 0.5
para["SCALEQS"] = 1.0
# If LNORM = 6, define type of envelope objective function (EXPERIMENTAL)
# ENV = 1 - L2 envelope objective function
# ENV = 2 - Log L2 envelope objective function
para["ENV"] = 1
# Integrate synthetic and modelled data NORDER times (EXPERIMENTAL)
para["N_ORDER"] = 0
# Write parameters to DENISE workflow file
write_denise_workflow(para)
# Define FWI parameters for stage 3 ...
# Termination criterion
para["PRO"] = 0.01
# Frequency filtering
# TIME_FILT = 0 (apply no frequency filter to field data and source wavelet)
# TIME_FILT = 1 (apply low-pass filter to field data and source wavelet)
# TIME_FILT = 2 (apply band-pass filter to field data and source wavelet)
para["TIME_FILT"] = 1
# Low- (FC_LOW) and high-pass (FC_HIGH) corner frequencies of Butterworth filter
# of order ORDER
para["FC_LOW"] = 0.0
para["FC_HIGH"] = 10.0
para["ORDER"] = 6
# Time windowing
para["TIME_WIN"] = 0
para["GAMMA"] = 20.0
para["TWIN-"] = 0.0
para["TWIN+"] = 0.0
# Starting FWI of parameter class Vp, Vs, rho, Qs from iteration number
# INV_VP_ITER, INV_VS_ITER, INV_RHO_ITER, INV_QS_ITER
para["INV_VP_ITER"] = 0
para["INV_VS_ITER"] = 0
para["INV_RHO_ITER"] = 0
para["INV_QS_ITER"] = 0
# Apply spatial Gaussian filter to gradients
# SPATFILTER = 0 (apply no filter)
# SPATFILTER = 4 (Anisotropic Gaussian filter with half-width adapted to the local wavelength)
para["SPATFILTER"] = 0
# If Gaussian filter (SPATFILTER=4), define the fraction of the local wavelength in ...
# x-direction WD_DAMP and y-direction WD_DAMP1 used to define the half-width of the
# Gaussian filter
para["WD_DAMP"] = 0.5
para["WD_DAMP1"] = 0.5
# Preconditioning of the gradient directions
# EPRECOND = 0 - no preconditioning
# EPRECOND = 1 - approximation of the Pseudo-Hessian (Shin et al. 2001)
# EPRECOND = 3 - Hessian approximation according to Plessix & Mulder (2004)
para["EPRECOND"] = 3
# Define objective function
# LNORM = 2 - L2 norm
# LNORM = 5 - global correlation norm (Choi & Alkhalifah 2012)
# LNORM = 6 - envelope objective functions after Chi, Dong and Liu (2014) - EXPERIMENTAL
# LNORM = 7 - NIM objective function after Chauris et al. (2012) and Tejero et al. (2015) - EXPERIMENTAL
para["LNORM"] = 2
# Activate Random Objective Waveform Inversion (ROWI, Pan & Gao 2020)
# ROWI = 0 - no ROWI
# ROWI = 1 - 50% GCN l2 norm / 50% AGC l2 norm (AC, PSV, SH modules only)
para["ROWI"] = 0
# Source wavelet inversion
# STF = 0 - no source wavelet inversion
# STF = 1 - estimate source wavelet by stabilized Wiener Deconvolution
para["STF"] = 0
# If OFFSETC_STF > 0, limit source wavelet inversion to maximum offsets OFFSETC_STF
para["OFFSETC_STF"] = -4.0
# Source wavelet inversion stabilization term to avoid division by zero in Wiener deconvolution
para["EPS_STF"] = 1e-1
# Apply Offset mute to field and modelled seismograms
# OFFSET_MUTE = 0 - no offset mute
# OFFSET_MUTE = 1 - mute far-offset data for offset >= OFFSETC
# OFFSET_MUTE = 2 - mute near-offset data for offset <= OFFSETC
para["OFFSET_MUTE"] = 0
para["OFFSETC"] = 10
# Scale density and Qs updates during multiparameter FWI by factors
# SCALERHO and SCALEQS, respectively
para["SCALERHO"] = 0.5
para["SCALEQS"] = 1.0
# If LNORM = 6, define type of envelope objective function (EXPERIMENTAL)
# ENV = 1 - L2 envelope objective function
# ENV = 2 - Log L2 envelope objective function
para["ENV"] = 1
# Integrate synthetic and modelled data N_ORDER times (EXPERIMENTAL)
para["N_ORDER"] = 0
# Write parameters to DENISE workflow file
write_denise_workflow(para)
# Define FWI parameters for stage 4 ...
# Termination criterion
para["PRO"] = 0.01
# Frequency filtering
# TIME_FILT = 0 (apply no frequency filter to field data and source wavelet)
# TIME_FILT = 1 (apply low-pass filter to field data and source wavelet)
# TIME_FILT = 2 (apply band-pass filter to field data and source wavelet)
para["TIME_FILT"] = 1
# Low- (FC_LOW) and high-pass (FC_HIGH) corner frequencies of the Butterworth filter
# of order ORDER
para["FC_LOW"] = 0.0
para["FC_HIGH"] = 20.0
para["ORDER"] = 6
# Time windowing
para["TIME_WIN"] = 0
para["GAMMA"] = 20.0
para["TWIN-"] = 0.0
para["TWIN+"] = 0.0
# Starting FWI of parameter class Vp, Vs, rho, Qs from iteration number
# INV_VP_ITER, INV_VS_ITER, INV_RHO_ITER, INV_QS_ITER
para["INV_VP_ITER"] = 0
para["INV_VS_ITER"] = 0
para["INV_RHO_ITER"] = 0
para["INV_QS_ITER"] = 0
# Apply spatial Gaussian filter to gradients
# SPATFILTER = 0 (apply no filter)
# SPATFILTER = 4 (Anisotropic Gaussian filter with half-width adapted to the local wavelength)
para["SPATFILTER"] = 0
# If Gaussian filter (SPATFILTER=4), define the fraction of the local wavelength in ...
# x-direction WD_DAMP and y-direction WD_DAMP1 used to define the half-width of the
# Gaussian filter
para["WD_DAMP"] = 0.5
para["WD_DAMP1"] = 0.5
# Preconditioning of the gradient directions
# EPRECOND = 0 - no preconditioning
# EPRECOND = 1 - approximation of the Pseudo-Hessian (Shin et al. 2001)
# EPRECOND = 3 - Hessian approximation according to Plessix & Mulder (2004)
para["EPRECOND"] = 3
# Define objective function
# LNORM = 2 - L2 norm
# LNORM = 5 - global correlation norm (Choi & Alkhalifah 2012)
# LNORM = 6 - envelope objective functions after Chi, Dong and Liu (2014) - EXPERIMENTAL
# LNORM = 7 - NIM objective function after Chauris et al. (2012) and Tejero et al. (2015) - EXPERIMENTAL
para["LNORM"] = 2
# Activate Random Objective Waveform Inversion (ROWI, Pan & Gao 2020)
# ROWI = 0 - no ROWI
# ROWI = 1 - 50% GCN l2 norm / 50% AGC l2 norm (AC, PSV, SH modules only)
para["ROWI"] = 0
# Source wavelet inversion
# STF = 0 - no source wavelet inversion
# STF = 1 - estimate source wavelet by stabilized Wiener Deconvolution
para["STF"] = 0
# If OFFSETC_STF > 0, limit source wavelet inversion to maximum offsets OFFSETC_STF
para["OFFSETC_STF"] = -4.0
# Source wavelet inversion stabilization term to avoid division by zero in the Wiener deconvolution
para["EPS_STF"] = 1e-1
# Apply offset mute to field and modelled seismograms
# OFFSET_MUTE = 0 - no offset mute
# OFFSET_MUTE = 1 - mute far-offset data for offset >= OFFSETC
# OFFSET_MUTE = 2 - mute near-offset data for offset <= OFFSETC
para["OFFSET_MUTE"] = 0
para["OFFSETC"] = 10
# Scale density and Qs updates during multiparameter FWI by factors
# SCALERHO and SCALEQS, respectively
para["SCALERHO"] = 0.5
para["SCALEQS"] = 1.0
# If LNORM = 6, define type of envelope objective function (EXPERIMENTAL)
# ENV = 1 - L2 envelope objective function
# ENV = 2 - Log L2 envelope objective function
para["ENV"] = 1
# Integrate synthetic and modelled data N_ORDER times (EXPERIMENTAL)
para["N_ORDER"] = 0
# Write parameters to DENISE workflow file
write_denise_workflow(para)
"""
Explanation: 1. FWI parameters for each inversion stage
For each inversion stage, copy the cell below and define the FWI parameters. For a detailed description of the FWI workflow parameters, refer to the user manual. "Experimental" options should be used with care, as they may still contain bugs.
End of explanation
"""
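The copy-the-cell pattern described above can also be sketched as a loop over per-stage override dictionaries. This is only an illustration: the stage values below are made up, and it assumes the para dict and the write_denise_workflow helper used elsewhere in this notebook.

```python
# Sketch: derive each workflow stage from a base parameter dict plus a
# small set of overrides, instead of copying the whole cell per stage.
# The override values here are illustrative, not recommendations.
def apply_stage(base, overrides):
    """Return a copy of the base parameters with stage overrides applied."""
    stage = dict(base)
    stage.update(overrides)
    return stage

stage_overrides = [
    {"FC_HIGH": 10.0, "PRO": 0.01},  # e.g. stage 3
    {"FC_HIGH": 20.0, "PRO": 0.01},  # e.g. stage 4
]

# for overrides in stage_overrides:
#     para = apply_stage(para, overrides)
#     write_denise_workflow(para)
```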
print("mv " + model_basename + ".vp DENISE-Black-Edition/par/" + para["MFILE"] + ".vp")
print("mv " + model_basename + ".vs DENISE-Black-Edition/par/" + para["MFILE"] + ".vs")
print("mv " + model_basename + ".rho DENISE-Black-Edition/par/" + para["MFILE"] + ".rho")
"""
Explanation: Instructions for preparing and starting a modelling/FWI run with DENISE Black-Edition
(a) Move model files to the directory DENISE-Black-Edition/par/para["MFILE"]
End of explanation
"""
print("mv " + basename_src + " DENISE-Black-Edition/par/" + para["SOURCE_FILE"][2::])
"""
Explanation: You can also copy the model files to a HPC cluster using SCP.
(b) Move source file to the directory DENISE-Black-Edition/par/para["SOURCE_FILE"]
End of explanation
"""
print("mv " + basename_rec + ".dat DENISE-Black-Edition/par" + para["REC_FILE"][1::] + ".dat")
"""
Explanation: (c) Move receiver file(s) to the directory DENISE-Black-Edition/par/para["REC_FILE"]
End of explanation
"""
print("mv " + para["filename"] + " DENISE-Black-Edition/par/")
"""
Explanation: (d) Move DENISE parameter file to the directory DENISE-Black-Edition/par/
End of explanation
"""
print("mpirun -np " + str(para["NPROCX"]*para["NPROCY"]) + " ../bin/denise " + para["filename"])
"""
Explanation: (e) Within the DENISE-Black-Edition/par directory you can start the DENISE modelling run with
End of explanation
"""
print("mv " + para["filename_workflow"] + " DENISE-Black-Edition/par/")
"""
Explanation: If you want to run a FWI, you also have to define a FWI workflow file ...
(f) move the DENISE workflow file to the directory DENISE-Black-Edition/par/
End of explanation
"""
print("mpirun -np " + str(para["NPROCX"]*para["NPROCY"]) + " ../bin/denise " + para["filename"] + "\t" + para["filename_workflow"])
"""
Explanation: and run the FWI by typing
End of explanation
"""
|
cmorgan/toyplot | docs/canvas-layout.ipynb | bsd-3-clause | import numpy
y = numpy.linspace(0, 1, 20) ** 2
import toyplot
toyplot.plot(y, width=300);
"""
Explanation: .. _canvas-layout:
Canvas Layout
In Toyplot, axes (including :ref:cartesian-axes, :ref:table-axes, and others) are used to map data values into canvas coordinates. The axes range (the area on the canvas that they occupy) is specified when they are created. By default, axes are sized to fill the entire canvas:
End of explanation
"""
canvas = toyplot.Canvas(width=600, height=300)
axes1 = canvas.axes(bounds=(20, 280, 20, 280))
axes1.plot(y)
axes2 = canvas.axes(bounds=(320, 580, 20, 280))
axes2.plot(1 - y);
"""
Explanation: If you need greater control over the positioning of the axes within the canvas, or want to add multiple axes to one canvas, it's necessary to create the canvas and axes explicitly, then use the axes to plot your data. For example, you can use the bounds argument to specify explicit (xmin, xmax, ymin, ymax) bounds for the axes using canvas coordinates (note that canvas coordinates always increase from top to bottom, unlike cartesian coordinates):
End of explanation
"""
canvas = toyplot.Canvas(width=600, height=300)
axes1 = canvas.axes(bounds=(20, 280, 20, -20))
axes1.plot(y)
axes2 = canvas.axes(bounds=(-280, -20, 20, -20))
axes2.plot(1 - y);
"""
Explanation: You can also use negative values to specify values relative to the right and bottom sides of the canvas, instead of the (default) left and top sides, greatly simplifying the layout:
End of explanation
"""
canvas = toyplot.Canvas(width="20cm", height="2in")
axes1 = canvas.axes(bounds=("1cm", "5cm", "10%", "90%"))
axes1.plot(y)
axes2 = canvas.axes(bounds=("6cm", "-1cm", "10%", "90%"))
axes2.plot(1 - y);
"""
Explanation: Furthermore, the bounds parameters can use any :ref:units, including "%" units, so you can use real-world units and relative dimensioning in any combination that makes sense:
End of explanation
"""
canvas = toyplot.Canvas(width=600, height=300)
axes1 = canvas.axes(grid=(1, 2, 0))
axes1.plot(y)
axes2 = canvas.axes(grid=(1, 2, 1))
axes2.plot(1 - y);
"""
Explanation: Of course, most of the time this level of control isn't necessary. Instead, the grid argument allows us to easily position each set of axes on a regular grid that covers the canvas. Note that you can control the axes position on the grid in a variety of ways:
(rows, columns, n)
fill cell $n$ (in left-to-right, top-to-bottom order) of an $M \times N$ grid.
(rows, columns, i, j)
fill cell $i,j$ of an $M \times N$ grid.
(rows, columns, i, rowspan, j, colspan)
fill cells $[i, i + rowspan), [j, j + colspan)$ of an $M \times N$ grid.
End of explanation
"""
canvas = toyplot.Canvas(width=600, height=300)
axes1 = canvas.axes(grid=(1, 2, 0), gutter=15)
axes1.plot(y)
axes2 = canvas.axes(grid=(1, 2, 1), gutter=15)
axes2.plot(1 - y);
"""
Explanation: You can also use the gutter argument to control the space between cells in the grid:
End of explanation
"""
x = numpy.random.normal(size=100)
y = numpy.random.normal(size=100)
canvas = toyplot.Canvas(width="5in")
canvas.axes().plot(numpy.linspace(0, 1) ** 0.5)
canvas.axes(corner=("bottom-right", "1in", "1.5in", "1.5in")).scatterplot(x, y);
"""
Explanation: Sometimes, particularly when embedding axes to produce a figure-within-a-figure, the corner argument can be used to position axes relative to one of eight "corner" positions within the canvas. The corner argument takes a (position, inset, width, height) tuple:
End of explanation
"""
canvas = toyplot.Canvas(width="10cm")
for position in ["top-left", "top", "top-right", "right", "bottom-right", "bottom", "bottom-left", "left"]:
canvas.axes(corner=(position, "1cm", "2cm", "2cm"), label=position)
"""
Explanation: Here are all the positions supported by the corner argument:
End of explanation
"""
|
PyDataMallorca/WS_Introduction_to_data_science | ml_miguel/quien_es_quien.ipynb | gpl-3.0 | import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (15.0, 6.0)
import numpy as np
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
from IPython.display import Image
"""
Explanation: What is the best strategy for guessing?
By Miguel Escalona
End of explanation
"""
Image('data/guess_who_board.jpg', width=700)
"""
Explanation: Guess Who!
The game Guess Who consists of guessing the character your opponent has selected before he/she guesses yours.
The game works like this:
* Each player picks a character at random.
* Taking turns, each player asks yes/no questions and tries to guess the opponent's character.
* Valid questions are based on the characters' appearance and should be easy to answer.
* Example of a valid question: Does the character have black hair?
* Example of an invalid question: Does the character look like an ex-convict?
Next, we load the board with the characters.
End of explanation
"""
# Load the pandas module
# Write your code here to load the data (use read_csv); name the result df
df =
df.head()
"""
Explanation: 1. Loading the data
To load the data we will use pandas' read_csv function. Pandas offers a wide range of data-loading functions; more information is in the API documentation.
End of explanation
"""
# Separate the variable types
categorical_var = 'color de cabello'
binary_vars = list(set(df.keys()) - set([categorical_var, 'NOMBRE']))
# *** Write your code here ***
# For the boolean variables, compute the sum
# *** Write your code here ***
# For the categorical variables, look at the frequency of each category
"""
Explanation: 2. How many characters do we have with each feature?
End of explanation
"""
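A hedged sketch of one possible solution to the counting exercise above (it assumes the df, binary_vars and categorical_var names defined in the previous cell):

```python
# Possible solution sketch for the counting exercise.
# `df`, `binary_vars` and `categorical_var` are assumed from the previous cell.
import pandas as pd

def count_features(df, binary_vars, categorical_var):
    binary_counts = df[binary_vars].sum()                 # characters per binary feature
    category_counts = df[categorical_var].value_counts()  # frequency of each hair color
    return binary_counts, category_counts
```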
# *** Write your code here ***
"""
Explanation: Question: how many people have a big mouth?
End of explanation
"""
# *** Write your code here ***
"""
Explanation: And how many of those are men?
End of explanation
"""
labels = df['NOMBRE']
del df['NOMBRE']
df.head()
# inspect the target
"""
Explanation: 3. Separate the target from the features
End of explanation
"""
from sklearn.feature_extraction import DictVectorizer
vectorizer = DictVectorizer(sparse=False)
ab=vectorizer.fit_transform(df.to_dict('records'))
dft = pd.DataFrame(ab, columns=vectorizer.get_feature_names())
dft.head().T
"""
Explanation: 4. Encoding categorical variables
End of explanation
"""
from sklearn.tree import DecisionTreeClassifier
clasificador = DecisionTreeClassifier(criterion='entropy', random_state=42)
clasificador.fit(dft, labels)
"""
Explanation: 5. Training a decision tree
End of explanation
"""
feat = pd.DataFrame(index=dft.keys(), data=clasificador.feature_importances_, columns=['score'])
feat = feat.sort_values(by='score', ascending=False)
# plot feat to see the most relevant variables
"""
Explanation: 5.1 Getting the weight of each feature
End of explanation
"""
from sklearn.tree import export_graphviz
dotfile = open('quien_es_quien_tree.dot', 'w')
export_graphviz(
clasificador,
out_file = dotfile,
filled=True,
feature_names = dft.columns,
class_names=list(labels),
rotate=True,
max_depth=None,
rounded=True,
)
dotfile.close()
!dot -Tpng quien_es_quien_tree.dot -o quien_es_quien_tree.png
Image('quien_es_quien_tree.png', width=1000)
"""
Explanation: 6. Visualizing the tree (requires graphviz)
If you don't have it installed, you can run
conda install graphviz
in a terminal
End of explanation
"""
# Pick a character by its observation number
observacion_numero = 17
mi_personaje = dft.iloc[observacion_numero]
mi_personaje
personaje = clasificador.predict(mi_personaje)[0]
print('El personaje elegido es: ' + personaje + ' y en realidad es: ' + labels[observacion_numero+1])
"""
Explanation: 7. Time to play!
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(criterion='entropy', random_state=42)
rfc.fit(dft, labels)
"""
Explanation: 8. This gets even better!!!
Let's try another scikit-learn classifier
End of explanation
"""
new_per = np.zeros(len(dft.keys()))
nuevo_personaje = pd.DataFrame(index=dft.keys(), data=new_per, columns=['features']).T
nuevo_personaje.T
def modifica_feature_de_personaje(data, feature, nuevo_valor=1.0):
data[feature] = nuevo_valor
return data
Image('data/guess_who_board.jpg', width=700)
"""
Explanation: Now we create a new character, with whatever features we want...
End of explanation
"""
nuevo_personaje = modifica_feature_de_personaje(nuevo_personaje, 'bigote', 1.0)
nuevo_personaje.T
"""
Explanation: We can modify our character's features by calling the modifica_feature_de_personaje function
End of explanation
"""
print('El arbol de decision dice que es: ' + clasificador.predict(nuevo_personaje)[0])
print('El random forest cree que es: ' + rfc.predict(nuevo_personaje)[0])
"""
Explanation: Let's compare what each classifier says
End of explanation
"""
ind = range(24)
plt.bar(ind,rfc.predict_proba(nuevo_personaje)[0])
plt.xticks(ind, labels.values, rotation='vertical')
plt.show()
"""
Explanation: Let's look at the random forest's probabilities
End of explanation
"""
|
ghvn7777/ghvn7777.github.io | content/fluent_python/12_inherit.ipynb | apache-2.0 | class DoppelDict(dict):
def __setitem__(self, key, value):
super().__setitem__(key, [value] * 2)
dd = DoppelDict(one=1)
dd # dict's inherited __init__ ignored our overridden __setitem__: the value of 'one' is not duplicated
dd['two'] = 2 # The [] operator calls our overridden __setitem__
dd
dd.update(three=3) # The update method inherited from dict also does not call our overridden __setitem__
dd
"""
Explanation: This chapter discusses inheritance and subclassing, focusing on two details that are especially important in Python:
the pitfalls of subclassing built-in types
multiple inheritance and method resolution order
We will explore multiple inheritance through two important Python projects: the GUI toolkit Tkinter and the web framework Django.
We will first analyze the problems of subclassing built-in types, then discuss multiple inheritance, examining good and bad class hierarchy design through case studies.
Subclassing built-in types is tricky
Before Python 2.2, built-in types (such as list and dict) could not be subclassed. Since then they can, but with an important caveat: built-in types (implemented in C) do not call special methods overridden by user-defined classes.
CPython makes no official promise about whether overridden methods in subclasses of built-in types are called implicitly; in practice, the methods of built-in types do not call methods overridden in subclasses. For example, a __getitem__() override in a dict subclass is not called by the built-in get() method. The following illustrates the problem:
The built-in dict's __init__ and update methods ignore our overridden __setitem__ method
End of explanation
"""
class AnswerDict(dict):
def __getitem__(self, key):
return 42
ad = AnswerDict(a='foo')
ad['a'] # returns 42, as expected
d = {}
d.update(ad) # d is an instance of dict; update d with values from ad
d['a'] # dict.update ignored AnswerDict.__getitem__
"""
Explanation: This behavior of native types violates a basic principle of object-oriented programming: the search for a method should always start from the class of the instance (self), even when the call is made inside a method implemented in a superclass. In this sad state of affairs, __missing__ does work as expected (Section 3.4), but that is a special case.
The problem is not limited to calls within an instance (self.get() does not call self.__getitem__()): methods of built-in types that call methods of other classes also skip overridden versions. Here is an example, adapted from the PyPy documentation:
The dict.update method ignores AnswerDict.__getitem__
End of explanation
"""
import collections
class DoppelDict2(collections.UserDict):
def __setitem__(self, key, value):
super().__setitem__(key, [value] * 2)
dd = DoppelDict2(one=1)
dd
dd['two'] = 2
dd
dd.update(three=3)
dd
class AnswerDict2(collections.UserDict):
def __getitem__(self, key):
return 42
ad = AnswerDict2(a='foo')
ad['a']
d = {}
d.update(ad)
d['a']
d
ad # added by me: this still seems a bit off, but the call results match expectations
"""
Explanation: Subclassing built-in types like dict, list, or str directly is error-prone because the built-in methods mostly ignore user-defined overrides. Instead of subclassing the built-ins, derive your classes from UserDict, UserList, and UserString in the collections module; these classes are designed to be easily extended.
If you subclass collections.UserDict instead, the problems exposed above go away:
End of explanation
"""
class A:
def ping(self):
print('ping', self)
class B(A):
def pong(self):
print('pong', self)
class C(A):
def pong(self):
print('PONG', self)
class D(B, C):
def ping(self):
super().ping()
print('post-ping:', self)
def pingpong(self):
self.ping()
super().ping()
self.pong()
        super().pong()
C.pong(self)
"""
Explanation: To summarize, the problems described in this section apply only to method delegation within the C implementation of the built-in types, and they affect only user-defined classes derived directly from those types. If you subclass a class written in Python, such as UserDict or MutableMapping, you will not be troubled by this.
Multiple inheritance and method resolution order
Any language implementing multiple inheritance must deal with potential naming conflicts arising when unrelated ancestor classes implement a method by the same name. This is known as the "diamond problem".
End of explanation
"""
d = D()
d.pong() # Calling d.pong() directly runs the version in class B
C.pong(d) # Methods of a superclass can also be called directly, passing the instance as an explicit argument
"""
Explanation: Both B and C implement a pong method; the only difference is what they print. Which pong is called by d.pong() on an instance of D? In C++, the programmer must qualify method calls with class names to resolve the ambiguity. This can be done in Python as well:
End of explanation
"""
D.__mro__
"""
Explanation: Python can tell which method d.pong() should invoke because it traverses the inheritance graph in a specific order, called the Method Resolution Order (MRO). Every class has an attribute named __mro__ holding a tuple that lists its superclasses in method resolution order, from the current class all the way up to object. The __mro__ of class D looks like this:
End of explanation
"""
def ping(self):
    A.ping(self) # instead of super().ping()
print('post-ping', self)
"""
Explanation: The recommended way to delegate method calls to superclasses is the super() built-in function. In Python 3 this became even easier, as shown in the pingpong method of class D above. However, it is sometimes convenient to bypass the method resolution order and invoke a method on a superclass directly. For example, D.ping could be written as:
End of explanation
"""
d = D()
d.ping() # Two lines of output: the first from super() in class A, the second from class D
"""
Explanation: Note that when calling an instance method directly on a class, you must pass self explicitly, because you are accessing an unbound method.
However, it is safest and most future-proof to use super(), especially when calling methods of a framework, or of any class hierarchy you do not control. Calls made with super() follow the method resolution order, as shown here:
End of explanation
"""
d.pingpong() # The last call goes straight to the pong implementation in class C, ignoring the MRO
"""
Explanation: Now let's look at the result of calling the pingpong method on an instance of D:
End of explanation
"""
bool.__mro__
def print_mro(cls):
print(', '.join(c.__name__ for c in cls.__mro__))
print_mro(bool)
import numbers
print_mro(numbers.Integral)
import io
print_mro(io.BytesIO)
print_mro(io.TextIOWrapper)
"""
Explanation: The method resolution order takes into account not only the inheritance graph but also the order in which superclasses are listed in the subclass declaration. In other words, if D were declared as class D(C, B), its __mro__ would be different: C would be searched before B.
When analyzing a class, we often need to inspect its __mro__ attribute. Here is the method search order for some common classes:
End of explanation
"""
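A small sketch of the point about declaration order, using throwaway classes (not from the book): swapping the order of the bases changes the MRO.

```python
# Declaration order of superclasses changes the method resolution order.
class Base: pass
class Left(Base): pass
class Right(Base): pass
class FirstLeft(Left, Right): pass
class FirstRight(Right, Left): pass

print([c.__name__ for c in FirstLeft.__mro__])   # ['FirstLeft', 'Left', 'Right', 'Base', 'object']
print([c.__name__ for c in FirstRight.__mro__])  # ['FirstRight', 'Right', 'Left', 'Base', 'object']
```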
import tkinter
print_mro(tkinter.Text)
"""
Explanation: Before wrapping up method resolution, let's look at the complex multiple inheritance in Tkinter:
End of explanation
"""
|
lileiting/goatools | notebooks/semantic_similarity.ipynb | bsd-2-clause | %load_ext autoreload
%autoreload 2
import sys
sys.path.insert(0, "..")
from goatools import obo_parser
go = obo_parser.GODag("../go-basic.obo")
go_id3 = 'GO:0048364'
go_id4 = 'GO:0044707'
print(go[go_id3])
print(go[go_id4])
"""
Explanation: Computing basic semantic similarities between GO terms
Adapted from book chapter written by Alex Warwick Vesztrocy and Christophe Dessimoz
In this section we look at how to compute semantic similarity between GO terms. First we need to write a function that calculates the minimum number of branches connecting two GO terms.
End of explanation
"""
from goatools.associations import read_gaf
associations = read_gaf("http://geneontology.org/gene-associations/gene_association.tair.gz")
"""
Explanation: Let's get all the annotations from arabidopsis.
End of explanation
"""
from goatools.semantic import semantic_similarity
sim = semantic_similarity(go_id3, go_id4, go)
print('The semantic similarity between terms {} and {} is {}.'.format(go_id3, go_id4, sim))
"""
Explanation: Now we can calculate the semantic distance and semantic similarity, as so:
End of explanation
"""
from goatools.semantic import TermCounts, get_info_content
# First get the counts of each GO term.
termcounts = TermCounts(go, associations)
# Calculate the information content
go_id = "GO:0048364"
infocontent = get_info_content(go_id, termcounts)
print('Information content ({}) = {}'.format(go_id, infocontent))
"""
Explanation: Then we can calculate the information content of the single term, <code>GO:0048364</code>.
End of explanation
"""
from goatools.semantic import resnik_sim
sim_r = resnik_sim(go_id3, go_id4, go, termcounts)
print('Resnik similarity score ({}, {}) = {}'.format(go_id3, go_id4, sim_r))
"""
Explanation: Resnik's similarity measure is defined as the information content of the most informative common ancestor. That is, the most specific common parent-term in the GO. Then we can calculate this as follows:
End of explanation
"""
from goatools.semantic import lin_sim
sim_l = lin_sim(go_id3, go_id4, go, termcounts)
print('Lin similarity score ({}, {}) = {}'.format(go_id3, go_id4, sim_l))
"""
Explanation: Lin's similarity measure is defined as:
$$ \textrm{sim}_{\textrm{Lin}}(t_{1}, t_{2}) = \frac{2\,\textrm{sim}_{\textrm{Resnik}}(t_1, t_2)}{IC(t_1) + IC(t_2)} $$
Then we can calculate this as
End of explanation
"""
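As a hedged illustration of Lin's formula, here is a toy computation of Lin similarity from a Resnik score and two information contents (not goatools' own implementation; the IC values are made up):

```python
def lin_from_resnik(resnik_score, ic1, ic2):
    """Toy Lin similarity: twice the Resnik score over the summed information contents."""
    return 2.0 * resnik_score / (ic1 + ic2)

# If both terms have information content 4.0 and their most informative
# common ancestor has IC 3.0 (so the Resnik score is 3.0):
print(lin_from_resnik(3.0, 4.0, 4.0))  # 0.75
```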
|
tensorflow/docs-l10n | site/en-snapshot/guide/tensor_slicing.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
import numpy as np
"""
Explanation: Introduction to tensor slicing
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/tensor_slicing"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/tensor_slicing.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/tensor_slicing.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/tensor_slicing.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
When working on ML applications such as object detection and NLP, it is sometimes necessary to work with sub-sections (slices) of tensors. For example, if your model architecture includes routing, where one layer might control which training example gets routed to the next layer. In this case, you could use tensor slicing ops to split the tensors up and put them back together in the right order.
In NLP applications, you can use tensor slicing to perform word masking while training. For example, you can generate training data from a list of sentences by choosing a word index to mask in each sentence, taking the word out as a label, and then replacing the chosen word with a mask token.
In this guide, you will learn how to use the TensorFlow APIs to:
Extract slices from a tensor
Insert data at specific indices in a tensor
This guide assumes familiarity with tensor indexing. Read the indexing sections of the Tensor and TensorFlow NumPy guides before getting started with this guide.
Setup
End of explanation
"""
t1 = tf.constant([0, 1, 2, 3, 4, 5, 6, 7])
print(tf.slice(t1,
begin=[1],
size=[3]))
"""
Explanation: Extract tensor slices
Perform NumPy-like tensor slicing using tf.slice.
End of explanation
"""
print(t1[1:4])
"""
Explanation: Alternatively, you can use a more Pythonic syntax. Note that tensor slices are evenly spaced over a start-stop range.
End of explanation
"""
print(t1[-3:])
"""
Explanation: <img src="images/tf_slicing/slice_1d_1.png">
End of explanation
"""
t2 = tf.constant([[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]])
print(t2[:-1, 1:3])
"""
Explanation: <img src="images/tf_slicing/slice_1d_2.png">
For 2-dimensional tensors,you can use something like:
End of explanation
"""
t3 = tf.constant([[[1, 3, 5, 7],
[9, 11, 13, 15]],
[[17, 19, 21, 23],
[25, 27, 29, 31]]
])
print(tf.slice(t3,
begin=[1, 1, 0],
size=[1, 1, 2]))
"""
Explanation: <img src="images/tf_slicing/slice_2d_1.png">
You can use tf.slice on higher dimensional tensors as well.
End of explanation
"""
print(tf.gather(t1,
indices=[0, 3, 6]))
# This is similar to doing
t1[::3]
"""
Explanation: You can also use tf.strided_slice to extract slices of tensors by 'striding' over the tensor dimensions.
Use tf.gather to extract specific indices from a single axis of a tensor.
End of explanation
"""
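A minimal sketch of the tf.strided_slice call mentioned above (the tensor and stride values are illustrative):

```python
import tensorflow as tf

t = tf.constant([0, 1, 2, 3, 4, 5, 6, 7])
# Take every second element from index 1 (inclusive) to index 7 (exclusive).
print(tf.strided_slice(t, begin=[1], end=[7], strides=[2]))  # values [1 3 5]
```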
alphabet = tf.constant(list('abcdefghijklmnopqrstuvwxyz'))
print(tf.gather(alphabet,
indices=[2, 0, 19, 18]))
"""
Explanation: <img src="images/tf_slicing/slice_1d_3.png">
tf.gather does not require indices to be evenly spaced.
End of explanation
"""
t4 = tf.constant([[0, 5],
[1, 6],
[2, 7],
[3, 8],
[4, 9]])
print(tf.gather_nd(t4,
indices=[[2], [3], [0]]))
"""
Explanation: <img src="images/tf_slicing/gather_1.png">
To extract slices from multiple axes of a tensor, use tf.gather_nd. This is useful when you want to gather the elements of a matrix as opposed to just its rows or columns.
End of explanation
"""
t5 = np.reshape(np.arange(18), [2, 3, 3])
print(tf.gather_nd(t5,
indices=[[0, 0, 0], [1, 2, 1]]))
# Return a list of two matrices
print(tf.gather_nd(t5,
indices=[[[0, 0], [0, 2]], [[1, 0], [1, 2]]]))
# Return one matrix
print(tf.gather_nd(t5,
indices=[[0, 0], [0, 2], [1, 0], [1, 2]]))
"""
Explanation: <img src="images/tf_slicing/gather_2.png">
End of explanation
"""
t6 = tf.constant([10])
indices = tf.constant([[1], [3], [5], [7], [9]])
data = tf.constant([2, 4, 6, 8, 10])
print(tf.scatter_nd(indices=indices,
updates=data,
shape=t6))
"""
Explanation: Insert data into tensors
Use tf.scatter_nd to insert data at specific slices/indices of a tensor. Note that the tensor into which you insert values is zero-initialized.
End of explanation
"""
# Gather values from one tensor by specifying indices
new_indices = tf.constant([[0, 2], [2, 1], [3, 3]])
t7 = tf.gather_nd(t2, indices=new_indices)
"""
Explanation: Methods like tf.scatter_nd which require zero-initialized tensors are similar to sparse tensor initializers. You can use tf.gather_nd and tf.scatter_nd to mimic the behavior of sparse tensor ops.
Consider an example where you construct a sparse tensor using these two methods in conjunction.
End of explanation
"""
# Add these values into a new tensor
t8 = tf.scatter_nd(indices=new_indices, updates=t7, shape=tf.constant([4, 5]))
print(t8)
"""
Explanation: <img src="images/tf_slicing/gather_nd_sparse.png">
End of explanation
"""
t9 = tf.SparseTensor(indices=[[0, 2], [2, 1], [3, 3]],
values=[2, 11, 18],
dense_shape=[4, 5])
print(t9)
# Convert the sparse tensor into a dense tensor
t10 = tf.sparse.to_dense(t9)
print(t10)
"""
Explanation: This is similar to:
End of explanation
"""
t11 = tf.constant([[2, 7, 0],
[9, 0, 1],
[0, 3, 8]])
# Convert the tensor into a magic square by inserting numbers at appropriate indices
t12 = tf.tensor_scatter_nd_add(t11,
indices=[[0, 2], [1, 1], [2, 0]],
updates=[6, 5, 4])
print(t12)
"""
Explanation: To insert data into a tensor with pre-existing values, use tf.tensor_scatter_nd_add.
End of explanation
"""
# Convert the tensor into an identity matrix
t13 = tf.tensor_scatter_nd_sub(t11,
indices=[[0, 0], [0, 1], [1, 0], [1, 1], [1, 2], [2, 1], [2, 2]],
updates=[1, 7, 9, -1, 1, 3, 7])
print(t13)
"""
Explanation: Similarly, use tf.tensor_scatter_nd_sub to subtract values from a tensor with pre-existing values.
End of explanation
"""
t14 = tf.constant([[-2, -7, 0],
[-9, 0, 1],
[0, -3, -8]])
t15 = tf.tensor_scatter_nd_min(t14,
indices=[[0, 2], [1, 1], [2, 0]],
updates=[-6, -5, -4])
print(t15)
"""
Explanation: Use tf.tensor_scatter_nd_min to copy element-wise minimum values from one tensor to another.
End of explanation
"""
t16 = tf.tensor_scatter_nd_max(t14,
indices=[[0, 2], [1, 1], [2, 0]],
updates=[6, 5, 4])
print(t16)
"""
Explanation: Similarly, use tf.tensor_scatter_nd_max to copy element-wise maximum values from one tensor to another.
End of explanation
"""
|
pjbull/data-science-is-software | notebooks/data-science-is-software-talk.ipynb | mit | # install the watermark extension
!pip install watermark
# once it is installed, you'll just need this in future notebooks:
%load_ext watermark
%watermark -a "Peter Bull" -d -v -p numpy,pandas -g
"""
Explanation: <table style="width:100%; border: 0px solid black;">
<tr style="width: 100%; border: 0px solid black;">
<td style="width:75%; border: 0px solid black;">
<a href="http://www.drivendata.org">
<img src="https://s3.amazonaws.com/drivendata.org/kif-example/img/dd.png" />
</a>
</td>
<td style="width:20%; border: 0px solid black;">
<strong>Peter Bull</strong> <br>
<strong>Data Scientist</strong> <br>
<a target=_blank href="http://www.drivendata.org">DrivenData</a>
</td>
</tr>
</table>
Data Science is Software: Developer #lifehacks for the Jupyter Data Scientist
21 May 2016
1. This is my house
Environment reproducibility for Python
1.1 The watermark extension
Tell everyone when your notebook was run, and with which packages. This is especially useful for nbview, blog posts, and other media where you are not sharing the notebook as executable code.
End of explanation
"""
!head -n 20 ../requirements.txt
"""
Explanation: 1.2 Laying the foundation
virtualenv and virtualenvwrapper give you a new foundation.
Start from "scratch" on each project
Choose Python 2 or 3 as appropriate
Packages are cached locally, so no need to wait for download/compile on every new env
Installation is as easy as:
- pip install virtualenv
- pip install virtualenvwrapper
- Add the following lines to ~/.bashrc:
export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=$HOME/Devel
source /usr/local/bin/virtualenvwrapper.sh
To create a virtual environment:
mkvirtualenv <name>
To work in a particular virtual environment:
workon <name>
To leave a virtual environment:
deactivate
#lifehack: create a new virtual environment for every project you work on
#lifehack: if you use anaconda to manage packages using mkvirtualenv --system-site-packages <name> means you don't have to recompile large packages
1.3 The pip requirements.txt file
Track your MRE, "Minimum reproducible environment" in a requirements.txt file
#lifehack: never again run pip install <package>. Instead, update requirements.txt and run pip install -r requirements.txt
#lifehack: for data science projects, favor package>=0.0.0 rather than package==0.0.0. This works well with the --system-site-packages flag so you don't have many versions of large packages with complex dependencies sitting around (e.g., numpy, scipy, pandas)
End of explanation
"""
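As a hedged sketch of the day-to-day loop this lifehack implies (the package pin below is just an example):

```shell
# Record the new dependency in requirements.txt instead of pip-installing it ad hoc:
echo "seaborn>=0.7" >> requirements.txt
# Then reinstall the project's environment in one reproducible step:
# pip install -r requirements.txt
```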
! tree ..
"""
Explanation: 2. The Life-Changing Magic of Tidying Up
2.1 Consistent project structure means
relative paths work
other collaborators know what to expect
order of scripts is self-documenting
End of explanation
"""
import pandas as pd
df = pd.read_csv("../data/water-pumps.csv")
df.head(1)
## Try adding parameter index=0
pd.read_csv?
df = pd.read_csv("../data/water-pumps.csv",
index_col=0,
parse_dates=["date_recorded"])
df.head(1)
"""
Explanation: 3. Edit-run-repeat: how to stop the cycle of pain
The goal: don't edit, execute and verify any more. How close can we get to code succeeding the first or second time you run it? It's a fine way to start a project, but it doesn't scale as code runs longer and gets more complex.
3.1 No more docs-guessing
Don't edit-run-repeat to try to remember the name of a function or argument.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
plot_data = df['construction_year']
plot_data = plot_data[plot_data != 0]
sns.kdeplot(plot_data, bw=0.1)
plt.show()
plot_data = df['longitude']
plot_data = plot_data[plot_data != 0]
sns.kdeplot(plot_data, bw=0.1)
plt.show()
## Paste for 'amount_tsh' and plot
## Paste for 'latitude' and plot
def kde_plot(dataframe, variable, upper=0.0, lower=0.0, bw=0.1):
plot_data = dataframe[variable]
plot_data = plot_data[(plot_data > lower) & (plot_data < upper)]
sns.kdeplot(plot_data, bw=bw)
plt.show()
kde_plot(df, 'construction_year', upper=2016)
kde_plot(df, 'longitude', upper=42)
kde_plot(df, 'amount_tsh', upper=400000)
"""
Explanation: #lifehack: in addition to the ? operator, the Jupyter notebook has great intelligent code completion; try tab when typing the name of a function, try shift+tab when inside a method call
3.2 No more copy pasta
Don't repeat yourself.
End of explanation
"""
kde_plot(df, 'date_recorded')
%debug
# "1" turns pdb on, "0" turns pdb off
%pdb 1
kde_plot(df, 'date_recorded')
# turn off debugger
%pdb 0
"""
Explanation: 3.3 No more guess-and-check
Interrupt execution with:
- %debug magic: drops you out into the most recent error stacktrace in pdb
- import q;q.d(): drops you into pdb, even outside of IPython
Interrupt execution on an Exception with %pdb magic. Use pdb the Python debugger to debug inside a notebook. Key commands for pdb are:
p: Evaluate and print Python code
w: Where in the stack trace am I?
u: Go up a frame in the stack trace.
d: Go down a frame in the stack trace.
c: Continue execution
q: Stop execution
End of explanation
"""
import numpy as np
def gimme_the_mean(series):
return np.mean(series)
assert gimme_the_mean([0.0]*10) == 0.0
assert gimme_the_mean(range(10)) == 4.5  # np.mean(range(10)) averages 0..9
"""
Explanation: #lifehack: %debug and %pdb are great, but pdb can be clunky. Try the 'q' module. Adding the line import q;q.d() anywhere in a project gives you a normal python console at that point. This is great if you're running outside of IPython.
3.4 No more "Restart & Run All"
assert is the poor man's unit test: stops execution if condition is False, continues silently if True
End of explanation
"""
import os
import sys
# add the 'src' directory as one where we can import modules
src_dir = os.path.join(os.getcwd(), os.pardir, 'src')
sys.path.append(src_dir)
# import my method from the source code
from preprocess.build_features import remove_invalid_data
df = remove_invalid_data("../data/water-pumps.csv")
df.shape
# TRY ADDING print("lalalala") to the method
df = remove_invalid_data("../data/water-pumps.csv")
"""
Explanation: 3.5 No more copy-pasta between notebooks
Have a method that gets used in multiple notebooks? Refactor it into a separate .py file so it can live a happy life!
Note: In order to import your local modules, you must do three things:
put the .py file in a separate folder
add an empty __init__.py file to the folder
add that folder to the Python path with sys.path.append
End of explanation
"""
# Load the "autoreload" extension
%load_ext autoreload
# always reload modules marked with "%aimport"
%autoreload 1
import os
import sys
# add the 'src' directory as one where we can import modules
src_dir = os.path.join(os.getcwd(), os.pardir, 'src')
sys.path.append(src_dir)
# import my method from the source code
%aimport preprocess.build_features
from preprocess.build_features import remove_invalid_data
df = remove_invalid_data("../data/water-pumps.csv")
df.head()
"""
Explanation: Restart the kernel, let's try this again....
End of explanation
"""
%run ../src/preprocess/tests.py
"""
Explanation: #lifehack: reloading modules in a running kernel is tricky business. If you use %autoreload when developing, restart the kernel and run all cells when you're done.
3.6 I'm too good! Now this code is useful to other projects!
Importing local code is great if you want to use it in multiple notebooks, but once you want to use the code in multiple projects or repositories, it gets complicated. This is when we get serious about isolation!
We can build a python package to solve that! In fact, there is a cookiecutter to create Python packages.
Once we create this package, we can install it in "editable" mode, which means that as we change the code the changes will get picked up if the package is used. The process looks like
cookiecutter https://github.com/kragniz/cookiecutter-pypackage-minimal
cd package_name
pip install -e .
Now we can have a separate repository for this code and it can be used across projects without having to maintain code in multiple places.
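For orientation, the generated package's setup.py typically looks something like the following minimal sketch (the package name, version, and requirements here are placeholders, not what the cookiecutter actually emits):

```python
# setup.py -- a minimal, hypothetical sketch of an installable package
from setuptools import setup, find_packages

setup(
    name='my_ds_utils',          # placeholder package name
    version='0.1.0',             # placeholder version
    packages=find_packages(),    # discover all packages with __init__.py files
    install_requires=['pandas>=0.18'],  # illustrative dependency pin
)
```

With a file like this in place, pip install -e . installs the package in editable mode so code changes are picked up immediately.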
3.7 No more letting other people (including future you) break your toys
unittest is a unit testing framework that is built into Python. See src/preprocess/tests.py for an example.
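As a minimal sketch of what such a test file can look like (the class and test names below are illustrative, not the actual contents of src/preprocess/tests.py):

```python
import unittest

import numpy as np


def gimme_the_mean(series):
    return np.mean(series)


class TestGimmeTheMean(unittest.TestCase):
    def test_all_zeros(self):
        # the mean of ten zeros is zero
        self.assertEqual(gimme_the_mean([0.0] * 10), 0.0)

    def test_range(self):
        # np.mean(range(10)) averages 0..9, which is 4.5
        self.assertEqual(gimme_the_mean(range(10)), 4.5)


if __name__ == '__main__':
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestGimmeTheMean)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    print('success:', result.wasSuccessful())
```

Running the file executes every test_* method and reports failures individually, which is friendlier than a chain of bare asserts.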
End of explanation
"""
data = np.random.normal(0.0, 1.0, 1000000)
assert gimme_the_mean(data) == 0.0
np.testing.assert_almost_equal(gimme_the_mean(data),
0.0,
decimal=1)
a = np.random.normal(0, 0.0001, 10000)
b = np.random.normal(0, 0.0001, 10000)
np.testing.assert_array_equal(a, b)
np.testing.assert_array_almost_equal(a, b, decimal=3)
"""
Explanation: #lifehack: test your code.
3.8 Special treats for datascience testing
numpy.testing
Provides useful assertion methods for values that are numerically close and for numpy arrays.
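A complementary helper worth knowing is numpy.testing.assert_allclose, which takes relative/absolute tolerances instead of a number of decimal places; a small sketch:

```python
import numpy as np

# classic floating-point surprise: 0.1 + 0.2 != 0.3 exactly,
# so a plain == assertion would fail here
np.testing.assert_allclose(0.1 + 0.2, 0.3, rtol=1e-9)

# it also works element-wise on arrays
a = np.array([1.0, 2.0, 3.0])
np.testing.assert_allclose(a, a + 1e-12, atol=1e-9)
```

The tolerance check is |actual - desired| <= atol + rtol * |desired|, so you can express "close enough" in whichever terms fit your data.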
End of explanation
"""
import engarde.decorators as ed
test_data = pd.DataFrame({'a': np.random.normal(0, 1, 100),
'b': np.random.normal(0, 1, 100)})
@ed.none_missing()
def process(dataframe):
dataframe.loc[10, 'a'] = np.nan
return dataframe
process(test_data).head()
"""
Explanation: engarde decorators
A new library that lets you practice defensive programming, specifically with pandas DataFrame objects. It provides a set of decorators that check the return value of any function that returns a DataFrame and confirm that it conforms to the rules.
End of explanation
"""
!cat ../.env
import os
from dotenv import load_dotenv, find_dotenv
# find .env automagically by walking up directories until it's found
dotenv_path = find_dotenv(usecwd=True)
# load up the entries as environment variables
load_dotenv(dotenv_path)
api_key = os.environ.get("API_KEY")
api_key
"""
Explanation: engarde has an awesome set of decorators:
none_missing - no NaNs (great for machine learning--sklearn does not care for NaNs)
has_dtypes - make sure the dtypes are what you expect
verify - runs an arbitrary function on the dataframe
verify_all - makes sure every element returns true for a given function
More can be found in the docs.
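To see why this style of check is cheap to write yourself, here is a rough sketch of what a none_missing-style decorator does under the hood. Note this is an illustration, not engarde's actual implementation (engarde's decorators are also applied with parentheses, e.g. @ed.none_missing(), whereas this simplified version is a bare decorator):

```python
import numpy as np
import pandas as pd


def none_missing(func):
    # wrap func and assert that the DataFrame it returns contains no NaNs
    def wrapper(*args, **kwargs):
        df = func(*args, **kwargs)
        assert not df.isnull().any().any(), 'DataFrame contains missing values'
        return df
    return wrapper


@none_missing
def clean(dataframe):
    return dataframe.dropna()


df = pd.DataFrame({'a': [1.0, np.nan, 3.0]})
clean(df)  # passes: dropna removed the NaN row
```

If the decorated function ever returns a DataFrame with a NaN, the assertion fires immediately at the source of the bad data instead of somewhere downstream.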
#lifehack: test your data science code.
3.9 Keep your secrets to yourself
We've all seen secrets: passwords, database URLs, API keys checked in to GitHub. Don't do it! Even on a private repo. What's the easiest way to manage these secrets outside of source control? Store them as a .env file that lives in your repository, but is not in source control (e.g., add .env to your .gitignore file).
A package called python-dotenv manages this for you easily.
End of explanation
"""
from IPython.display import IFrame
IFrame("htmlcov/index.html", 800, 300)
"""
Explanation: 4. Next-level code inspection
4.1 Code coverage
coverage.py is an amazing tool for seeing what code gets executed when you run your test suite. You can run these commands to generate a code coverage report:
coverage run --source ../src/ ../src/preprocess/tests.py
coverage html
coverage report
End of explanation
"""
import numpy as np
from mcmc.hamiltonian import hamiltonian, run_diagnostics
f = lambda X: np.exp(-100*(np.sqrt(X[:,1]**2 + X[:,0]**2)- 1)**2 + (X[:,0]-1)**3 - X[:,1] - 5)
# potential and kinetic energies
U = lambda q: -np.log(f(q))
K = lambda p: p.dot(p.T) / 2
# gradient of the potential energy
def grad_U(X):
x, y = X[0,:]
xy_sqrt = np.sqrt(y**2 + x**2)
mid_term = 100*2*(xy_sqrt - 1)
grad_x = 3*((x-1)**2) - mid_term * ((x) / (xy_sqrt))
grad_y = -1 - mid_term * ((y) / (xy_sqrt))
return -1*np.array([grad_x, grad_y]).reshape(-1, 2)
ham_samples, H = hamiltonian(5000, U, K, grad_U)
run_diagnostics(ham_samples)
%prun ham_samples, H = hamiltonian(5000, U, K, grad_U)
run_diagnostics(ham_samples)
"""
Explanation: 4.2 Code profiling
Sometimes your code is slow. See which functions are called, how many times, and how long they take!
The %prun magic reports these to you right in the Jupyter notebook!
End of explanation
"""
|
wbinventor/openmc | examples/jupyter/nuclear-data.ipynb | mit | %matplotlib inline
import os
from pprint import pprint
import shutil
import subprocess
import urllib.request
import h5py
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm
from matplotlib.patches import Rectangle
import openmc.data
"""
Explanation: In this notebook, we will go through the salient features of the openmc.data package in the Python API. This package enables inspection, analysis, and conversion of nuclear data from ACE files. Most importantly, the package provides a means to generate HDF5 nuclear data libraries that are used by the transport solver.
End of explanation
"""
openmc.data.atomic_mass('Fe54')
openmc.data.NATURAL_ABUNDANCE['H2']
openmc.data.atomic_weight('C')
"""
Explanation: Physical Data
Some very helpful physical data is available as part of openmc.data: atomic masses, natural abundances, and atomic weights.
End of explanation
"""
url = 'https://anl.box.com/shared/static/kxm7s57z3xgfbeq29h54n7q6js8rd11c.ace'
filename, headers = urllib.request.urlretrieve(url, 'gd157.ace')
# Load ACE data into object
gd157 = openmc.data.IncidentNeutron.from_ace('gd157.ace')
gd157
"""
Explanation: The IncidentNeutron class
The most useful class within the openmc.data API is IncidentNeutron, which stores continuous-energy incident neutron data. This class has factory methods from_ace, from_endf, and from_hdf5 which take a data file on disk and parse it into a hierarchy of classes in memory. To demonstrate this feature, we will download an ACE file (which can be produced with NJOY 2016) and then load it in using the IncidentNeutron.from_ace method.
End of explanation
"""
total = gd157[1]
total
"""
Explanation: Cross sections
From Python, it's easy to explore (and modify) the nuclear data. Let's start off by reading the total cross section. Reactions are indexed using their "MT" number -- a unique identifier for each reaction defined by the ENDF-6 format. The MT number for the total cross section is 1.
End of explanation
"""
total.xs
"""
Explanation: Cross sections for each reaction can be stored at multiple temperatures. To see what temperatures are available, we can look at the reaction's xs attribute.
End of explanation
"""
total.xs['294K'](1.0)
"""
Explanation: To find the cross section at a particular energy, 1 eV for example, simply get the cross section at the appropriate temperature and then call it as a function. Note that our nuclear data uses eV as the unit of energy.
End of explanation
"""
total.xs['294K']([1.0, 2.0, 3.0])
"""
Explanation: The xs attribute can also be called on an array of energies.
End of explanation
"""
gd157.energy
energies = gd157.energy['294K']
total_xs = total.xs['294K'](energies)
plt.loglog(energies, total_xs)
plt.xlabel('Energy (eV)')
plt.ylabel('Cross section (b)')
"""
Explanation: A quick way to plot cross sections is to use the energy attribute of IncidentNeutron. This gives an array of all the energy values used in cross section interpolation for each temperature present.
End of explanation
"""
pprint(list(gd157.reactions.values())[:10])
"""
Explanation: Reaction Data
Most of the interesting data for an IncidentNeutron instance is contained within the reactions attribute, which is a dictionary mapping MT values to Reaction objects.
End of explanation
"""
n2n = gd157[16]
print('Threshold = {} eV'.format(n2n.xs['294K'].x[0]))
"""
Explanation: Let's suppose we want to look more closely at the (n,2n) reaction. This reaction has an energy threshold:
End of explanation
"""
n2n.xs
xs = n2n.xs['294K']
plt.plot(xs.x, xs.y)
plt.xlabel('Energy (eV)')
plt.ylabel('Cross section (b)')
plt.xlim((xs.x[0], xs.x[-1]))
"""
Explanation: The (n,2n) cross section, like all basic cross sections, is represented by the Tabulated1D class. The energy and cross section values in the table can be directly accessed with the x and y attributes. Using x and y has the nice benefit of automatically accounting for reaction thresholds.
End of explanation
"""
n2n.products
neutron = n2n.products[0]
neutron.distribution
"""
Explanation: To get information on the energy and angle distribution of the neutrons emitted in the reaction, we need to look at the products attribute.
End of explanation
"""
dist = neutron.distribution[0]
dist.energy_out
"""
Explanation: We see that the neutrons emitted have a correlated angle-energy distribution. Let's look at the energy_out attribute to see what the outgoing energy distributions are.
End of explanation
"""
for e_in, e_out_dist in zip(dist.energy[::5], dist.energy_out[::5]):
plt.semilogy(e_out_dist.x, e_out_dist.p, label='E={:.2f} MeV'.format(e_in/1e6))
plt.ylim(top=1e-6)  # 'ymax' keyword was removed in recent matplotlib versions
plt.legend()
plt.xlabel('Outgoing energy (eV)')
plt.ylabel('Probability/eV')
plt.show()
"""
Explanation: Here we see we have a tabulated outgoing energy distribution for each incoming energy. Note that the same probability distribution classes that we could use to create a source definition are also used within the openmc.data package. Let's plot every fifth distribution to get an idea of what they look like.
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111)
cm = matplotlib.cm.Spectral_r
# Determine size of probability tables
urr = gd157.urr['294K']
n_energy = urr.table.shape[0]
n_band = urr.table.shape[2]
for i in range(n_energy):
# Get bounds on energy
if i > 0:
e_left = urr.energy[i] - 0.5*(urr.energy[i] - urr.energy[i-1])
else:
e_left = urr.energy[i] - 0.5*(urr.energy[i+1] - urr.energy[i])
if i < n_energy - 1:
e_right = urr.energy[i] + 0.5*(urr.energy[i+1] - urr.energy[i])
else:
e_right = urr.energy[i] + 0.5*(urr.energy[i] - urr.energy[i-1])
for j in range(n_band):
# Determine maximum probability for a single band
max_prob = np.diff(urr.table[i,0,:]).max()
# Determine bottom of band
if j > 0:
xs_bottom = urr.table[i,1,j] - 0.5*(urr.table[i,1,j] - urr.table[i,1,j-1])
value = (urr.table[i,0,j] - urr.table[i,0,j-1])/max_prob
else:
xs_bottom = urr.table[i,1,j] - 0.5*(urr.table[i,1,j+1] - urr.table[i,1,j])
value = urr.table[i,0,j]/max_prob
# Determine top of band
if j < n_band - 1:
xs_top = urr.table[i,1,j] + 0.5*(urr.table[i,1,j+1] - urr.table[i,1,j])
else:
xs_top = urr.table[i,1,j] + 0.5*(urr.table[i,1,j] - urr.table[i,1,j-1])
# Draw rectangle with appropriate color
ax.add_patch(Rectangle((e_left, xs_bottom), e_right - e_left, xs_top - xs_bottom,
color=cm(value)))
# Overlay total cross section
ax.plot(gd157.energy['294K'], total.xs['294K'](gd157.energy['294K']), 'k')
# Make plot pretty and labeled
ax.set_xlim(1.0, 1.0e5)
ax.set_ylim(1e-1, 1e4)
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel('Energy (eV)')
ax.set_ylabel('Cross section (b)')
"""
Explanation: Unresolved resonance probability tables
We can also look at unresolved resonance probability tables which are stored in a ProbabilityTables object. In the following example, we'll create a plot showing what the total cross section probability tables look like as a function of incoming energy.
End of explanation
"""
gd157.export_to_hdf5('gd157.h5', 'w')
"""
Explanation: Exporting HDF5 data
If you have an instance IncidentNeutron that was created from ACE or HDF5 data, you can easily write it to disk using the export_to_hdf5() method. This can be used to convert ACE to HDF5 or to take an existing data set and actually modify cross sections.
End of explanation
"""
gd157_reconstructed = openmc.data.IncidentNeutron.from_hdf5('gd157.h5')
np.all(gd157[16].xs['294K'].y == gd157_reconstructed[16].xs['294K'].y)
"""
Explanation: With few exceptions, the HDF5 file encodes the same data as the ACE file.
End of explanation
"""
h5file = h5py.File('gd157.h5', 'r')
main_group = h5file['Gd157/reactions']
for name, obj in sorted(list(main_group.items()))[:10]:
if 'reaction_' in name:
print('{}, {}'.format(name, obj.attrs['label'].decode()))
n2n_group = main_group['reaction_016']
pprint(list(n2n_group.values()))
"""
Explanation: And one of the best parts of using HDF5 is that it is a widely used format with lots of third-party support. You can use h5py, for example, to inspect the data.
End of explanation
"""
n2n_group['294K/xs'][()]  # h5py's .value attribute is deprecated
"""
Explanation: So we see that the hierarchy of data within the HDF5 mirrors the hierarchy of Python objects that we manipulated before.
End of explanation
"""
# Download ENDF file
url = 'https://t2.lanl.gov/nis/data/data/ENDFB-VII.1-neutron/Gd/157'
filename, headers = urllib.request.urlretrieve(url, 'gd157.endf')
# Load into memory
gd157_endf = openmc.data.IncidentNeutron.from_endf(filename)
gd157_endf
"""
Explanation: Working with ENDF files
In addition to being able to load ACE and HDF5 data, we can also load ENDF data directly into an IncidentNeutron instance using the from_endf() factory method. Let's download the ENDF/B-VII.1 evaluation for $^{157}$Gd and load it in:
End of explanation
"""
elastic = gd157_endf[2]
"""
Explanation: Just as before, we can get a reaction by indexing the object directly:
End of explanation
"""
elastic.xs
"""
Explanation: However, if we look at the cross section now, we see that it isn't represented as tabulated data anymore.
End of explanation
"""
elastic.xs['0K'](0.0253)
"""
Explanation: If you had Cython installed when you built/installed OpenMC, you should be able to evaluate resonant cross sections from ENDF data directly, i.e., OpenMC will reconstruct resonances behind the scenes for you.
End of explanation
"""
gd157_endf.resonances.ranges
"""
Explanation: When data is loaded from an ENDF file, there is also a special resonances attribute that contains resolved and unresolved resonance region data (from MF=2 in an ENDF file).
End of explanation
"""
[(r.energy_min, r.energy_max) for r in gd157_endf.resonances.ranges]
"""
Explanation: We see that $^{157}$Gd has a resolved resonance region represented in the Reich-Moore format as well as an unresolved resonance region. We can look at the min/max energy of each region by doing the following:
End of explanation
"""
# Create log-spaced array of energies
resolved = gd157_endf.resonances.resolved
energies = np.logspace(np.log10(resolved.energy_min),
np.log10(resolved.energy_max), 1000)
# Evaluate elastic scattering xs at energies
xs = elastic.xs['0K'](energies)
# Plot cross section vs energies
plt.loglog(energies, xs)
plt.xlabel('Energy (eV)')
plt.ylabel('Cross section (b)')
"""
Explanation: With knowledge of the energy bounds, let's create an array of energies over the entire resolved resonance range and plot the elastic scattering cross section.
End of explanation
"""
resolved.parameters.head(10)
"""
Explanation: Resonance ranges also have a useful parameters attribute that shows the energies and widths for resonances.
End of explanation
"""
gd157.add_elastic_0K_from_endf('gd157.endf')
"""
Explanation: Heavy-nuclide resonance scattering
OpenMC has two methods for accounting for resonance upscattering in heavy nuclides, DBRC and RVS. These methods rely on 0 K elastic scattering data being present. If you have an existing ACE/HDF5 dataset and you need to add 0 K elastic scattering data to it, this can be done using the IncidentNeutron.add_elastic_0K_from_endf() method. Let's do this with our original gd157 object that we instantiated from an ACE file.
End of explanation
"""
gd157[2].xs
"""
Explanation: Let's check to make sure that we have both the room temperature elastic scattering cross section as well as a 0K cross section.
End of explanation
"""
# Download ENDF file
url = 'https://t2.lanl.gov/nis/data/data/ENDFB-VII.1-neutron/H/2'
filename, headers = urllib.request.urlretrieve(url, 'h2.endf')
# Run NJOY to create deuterium data
h2 = openmc.data.IncidentNeutron.from_njoy('h2.endf', temperatures=[300., 400., 500.], stdout=True)
"""
Explanation: Generating data from NJOY
To run OpenMC in continuous-energy mode, you generally need to have ACE files already available that can be converted to OpenMC's native HDF5 format. If you don't already have suitable ACE files or need to generate new data, both the IncidentNeutron and ThermalScattering classes include from_njoy() methods that will run NJOY to generate ACE files and then read those files to create OpenMC class instances. The from_njoy() methods take as input the name of an ENDF file on disk. By default, it is assumed that you have an executable named njoy available on your path. This can be configured with the optional njoy_exec argument. Additionally, if you want to show the progress of NJOY as it is running, you can pass stdout=True.
Let's use IncidentNeutron.from_njoy() to run NJOY to create data for $^2$H using an ENDF file. We'll specify that we want data specifically at 300, 400, and 500 K.
End of explanation
"""
h2[2].xs
"""
Explanation: Now we can use our h2 object just as we did before.
End of explanation
"""
url = 'https://github.com/mit-crpg/WMP_Library/releases/download/v1.1/092238.h5'
filename, headers = urllib.request.urlretrieve(url, '092238.h5')
u238_multipole = openmc.data.WindowedMultipole.from_hdf5('092238.h5')
"""
Explanation: Note that 0 K elastic scattering data is automatically added when using from_njoy() so that resonance elastic scattering treatments can be used.
Windowed multipole
OpenMC can also be used with an experimental format called windowed multipole. Windowed multipole allows for analytic on-the-fly Doppler broadening of the resolved resonance range. Windowed multipole data can be downloaded with the openmc-get-multipole-data script. This data can be used in the transport solver, but it can also be used directly in the Python API.
End of explanation
"""
u238_multipole(1.0, 294)
"""
Explanation: The WindowedMultipole object can be called with energy and temperature values. Calling the object gives a tuple of 3 cross sections: elastic scattering, radiative capture, and fission.
End of explanation
"""
E = np.linspace(5, 25, 1000)
plt.semilogy(E, u238_multipole(E, 293.606)[1])
"""
Explanation: An array can be passed for the energy argument.
End of explanation
"""
E = np.linspace(6.1, 7.1, 1000)
plt.semilogy(E, u238_multipole(E, 0)[1])
plt.semilogy(E, u238_multipole(E, 900)[1])
"""
Explanation: The real advantage to multipole is that it can be used to generate cross sections at any temperature. For example, this plot shows the Doppler broadening of the 6.67 eV resonance between 0 K and 900 K.
End of explanation
"""
|
marxav/hello-world | artificial_neural_network_101_tensorflow.ipynb | mit | # To enable Tensorflow 2 instead of TensorFlow 1.15, uncomment the next 4 lines
#try:
# %tensorflow_version 2.x
#except Exception:
# pass
# library to store and manipulate neural-network input and output data
import numpy as np
# library to graphically display any data
import matplotlib.pyplot as plt
# library to manipulate neural-network models
import tensorflow as tf
from tensorflow import keras
# the code is compatible with TensorFlow v1.15 and v2, but this is interesting info anyway
print("TensorFlow version:", tf.__version__)
# Version needs to be 1.15.1 or greater (e.g. this code won't work with 1.13.1)
# To check whether you code will use a GPU or not, uncomment the following two
# lines of code. You should either see:
# * an "XLA_GPU",
# * or better a "K80" GPU
# * or even better a "T100" GPU
#from tensorflow.python.client import device_lib
#device_lib.list_local_devices()
import time
# trivial "debug" function to display the duration between time_1 and time_2
def get_duration(time_1, time_2):
duration_time = time_2 - time_1
m, s = divmod(duration_time, 60)
h, m = divmod(m, 60)
s,m,h = int(round(s, 0)), int(round(m, 0)), int(round(h, 0))
duration = "duration: " + "{0:02d}:{1:02d}:{2:02d}".format(h, m, s)
return duration
"""
Explanation: Introduction
The goal of this Artificial Neural Network (ANN) 101 session is twofold:
To build an ANN model that will be able to predict y value according to x value.
In other words, we want our ANN model to perform a regression analysis.
To observe three important KPIs when dealing with ANNs:
The size of the network (called trainable_params in our code)
The duration of the training step (called training_duration in our code)
The efficiency of the ANN model (called evaluated_loss in our code)
The data used here are exceptionally simple:
X represents the interesting feature (i.e. will serve as input X for our ANN).
Here, each x sample is a single one-dimensional scalar value.
Y represents the target (i.e. will serve as the expected output Y of our ANN).
Here, each y sample is also a single one-dimensional scalar value.
Note that in real life:
You will never have such godsent clean, noise-free, and simple data.
You will have more samples, i.e. bigger data (better for statistically meaningful results).
You may have more dimensions in your feature and/or target (e.g. spatial data, temporal data...).
You may also have multiple features and even multiple targets.
Hence your ANN model will be more complex than the one studied here
Work to be done:
For Exercises A to E, the only lines of code that need to be added or modified are in the create_model() Python function.
Exercise A
Run the whole code, Jupyter cell by Jupyter cell, without modifying any line of code.
Write down the values for:
trainable_params:
training_duration:
evaluated_loss:
In the last Jupyter cell, what is the relationship between the predicted x samples and y samples? Try to explain it based on the ANN model.
Exercise B
Add a first hidden layer called "hidden_layer_1" containing 8 units in the model of the ANN.
Restart and execute everything again.
Write down the obtained values for:
trainable_params:
training_duration:
evaluated_loss:
How much better is it compared with Exercise A?
Worse? Not better? Better? Strongly better?
Exercise C
Modify the hidden layer called "hidden_layer_1" so that it contains 128 units instead of 8.
Restart and execute everything again.
Write down the obtained values for:
trainable_params:
training_duration:
evaluated_loss:
How much better is it compared with Exercise B?
Worse? Not better? Better? Strongly better?
Exercise D
Add a second hidden layer called "hidden_layer_2" containing 32 units in the model of the ANN.
Write down the obtained values for:
trainable_params:
training_duration:
evaluated_loss:
How much better is it compared with Exercise C?
Worse? Not better? Better? Strongly better?
Exercise E
Add a third hidden layer called "hidden_layer_3" containing 4 units in the model of the ANN.
Restart and execute everything again.
Look at the graph in the last Jupyter cell. Is it better?
Write down the obtained values for:
trainable_params:
training_duration:
evaluated_loss:
How much better is it compared with Exercise D?
Worse? Not better? Better? Strongly better?
Exercise F
If you still have time, you can also play with the training epochs parameter, the number of training samples (or just exchange the training datasets with the test datasets), the type of runtime hardware (GPU or TPU), and so on...
Python Code
Import the tools
End of explanation
"""
# DO NOT MODIFY THIS CODE
# IT HAS JUST BEEN WRITTEN TO GENERATE THE DATA
# library for generating random numbers
#import random
# secret relationship between X data and Y data
#def generate_random_output_data_correlated_from_input_data(nb_samples):
# generate nb_samples random x between 0 and 1
# X = np.array( [random.random() for i in range(nb_samples)] )
# generate nb_samples y correlated with x
# Y = np.tan(np.sin(X) + np.cos(X))
# return X, Y
#def get_new_X_Y(nb_samples, debug=False):
# X, Y = generate_random_output_data_correlated_from_input_data(nb_samples)
# if debug:
# print("generate %d X and Y samples:" % nb_samples)
# X_Y = zip(X, Y)
# for i, x_y in enumerate(X_Y):
# print("data sample %d: x=%.3f, y=%.3f" % (i, x_y[0], x_y[1]))
# return X, Y
# Number of samples for the training dataset and the test dateset
#nb_samples=50
# Get some data for training the future neural-network model
#X_train, Y_train = get_new_X_Y(nb_samples)
# Get some other data for evaluating the future neural-network model
#X_test, Y_test = get_new_X_Y(nb_samples)
# In most cases, it will be necessary to normalize X and Y data with code like:
# X_centered -= X.mean(axis=0)
# X_normalized /= X_centered.std(axis=0)
#def mstr(X):
# my_str ='['
# for x in X:
# my_str += str(float(int(x*1000)/1000)) + ','
# my_str += ']'
# return my_str
## Call get_data to have an idead of what is returned by call data
#generate_data = False
#if generate_data:
# nb_samples = 50
# X_train, Y_train = get_new_X_Y(nb_samples)
# print('X_train = np.array(%s)' % mstr(X_train))
# print('Y_train = np.array(%s)' % mstr(Y_train))
# X_test, Y_test = get_new_X_Y(nb_samples)
# print('X_test = np.array(%s)' % mstr(X_test))
# print('Y_test = np.array(%s)' % mstr(Y_test))
X_train = np.array([0.765,0.838,0.329,0.277,0.45,0.833,0.44,0.634,0.351,0.784,0.589,0.816,0.352,0.591,0.04,0.38,0.816,0.732,0.32,0.597,0.908,0.146,0.691,0.75,0.568,0.866,0.705,0.027,0.607,0.793,0.864,0.057,0.877,0.164,0.729,0.291,0.324,0.745,0.158,0.098,0.113,0.794,0.452,0.765,0.983,0.001,0.474,0.773,0.155,0.875,])
Y_train = np.array([6.322,6.254,3.224,2.87,4.177,6.267,4.088,5.737,3.379,6.334,5.381,6.306,3.389,5.4,1.704,3.602,6.306,6.254,3.157,5.446,5.918,2.147,6.088,6.298,5.204,6.147,6.153,1.653,5.527,6.332,6.156,1.766,6.098,2.236,6.244,2.96,3.183,6.287,2.205,1.934,1.996,6.331,4.188,6.322,5.368,1.561,4.383,6.33,2.192,6.108,])
X_test = np.array([0.329,0.528,0.323,0.952,0.868,0.931,0.69,0.112,0.574,0.421,0.972,0.715,0.7,0.58,0.69,0.163,0.093,0.695,0.493,0.243,0.928,0.409,0.619,0.011,0.218,0.647,0.499,0.354,0.064,0.571,0.836,0.068,0.451,0.074,0.158,0.571,0.754,0.259,0.035,0.595,0.245,0.929,0.546,0.901,0.822,0.797,0.089,0.924,0.903,0.334,])
Y_test = np.array([3.221,4.858,3.176,5.617,6.141,5.769,6.081,1.995,5.259,3.932,5.458,6.193,6.129,5.305,6.081,2.228,1.912,6.106,4.547,2.665,5.791,3.829,5.619,1.598,2.518,5.826,4.603,3.405,1.794,5.23,6.26,1.81,4.18,1.832,2.208,5.234,6.306,2.759,1.684,5.432,2.673,5.781,5.019,5.965,6.295,6.329,1.894,5.816,5.951,3.258,])
print('X_train contains %d samples' % X_train.shape)
print('Y_train contains %d samples' % Y_train.shape)
print('')
print('X_test contains %d samples' % X_test.shape)
print('Y_test contains %d samples' % Y_test.shape)
# Graphically display our training data
plt.scatter(X_train, Y_train, color='green', alpha=0.5)
plt.title('Scatter plot of the training data')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# Graphically display our test data
plt.scatter(X_test, Y_test, color='blue', alpha=0.5)
plt.title('Scatter plot of the testing data')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
"""
Explanation: Get the data
End of explanation
"""
# THIS IS THE ONLY CELL WHERE YOU HAVE TO ADD AND/OR MODIFY CODE
def create_model():
# This returns a tensor
model = keras.Sequential([
keras.layers.Input(shape=(1,), name='input_layer'),
keras.layers.Dense(128, activation=tf.nn.relu, name='hidden_layer_1'),
keras.layers.Dense(32, activation=tf.nn.relu, name='hidden_layer_2'),
keras.layers.Dense(4, activation=tf.nn.relu, name='hidden_layer_3'),
keras.layers.Dense(1, name='output_layer')
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01),
loss='mean_squared_error',
metrics=['mean_absolute_error', 'mean_squared_error'])
return model
# Same model but for Keras 1.13.1
#inputs_data = keras.layers.Input(shape=(1, ), name='input_layer')
#hl_1_out_data = keras.layers.Dense(units=128, activation=tf.nn.relu, name='hidden_layer_1')(inputs_data)
#hl_2_out_data = keras.layers.Dense(units=32, activation=tf.nn.relu, name='hidden_layer_2')(hl_1_out_data)
#hl_3_out_data = keras.layers.Dense(units=4, activation=tf.nn.relu, name='hidden_layer_3')(hl_2_out_data)
#outputs_data = keras.layers.Dense(units=1)(hl_3_out_data)
#model = keras.models.Model(inputs=inputs_data, outputs=outputs_data)
ann_model = create_model()
# Display a textual summary of the newly created model
# Pay attention to size (a.k.a. total parameters) of the network
ann_model.summary()
print('trainable_params:', ann_model.count_params())
%%html
As a reminder for understanding, the following ANN unit contains <b>m + 1</b> trainable parameters:<br>
<img src='https://www.degruyter.com/view/j/nanoph.2017.6.issue-3/nanoph-2016-0139/graphic/j_nanoph-2016-0139_fig_002.jpg' alt="perceptron" width="400" />
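As a cross-check of the m + 1 rule against the `ann_model.summary()` output above, the layer sizes used in `create_model` (1 input, then 128, 32, 4 and 1 units) can be tallied by hand; a small sketch that needs no Keras:

```python
# Each Dense layer with m inputs and n units has m*n weights + n biases,
# i.e. m + 1 trainable parameters per unit.
layer_shapes = [(1, 128), (128, 32), (32, 4), (4, 1)]  # (inputs, units) per layer
total_params = sum(m * n + n for m, n in layer_shapes)
print(total_params)  # 4521, which should match ann_model.count_params()
```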
"""
Explanation: Build the artificial neural-network
End of explanation
"""
# Train the model with the input data and the output_values
# validation_split=0.2 means that 20% of the X_train samples will be used
# for a validation test and that "only" 80% will be used for training
t0 = time.time()
results = ann_model.fit(X_train, Y_train, verbose=False,
batch_size=1, epochs=500, validation_split=0.2)
t1 = time.time()
print('training_%s' % get_duration(t0, t1))
#plt.plot(results.history['mean_squared_error'], label = 'mean_squared_error')
plt.plot(results.history['loss'], label = 'train_loss')
plt.plot(results.history['val_loss'], label = 'validation_loss')
plt.legend()
plt.show()
# If you can write a file locally (i.e. if Google Drive is available in your Colab environment),
# then you can save your model to a file for future reuse.
# (c.f. https://www.tensorflow.org/guide/keras/save_and_serialize)
# Only uncomment the following line if you can write a file
# ann_model.save('ann_101.h5')
"""
Explanation: Train the artificial neural-network model
End of explanation
"""
loss, mean_absolute_error, mean_squared_error = ann_model.evaluate(X_test, Y_test, verbose=True)
"""
Explanation: Evaluate the model
End of explanation
"""
X_new_values = np.array([0., 0.2, 0.4, 0.6, 0.8, 1.0])
Y_predicted_values = ann_model.predict(X_new_values)
# Display training data and predicted data graphically
plt.title('Training data (green color) + Predicted data (red color)')
# training data in green color
plt.scatter(X_train, Y_train, color='green', alpha=0.5)
# test data in blue color
#plt.scatter(X_test, Y_test, color='blue', alpha=0.5)
# predicted data in red color
plt.scatter(X_new_values, Y_predicted_values, color='red', alpha=0.5)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
"""
Explanation: Predict new output data
End of explanation
"""
|
ethen8181/Business-Analytics | ab_tests/bayesian_ab_test.ipynb | mit | # Website A had 1055 clicks and 28 sign-ups
# Website B had 1057 clicks and 45 sign-ups
values_A = np.hstack( ( [0] * (1055 - 28), [1] * 28 ) )
values_B = np.hstack( ( [0] * (1057 - 45), [1] * 45 ) )
print(values_A)
print(values_B)
"""
Explanation: A/B Testing with Hierarchical Models
Though A/B testing seems simple, in that you're just comparing A against B to see which one performs better, figuring out whether your results actually mean anything is quite complicated. During this explore-and-evaluate process, failing to correct for multiple comparisons is one of the most common A/B testing mistakes. Hierarchical models are one way to address this problem.
Background Information
Imagine the following scenario: You work for a company that gets most of its online traffic through ads. Your current ads have a 3% click rate, and your boss decides that's not good enough. The marketing team comes up with 26 new ad designs, and as the company's data scientist, it's your job to determine if any of these new ads have a higher click rate than the current ad.
You set up an online experiment where internet users are shown one of the 27 possible ads (the current ad or one of the 26 new designs). After two weeks, you collect the data on each ad: How many users saw it, and how many times it was clicked on.
Time to run some statistical tests! New design A vs. current design? No statistically significant difference. New design B vs. current design? No statistically significant difference. You keep running tests and continue getting non-significant results. Just as you are about to lose hope, new design Z vs. current design.... Statistically significant difference at the alpha = 0.05 level!
You tell your boss you've found a design that has a higher click rate than the current design, and your company deploys it in production. However, after two months of collecting statistics on the new design, it seems the new design has a click rate of 3%. What went wrong?
When performing A/B testing, data scientists often fall into the common pitfall of failing to correct for multiple testing. Testing at alpha = 0.05 means your statistical test yields a result as extreme or more extreme by random chance (assuming a given null hypothesis is true) with probability 0.05, or you can say that your statistical test has an EXPECTED 5% false positive rate. If you run 26 statistical tests, then an upper bound on the expected number of false positives is 26 * 0.05 = 1.3. This means in our above scenario, our data scientist can expect to have at least one false positive result, and unfortunately, the false positive result is the one she reported to her boss.
Preliminary Statistics
There are two well-known branches of statistics: Frequentist statistics and Bayesian statistics. These two branches have plenty of differences, but we're going to focus on one key difference:
In frequentist statistics, we assume the parameter(s) of interest are fixed constants. We focus on computing the likelihood $p(Data \mid Parameter)$, the probability we see the observed set of data points given the parameter of interest.
In Bayesian statistics, we having uncertainty surrounding our parameter(s) of interest, and we mathematically capture our prior uncertainty about these parameters in a prior distribution, formally represented as $p(Parameter)$. We focus on computing the posterior distribution $p(Parameter \mid Data)$, representing our posterior uncertainty surrounding the parameter of interest after we have observed data.
Put another way, when using frequentist statistics you base your decision on whether A beat B only on the data in the test; all other information is irrelevant, as you are simply testing A against B, much like justice should be blind to outside beliefs.
On the other hand, the Bayesian approach lets you think a bit more deeply about the problem. When you're testing A against B you actually do have some other information: you know what makes sense. And this is valuable information when making a decision. So, sure, justice may be blind, but sometimes we need her to peek a bit and make sure what's on the scale makes sense!
For A/B testing, what this means is that you, the marketer, have to come up with what conversion rate makes sense, known as the prior. That is, if you typically see a 10% conversion in A, you would not, during the test, expect to see it at 100%.
Then instead of only finding the winner in the test itself, Bayesian analysis will include your prior knowledge in the test. That is, you can tell the test what you believe the right answer to be, and then, using that prior knowledge, the test can tell you whether A beats B. And, because it uses more information than what's in the test itself, it can give you a defensible answer as to whether A beat B from a remarkably small sample size.
The Bernoulli Model
Let's first look at how we would perform A/B testing in the standard two-website case using a Bayesian model, namely the Bernoulli model. Suppose website A had 1055 clicks and 28 sign-ups, and website B had 1057 clicks and 45 sign-ups.
End of explanation
"""
# Create a uniform prior for the probabilities p_a and p_b
p_A = pm.Uniform( 'p_A', lower = 0, upper = 1 )
p_B = pm.Uniform( 'p_B', lower = 0, upper = 1 )
# Creates a posterior distribution of B - A
@pm.deterministic
def delta( p_A = p_A, p_B = p_B ):
return p_B - p_A
"""
Explanation: Now, we can model each possible sign-up as a Bernoulli event. Recall that the Bernoulli distribution reflects the outcome of a coin flip: with some probability $p$, the coin lands heads, and with probability $1-p$, it lands tails. The intuition behind this model is as follows: A user visits the website. The user flips a coin. If the coin lands heads, the user signs up.
Now, let's say each website has its own coin. Website A has a coin that lands heads with probability $p(A)$, and website B has a coin that lands heads with probability $p(B)$. We don't know either probability, but we want to determine if $p(A)$ < $p(B)$ or if the reverse is true (there is also the possibility that $p(A)$ = $p(B)$).
Since we have no information or bias about the true values of $p(A)$ or $p(B)$, we will draw these two parameters from a Uniform distribution. In addition, we will create a delta function to represent the posterior distribution for the difference of the two distributions. Remember the difference between the two probabilities is what we're interested in.
End of explanation
"""
# Create the Bernoulli variables for the observation,
# value is the value that we know (observed)
obs_A = pm.Bernoulli( 'obs_A', p_A, value = values_A, observed = True )
obs_B = pm.Bernoulli( 'obs_B', p_B, value = values_B, observed = True )
# Create the model and run the sampling
# Sample 70,000 points and throw out the first 10,000
iteration = 70000
burn = 10000
model = pm.Model( [ p_A, p_B, delta, obs_A, obs_B ] )
mcmc = pm.MCMC(model)
pm.MAP(model).fit()
mcmc.sample( iteration, burn )
"""
Explanation: Notes on the code above:
For the pm.Uniform() section: These are stochastic variables, variables that are random. Initializing a stochastic variable requires a name argument, plus additional parameters that are class specific. The first attribute is the name attribute, which is used to retrieve the posterior distribution later in the analysis, so it is best to use a descriptive name. Typically, you can use the Python variable's name as the name. Here, the latter two attributes are simply the lower and upper bounds for the uniform distribution.
Deterministic variables are variables that are not random if the variables' parents were known; here, if I knew the values of delta's parents, p_A and p_B, then I could determine what delta is. We distinguish deterministic variables with the pm.deterministic decorator wrapper.
Next, we will create an observations variable for each website that incorporates its sign-up data. Thus we create a Bernoulli stochastic variable with our prior and values. Recall that if $X \sim Bernoulli(p)$, then X is 1 with probability $p$ and 0 with probability $1−p$.
End of explanation
"""
# use .trace to obtain the desired info
delta_samples = mcmc.trace("delta")[:]
plt.figure( figsize = ( 10, 5 ) )
plt.hist( delta_samples, histtype = 'stepfilled', bins = 30, alpha = 0.85,
label = "posterior of delta", color = "#7A68A6", normed = True )
plt.axvline( x = 0.0, color = "black", linestyle = "--" )
plt.legend(loc = "upper right")
plt.show()
"""
Explanation: Notes on the code above: We then use pymc to run MCMC (Markov Chain Monte Carlo) to sample points from each website's posterior distribution.
stochastic variables have a keyword argument observed which accepts a boolean (False by default). The keyword observed has a very simple role: fix the variable's current value, i.e. make value immutable. We have to specify an initial value in the variable's creation, equal to the observations we wish to include, typically an array (and it should be an numpy array for speed).
Most often it is a good idea to prepend your call to the MCMC (Markov Chain Monte Carlo) with a call to .MAP(model).fit(). Recall that MCMC is a class of algorithms for sampling from a desired distribution by constructing an equilibrium distribution that has the properties of the desired distribution. And poor starting sampling points can prevent any convergence, or significantly slow it down. Thus, ideally, we would like to have the sampling process start at points where the posterior distributions truly exist. By calling .MAP(model).fit() we could avoid a lengthy burn-in period (where we discard the first few samples because they are still unrepresentative samples of the posterior distribution) and incorrect inference. Generally, we call this the maximum a posterior or, more simply, the MAP.
We can wrap all the created variables into a pm.Model class. With this Model class, we can analyze the variables as a single unit. This is an optional step, as the fitting algorithms can be sent an array of the variables rather than a Model class. So for the code above, you can do mcmc = pm.MCMC([p_A, p_B, delta, obs_A, obs_B]) instead of having to call pm.Model ... and then pm.MCMC.
Now, let's examine the posterior of the delta distribution (remember, this is the posterior of website B minus the posterior of website A).
End of explanation
"""
print( "Probability site B is WORSE than site A: %.3f" % ( delta_samples < 0 ).mean() )
print( "Probability site B is BETTER than site A: %.3f" % ( delta_samples > 0 ).mean() )
"""
Explanation: The black line is at x = 0, representing where the difference between the two distributions is 0. From inspection, we see that most of the distribution's mass is to the right of the black line. This means most of the points sampled from B's distribution are larger than those sampled from A's distribution, implying that site B's response is likely better than site A's response. To get more quantitative results, we can compute the probability that website B gets more sign-ups than website A by simply counting the number of samples less than 0, i.e. the area under the curve before 0,
which represents the probability that site B is worse than site A.
End of explanation
"""
mcplot( mcmc.trace("delta"), common_scale = False )
"""
Explanation: Diagnosing Convergence
Due to the fact that MCMC is a sampling algorithm, we should also double-check that these samples are stable and accurate representatives of the posterior distribution. The current best practice for this is to visually examine the trajectory and the autocorrelation.
The pymc.Matplot module contains a poorly named function plot and it is preferred to import it as mcplot so there is no conflict with matplotlib's plot. The function takes an MCMC's trace object and will return posterior distributions, traces and auto-correlations for each variable.
End of explanation
"""
def acf( x, lag = 100 ):
"""
autocorrelation function that calculates the
autocorrelation for series x from 1 to the user-specified lag
Parameter
---------
x : 1d-array, size ( iteration - burn-in iteration, )
pymc's random sample
lag : int, default 100
maximum lagging number
Return
------
corr : list, size (lag)
autocorrelation for each lag
Reference
---------
http://stackoverflow.com/questions/643699/how-can-i-use-numpy-correlate-to-do-autocorrelation
"""
# np.corrcoef here returns a 2 * 2 matrix, either [ 0, 1 ] or [ 1, 0 ]
# will be the actual autocorrelation between the two series,
# the diagonal is just the autocorrelation between each series itself
# which is 1
corr = [ np.corrcoef( x[k:], x[:-k] )[ 0, 1 ] for k in range( 1, lag ) ]
return corr
def effective_sample( iteration, burn, corr ):
"""
calculate the effective sample size of the mcmc,
note that the calculation is based on the fact that
the autorrelation plot appears to have converged
Parameters
----------
iteration : int
number of iteration of the mcmc
burn : int
burn-in iteration of the mcmc
corr : list
list that stores the autocorrelation for different lags
Returns
-------
the effective sample size : int
"""
i = len(corr) - 1 # fall back to the last lag if the autocorrelation never drops below 0.05
for index, c in enumerate(corr):
if c < 0.05:
i = index
break
return ( iteration - burn ) / ( 1 + 2 * np.sum( corr[:i + 1] ) )
lag = 100
corr = acf( x = delta_samples, lag = lag )
effective_sample( iteration, burn, corr ) # the iteration and burn is specified above
"""
Explanation: Trajectory:
The top-left plot, often called the trace plot shows the trajectory of the samples. If the samples are representative, then they should look like they're zigzagging along the y-axis. Fortunately, our trace plot matches the description (The plot will most likely not look like this if you did not specify burn-in samples).
Autocorrelation:
Recall that a good trace plot will appear to be zigzagging along the y-axis. This is a sign that tells us the current position will exhibit some sort of correlation with previous positions. And too much of it means we are not exploring the space well.
The bottom-left plot depicts the autocorrelation, a measure of how related a series of numbers is with itself. A measurement of 1.0 is perfect positive autocorrelation, 0 no autocorrelation, and -1 is perfect negative correlation. If you are familiar with standard correlation, then autocorrelation is just how correlated a series, $x_t$, at time $t$ is with the series at time $t-k$:
$$R(k) = Corr( x_t, x_{t-k} )$$
And there will be a different autocorrelation value for different $k$ (also referred to as lag). For our plot, the autocorrelation value starts to drop near zero for large values of $k$. This is a sign that our random samples are providing independent information about the posterior distribution, which is exactly what we wanted!!
Posterior Distribution:
The largest plot on the right-hand side is the histogram of the samples, which is basically the same plot as the ones we manually created, plus a few extra features. The thickest vertical line represents the posterior mean, which is a good summary of the posterior distribution. The interval between the two dashed vertical lines in each of the posterior distributions represents the 95% credible interval, which can be interpreted as "there is a 95% chance that our parameter of interest lies in this interval".
When communicating your results to others, it is incredibly important to state this interval. One of our purposes for studying Bayesian methods is to have a clear understanding of our uncertainty in unknowns. Combined with the posterior mean, the 95% credible interval provides a reliable interval to communicate the likely location of the unknown (provided by the mean) and the uncertainty (represented by the width of the interval).
Effective Sample Size
To figure out exactly how many samples we would have to draw, we can compute the effective sample size (ESS), a measure of how many independent samples our samples are equivalent to. Formally, denote the actual number of steps in the chain as N. The ESS is:
$$ESS = \frac{N}{1 + 2 \sum_{k=1}^\infty ACF(k)}$$
where ACF(k) is the autocorrelation of the series at lag k. For practical computation, the infinite sum in the definition of ESS may be stopped when ACF(k) < 0.05. A rule of thumb for the ESS is that if you wish to have stable estimates of the 95% credible interval, an ESS around 10,000 is recommended.
End of explanation
"""
support = np.linspace( 0, 1, num = 500 )
dunif = stats.uniform().pdf(support)
plt.figure( figsize = ( 10, 5 ) )
plt.plot( support, dunif, label = "Uniform(0,1)" )
plt.ylim( 0, 1.15 )
plt.legend( loc = "lower right" )
plt.title("Uniform Distribution")
plt.show()
# replace .show() with .savefig to save the pics and also shows the plot
# plt.savefig( "Uniform.png", format = "png" )
"""
Explanation: What's Next?
For these two websites, we see that website B outperforms website A. This worked well for two websites, but if you're modeling an A/B test with several variants ( e.g. an A/B/C/D test ), you should consider using a hierarchical model to:
Protect yourself from a variety of multiple-comparison-type errors.
Get ahold of posterior distributions for your true conversion rates.
Let's first examine the sort of multiple comparison errors we're trying to avoid. Here's an exaggerated example:
Suppose that we have a single coin. We flip it 100 times, and it lands heads up on all 100 of them; how likely do you think it is that the coin is fair (i.e. has a 50/50 chance of landing heads up)? Pretty slim; the probability of observing 100 heads out of 100 flips of a fair coin is:
$$1/2^{100} \approx 7.9×10^{−31}$$
Now imagine a new scenario: Instead of just one coin, we now have $2^{100}$ of them. We flip each 100 times. Suppose we notice that one of the $2^{100}$ coins has landed heads up on all 100 of its flips; how likely do you think it is that this coin is fair? A full answer will lead us into hierarchical modeling, but at this point it's already clear that we need to pay attention to the fact that there were another $2^{100} - 1$ coins: Even if all the $2^{100}$ coins were fair, the probability that at least one of them lands heads up on all 100 flips is:
$$1 − \left( 1 − \frac{1}{2^{100}} \right)^{2^{100}} \approx 1 − \frac{1}{e} \approx 63.2%$$
Back to the website example, if we tried this for all pairs of our five websites, we run the risk of getting a "false positive problem" due to the multiple testing problem. There are 10 possible pairs, so assume we test all possible pairs independently at an alpha = 0.05 level. For each test, we have a 95% chance of not getting a false positive, so the probability that all the tests do not yield a false positive is $0.95^{10}$, which is roughly equal to 0.60. This means the probability of getting at least one false positive result is about 0.40 or 40%. If we had more websites and thus more pairs to test, the probability of getting a false positive would only increase.
As you can see, without correcting for multiple testing, we run a high risk of encountering a false positive result.
Beta Distribution and Bayesian Priors
Before introducing the Beta-Binomial hierarchical model, let's discuss the theoretical motivation for the Beta distribution. Consider the Uniform Distribution over the interval (0,1).
End of explanation
"""
a_vals = ( 0.5, 1, 2, 2 )
b_vals = ( 0.5, 1, 1, 2 )
plt.figure( figsize = ( 10, 5 ) )
for a, b in zip( a_vals, b_vals ):
plt.plot( support, stats.beta( a, b ).pdf(support), label = "Beta(%s,%s)" % ( a, b ) )
plt.legend()
plt.ylim( 0,4 )
plt.title("Beta Distribution Examples")
plt.show()
"""
Explanation: As you can see, it's a simple distribution. It assigns equal probability weight to all points in the domain (0,1), also known as the support of the distribution. However, what if we want a distribution over (0,1) that isn't just flat everywhere?
This is where the Beta distribution comes in! The Beta distribution can be seen as a generalization of the Uniform(0,1) as it allows us to define more general probability density functions over the interval (0,1). Using two parameters a and b, the Beta(a,b) distribution is defined with the following probability density function:
$$pdf(x) = C x^{\alpha - 1} (1 - x)^{\beta - 1}, x \in (0, 1), \alpha, \beta > 0$$
Where C is a constant to normalize the integral of the function to 1 (all probability density functions must integrate to 1). This constant is formally known as the Beta Function.
But the important thing is that by changing the values of a and b, we can change the shape and the "steepness" of the distribution, thus allowing us to easily create a wide variety of functions over the interval (0,1).
End of explanation
"""
plt.figure( figsize = ( 10, 5 ) )
plt.plot( support, stats.beta( 61, 41 ).pdf(support), label = "Beta(%s,%s)" % ( 61, 41 ) )
plt.legend()
plt.title("Coin Flips")
plt.show()
"""
Explanation: Notice in the above plot that the green line corresponding to the distribution Beta(1,1) is the same as that of Uniform(0,1), proving that the Beta distribution is indeed a generalization of the Uniform(0,1).
Now, many of you might be wondering what's the big takeaway from this section, so here they are:
The Beta Distribution is a versatile family of probability distributions over (0,1).
This allows us to create prior distributions that incorporate some of our beliefs, and thus informative priors. More concretely, when you have a k-successes-out-of-n-trials-type test, you can use the Beta distribution to model your posterior distribution. If you have a test with k successes amongst n trials and start from a uniform prior, your posterior distribution is $Beta(k+1,n−k+1)$.
In the next section, we will discuss why this is important, and how the Beta Distribution can be used for A/B testing.
Hierarchical Models
Hierarchical models are models that involve multiple parameters such that the credible values of some parameters depend on the values of other parameters. Thus hierarchical models will model all of the test buckets at once, rather than treating each in isolation. More specifically, they use the observed rates of each bucket to infer a prior distribution for the true rates; these priors then influence the predicted rates by "shrinking" posterior distributions towards the prior.
Let's work our way up to this idea. First, let's say that we flip a coin 100 times and that it lands heads-up on 60 of them. We can then model this as $p \sim Beta(61,41)$, and our posterior distribution looks like this:
End of explanation
"""
a_vals = ( 61, 112, 51 )
b_vals = ( 41, 92, 51 )
plt.figure( figsize = ( 10, 5 ) )
for a, b in zip( a_vals, b_vals ):
plt.plot( support, stats.beta( a, b ).pdf(support), label = "Beta(%s,%s)" % ( a, b ) )
plt.legend()
plt.title("Beta Distribution Examples")
plt.show()
"""
Explanation: Side note on a handy general rule. The intuition behind $Binomial(n,p)$ is this: if we flip a coin that lands heads with probability p a total of n times, it describes how likely it is that we see exactly k heads, for each k between 0 and n.
And given that information, if your prior is $p \sim Beta(a,b)$ and you observe $X=k$ for $X \sim Binomial(n,p)$, then your posterior is $(p∣X) \sim Beta(a+k,b+n−k)$. Beta is a "conjugate prior" for the Binomial, meaning that the posterior is also Beta.
Second, let's suppose, unrealistically, that we have an explicit prior distribution. We've flipped a lot of similar coins in the past, and we're pretty sure that the true bias of such coins follows a $Beta(51,51)$ distribution. Applying Bayes' rule with this prior, we would now model our observation of 60 out of 100 heads-up as $p \sim Beta(112,92)$.
Now our posterior distribution looks as follows. We keep the original for reference:
End of explanation
"""
@pm.stochastic( dtype = np.float64 )
def beta_priors( value = [ 1.0, 1.0 ] ):
a, b = value
# outside of the support of the distribution
if a <= 0 or b <= 0:
return -np.inf
else:
return np.log( np.power( ( a + b ), -2.5 ) )
a = beta_priors[0]
b = beta_priors[1]
"""
Explanation: Notice how much the distribution has shifted towards the prior! The preceding plot tells us that when we know an explicit prior, we should use it. Great. The problem with all of this is that for A/B tests, we often don't have an explicit prior. But when we have multiple test buckets, we can infer a prior.
To keep things concrete, let's say that we are designing a company's website, and we're testing five different layouts for the landing page. When a user clicks on our site, he/she sees one of the five landing page layouts. From there, the user can decide whether or not she wants to create an account on the site.
| Experiment | Clicks | Orders | True Rate | Empirical Rate |
|------------|--------|--------|------------|----------------|
| A | 1055 | 28 | 0.032 | 0.027 |
| B | 1057 | 45 | 0.041 | 0.043 |
| C | 1065 | 69 | 0.058 | 0.065 |
| D | 1039 | 58 | 0.047 | 0.056 |
| E | 1046 | 60 | 0.051 | 0.057 |
As a disclaimer, this is simulated data created to mimic real A/B testing data. The number of Orders for each website was generated by drawing from a Binomial distribution with n = Clicks and p = True Rate. The Empirical Rate column is the observed rate, i.e. Orders/Clicks.
So now we have tests $\beta_1, ...,\beta_5$, and for each test $\beta_i$ we observe $k_i$ successes out of $N_i$ trials. Let's further say that each bucket $\beta_i$ has some true success rate $p_i$; we don't know what $p_i$ is, but we're assuming that $k_i$ was drawn from a $Binomial( N_i, p_i )$ distribution. What we'd like is a prior for each $p_i$. The key idea is: let's assume that all the $p_i$ are drawn from the same distribution, and let's use the empirically observed rates, i.e. the $k_i$s, to infer what this distribution is.
Here's what the whole setup looks like. We assume that our sign-up count $k_i$ is modeled as $k_i \sim Binomial(N_i, p_i)$. We then assume that every $p_i$, the true sign-up rate for each website, is drawn from the same $Beta(a,b)$ distribution for some parameters a and b; briefly, $p_i \sim Beta(a,b)$. We don't have any prior beliefs about a and b, so we'd like them to be drawn from an "uninformative prior".
Recall that in the Bernoulli Model section, when we had no prior information about the true rates for each website, we used a uniform distribution as our "uninformative prior". In this section, we will assume each true sign-up rate is drawn from a Beta distribution.
Now, we've neglected one important question up until this point: how do we choose a and b for the Beta distribution? Well, maybe we could assign a prior distribution to these hyper-parameters, but then our prior distribution would have a prior distribution, and it's priors all the way down.... A better alternative is to sample a and b from the distribution:
$$p(a, b) \propto \frac{1}{(a+b)^{5/2}}$$
Where $\propto$ means "is proportional to". This may look like magic, but let's just assume that it is correct and keep going. Now that we've covered the theory, we can finally build our hierarchical model. The beta_priors function samples a and b for us from the distribution defined above.
Using the pymc module, we use the @pm.stochastic decorator to define a custom prior for a parameter in a model. Since the decorator requires that we return the log-likelihood, we will be returning the log of $(a+b)^{-2.5}$.
End of explanation
"""
# The hidden, true rate for each website, or simply
# this is what we don't know, but would like to find out
true_rates = pm.Beta( 'true_rates', a, b, size = 5 )
# The observed values, clicks and orders
trials = np.array([ 1055, 1057, 1065, 1039, 1046 ])
successes = np.array([ 28, 45, 69, 58, 60 ])
observed_values = pm.Binomial( 'observed_values', trials, true_rates,
value = successes, observed = True )
model1 = pm.Model([ a, b, true_rates, observed_values ])
mcmc1 = pm.MCMC(model1)
pm.MAP(model1).fit()
iteration1 = 70000
burn1 = 10000
mcmc1.sample( iteration1, burn1 )
"""
Explanation: We then model the true sign-up rates as Beta distribution and use our observed sign-up data to construct the Binomial distribution. Once again use MCMC to sample the data points and throw out the first few.
End of explanation
"""
traces = mcmc1.trace('true_rates')[:]
plt.figure( figsize = ( 10, 5 ) )
for i in range(5):
# our true rates was a size of 5, thus each column represents each true rate
plt.hist( traces[ :, i ],
histtype = 'stepfilled', bins = 30, alpha = 0.5,
label = "Website %s" % chr( 65 + i ), normed = True )
plt.legend(loc = "upper right")
plt.show()
"""
Explanation: Let's see what our five posterior distributions look like.
End of explanation
"""
diff_BA = traces[ :, 1 ] - traces[ :, 0 ]
plt.figure( figsize = ( 10, 5 ) )
plt.hist( diff_BA, histtype = 'stepfilled', bins = 30, alpha = 0.85,
label = "Difference site B - site A", normed = True )
plt.axvline( x = 0.0, color = "black", linestyle = "--" )
plt.legend(loc = "upper right")
plt.show()
"""
Explanation: Now that we have all five posterior distributions, we can easily computer the difference between any two of them. For example, let's revisit the difference of the posterior distribution of website B and website A.
End of explanation
"""
print( "Prob. that website A gets MORE sign-ups than website C: %0.3f" % (diff_BA < 0).mean() )
print( "Prob. that website A gets LESS sign-ups than website C: %0.3f" % (diff_BA > 0).mean() )
"""
Explanation: We see that most of the probability mass of this posterior distribution lies to the right of the line x = 0.00. We can quantify these results using the same method we used for the difference between website C and website A.
End of explanation
"""
diff_ED = traces[ :, 4 ] - traces[ :, 3 ]
plt.figure( figsize = ( 10, 5 ) )
plt.hist( diff_ED, histtype = 'stepfilled', bins = 30, alpha = 0.85,
label = "Difference site E - site D", normed = True )
plt.axvline( x = 0.0, color = "black", linestyle = "--" )
plt.legend(loc = "upper right")
plt.show()
print( "Probability that website D gets MORE sign-ups than website E: %0.3f" % (diff_ED < 0).mean() )
print( "Probability that website D gets LESS sign-ups than website E: %0.3f" % (diff_ED > 0).mean() )
"""
Explanation: Again, we see results showing that website B has a higher sign-up rate than website A at a statistically significant level, the same result we obtained with the Bernoulli model.
It should be noted, though, that the hierarchical model cannot overcome the limitations of the data. For example, consider website D and website E. While these two websites have differing true sign-up rates (website E being better than website D), they have virtually identical click and sign-up data. As a result, our difference of the posteriors yields a distribution centered about 0.0 (see plot below), and we cannot conclude that one website has a higher sign-up rate at a statistically significant level.
End of explanation
"""
# trace of the bernoulli way
siteA_distribution = mcmc.trace("p_A")[:]
plt.figure( figsize = ( 10, 5 ) )
plt.hist( traces[ :, 0 ], histtype = 'stepfilled', bins = 30, alpha = 0.6,
          label = "Hierarchical Beta" )
plt.hist( siteA_distribution, histtype = 'stepfilled', bins = 30, alpha = 0.6,
label = "Bernoulli Model" )
plt.axvline( x = 0.032, color = "black", linestyle = "--" )
plt.legend(loc = "upper right")
plt.show()
"""
Explanation: Comparing the Two Methods
We have gone through two different ways to do A/B testing: one using Bernoulli distributions, and another using Beta-Binomial distributions. The Beta-Binomial hierarchical model is motivated by the problem of multiple comparison testing. Let's now compare the performance of the two methods by examining the posterior distribution each of them generates for Website A. A black vertical line at x = 0.032, the true rate, is included.
End of explanation
"""
# true rate : 0.032
# empirical rate : 0.027
# Hierarchical Beta-Binomial posterior's true rate
traces[ :, 0 ].mean()
"""
Explanation: In this case, the mass of the Hierarchical Beta-Binomial posterior is closer to the true rate than that of the Bernoulli model.
Why does the Hierarchical Beta-Binomial model appear to be more accurate in estimating the true rate? It comes down to the prior distributions used in each method. In the classical Bernoulli method, we used $Uniform(0,1)$ as our prior distribution. As mentioned earlier, this is an uninformative prior, as it assigns equal weight to all possible probabilities. The Beta prior, on the other hand, puts some of the probability mass towards the "truth", and thus we see a somewhat more accurate posterior estimate.
The other thing worth noticing: recall that Website A was observed to have a success rate of 0.027 and a true (unobserved) success rate of 0.032. Our Hierarchical Beta-Binomial posterior's estimate was about:
End of explanation
"""
# we can also plot the convergence diagnostics, as we did above
mcplot( mcmc1.trace("true_rates"), common_scale = False )
"""
Explanation: As we can see, the hierarchical model gives a much better estimate than the empirical rate. Again, this is because hierarchical models shrink the individual posteriors towards the family-wise posterior. (You can think of it as a special case of "regression to the mean".)
End of explanation
"""
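The shrinkage described above can be illustrated numerically with plain NumPy. This is only a sketch: the prior strength of 100 pseudo-counts is an arbitrary assumption for illustration, not a value fitted from the data as the hierarchical model does.

```python
import numpy as np

# Click and sign-up counts mirroring the five websites above.
trials = np.array([1055, 1057, 1065, 1039, 1046])
successes = np.array([28, 45, 69, 58, 60])

empirical = successes / trials
pooled = successes.sum() / trials.sum()

# A Beta(a, b) prior centred on the pooled rate: the posterior mean of each
# site is then a compromise between its empirical rate and the pooled rate,
# which is the shrinkage towards the family-wise posterior.
strength = 100.0  # prior pseudo-counts; an assumption for illustration
a = pooled * strength
b = (1.0 - pooled) * strength
shrunk = (successes + a) / (trials + a + b)

print(np.round(shrunk, 4))
```

Every shrunk estimate lands between a site's empirical rate and the pooled rate, so extreme sites are pulled towards the group mean.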
|
intel-analytics/analytics-zoo | docs/docs/colab-notebook/orca/quickstart/pytorch_lenet_mnist_data_creator_func.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Explanation: <a href="https://colab.research.google.com/github/intel-analytics/analytics-zoo/blob/master/docs/docs/colab-notebook/orca/quickstart/pytorch_lenet_mnist_data_creator_func.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2018 Analytics Zoo Authors.
End of explanation
"""
# Install jdk8
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
import os
# Set environment variable JAVA_HOME.
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
!update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
!java -version
"""
Explanation: Environment Preparation
Install Java 8
Run the cell on Google Colab to install JDK 1.8.
Note: if you run this notebook on your own machine, root permission is required when running the cell to install Java 8. (You may ignore this cell if Java 8 has already been set up on your computer.)
End of explanation
"""
import sys
# Target Python version for the conda environment
python_version = "3.7.10"
# Install Miniconda
!wget https://repo.continuum.io/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh
!chmod +x Miniconda3-4.5.4-Linux-x86_64.sh
!./Miniconda3-4.5.4-Linux-x86_64.sh -b -f -p /usr/local
# Update Conda
!conda install --channel defaults conda python=$python_version --yes
!conda update --channel defaults --all --yes
# Append to the sys.path
_ = (sys.path
.append(f"/usr/local/lib/python3.7/site-packages"))
os.environ['PYTHONHOME']="/usr/local"
"""
Explanation: Install Analytics Zoo
Conda is needed to prepare the Python environment for running this example.
Note: The following code cell is specific for setting up conda environment on Colab; for general conda installation, please refer to the install guide for more details.
End of explanation
"""
# Install latest pre-release version of Analytics Zoo
# Installing Analytics Zoo from pip will automatically install pyspark, bigdl, and their dependencies.
!pip install --pre analytics-zoo
# Install python dependencies
!pip install torch==1.7.1 torchvision==0.8.2
!pip install six cloudpickle
!pip install jep==3.9.0
"""
Explanation: You can install the latest pre-release version using pip install --pre analytics-zoo.
End of explanation
"""
# import necessary libraries and modules
from __future__ import print_function
import os
import argparse
from zoo.orca import init_orca_context, stop_orca_context
from zoo.orca import OrcaContext
"""
Explanation: Distributed PyTorch using Orca APIs
In this guide we will describe how to scale out PyTorch programs using Orca in 4 simple steps.
End of explanation
"""
# recommended to set it to True when running Analytics Zoo in Jupyter notebook.
OrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook).
cluster_mode = "local"
if cluster_mode == "local":
init_orca_context(cores=1, memory="2g") # run in local mode
elif cluster_mode == "k8s":
init_orca_context(cluster_mode="k8s", num_nodes=2, cores=4) # run on K8s cluster
elif cluster_mode == "yarn":
init_orca_context(
cluster_mode="yarn-client", cores=4, num_nodes=2, memory="2g",
driver_memory="10g", driver_cores=1,
conf={"spark.rpc.message.maxSize": "1024",
"spark.task.maxFailures": "1",
"spark.driver.extraJavaOptions": "-Dbigdl.failure.retryTimes=1"}) # run on Hadoop YARN cluster
"""
Explanation: Step 1: Init Orca Context
End of explanation
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
class LeNet(nn.Module):
def __init__(self):
super(LeNet, self).__init__()
self.conv1 = nn.Conv2d(1, 20, 5, 1)
self.conv2 = nn.Conv2d(20, 50, 5, 1)
self.fc1 = nn.Linear(4*4*50, 500)
self.fc2 = nn.Linear(500, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 4*4*50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=1)
model = LeNet()
model.train()
criterion = nn.NLLLoss()
lr = 0.001
adam = torch.optim.Adam(model.parameters(), lr)
"""
Explanation: This is the only place where you need to specify local or distributed mode. View Orca Context for more details.
Note: You should export HADOOP_CONF_DIR=/path/to/hadoop/conf/dir when you run on Hadoop YARN cluster.
Step 2: Define the Model
You may define your model, loss and optimizer in the same way as in any standard (single node) PyTorch program.
End of explanation
"""
import torch
from torchvision import datasets, transforms
torch.manual_seed(0)
dir='/tmp/dataset'
def train_loader_creator(config, batch_size):
train_loader = torch.utils.data.DataLoader(
datasets.MNIST(dir, train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=batch_size, shuffle=True)
return train_loader
def test_loader_creator(config, batch_size):
test_loader = torch.utils.data.DataLoader(
datasets.MNIST(dir, train=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=batch_size, shuffle=False)
return test_loader
"""
Explanation: Step 3: Define Train Dataset
You may use a Data Creator Function for your input data (as shown below), especially when the data size is very large.
End of explanation
"""
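Setting the framework aside, the data-creator pattern itself is simple: a function that receives a config dict and a batch size, and returns a freshly built loader on each worker. A framework-agnostic sketch (the names here are illustrative, not part of the Orca API):

```python
# Minimal, framework-agnostic sketch of the data-creator pattern.
def make_loader_creator(data):
    def creator(config, batch_size):
        # Build a fresh "loader" (here just a list of batches) on every call,
        # so each worker can construct its own copy.
        return [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    return creator

creator = make_loader_creator(list(range(10)))
batches = creator({}, 4)
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```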
from zoo.orca.learn.pytorch import Estimator
from zoo.orca.learn.metrics import Accuracy
est = Estimator.from_torch(model=model, optimizer=adam, loss=criterion, metrics=[Accuracy()])
"""
Explanation: Step 4: Fit with Orca Estimator
First, create an Estimator.
End of explanation
"""
from zoo.orca.learn.trigger import EveryEpoch
est.fit(data=train_loader_creator, epochs=1, validation_data=test_loader_creator, batch_size=320,
checkpoint_trigger=EveryEpoch())
"""
Explanation: Next, fit and evaluate using the Estimator.
End of explanation
"""
result = est.evaluate(data=test_loader_creator, batch_size=320)
for r in result:
print(r, ":", result[r])
"""
Explanation: Finally, evaluate using the Estimator.
End of explanation
"""
# stop orca context when program finishes
stop_orca_context()
"""
Explanation: The accuracy of this model has reached 98%.
End of explanation
"""
|
EvanBianco/striplog | tutorial/Striplog_object.ipynb | apache-2.0 | %matplotlib inline
import striplog
striplog.__version__
from striplog import Legend, Lexicon, Interval, Component
legend = Legend.default()
lexicon = Lexicon.default()
"""
Explanation: Striplog objects
This notebook looks at the main striplog object. For the basic objects it depends on, see Basic objects.
First, import anything we might need.
End of explanation
"""
from striplog import Striplog
print(Striplog.__doc__)
"""
Explanation: Making a striplog
End of explanation
"""
imgfile = "M-MG-70_14.3_135.9.png"
striplog = Striplog.from_img(imgfile, 14.3, 135.9, legend=legend)
striplog
striplog.thinnest(n=7)
striplog.thickest(n=5).plot(legend=legend)
"""
Explanation: Here is one of the images we will convert into striplogs:
<img src="M-MG-70_14.3_135.9.png" width=50 style="float:left" />
End of explanation
"""
if striplog.find_gaps():
print('Gaps')
else:
print("No gaps!")
"""
Explanation: Making, finding and annealing gaps
This striplog doesn't have any gaps...
End of explanation
"""
del striplog[[2, 7, 20]]
striplog.find_gaps()
"""
Explanation: But we can make some by deleting indices:
End of explanation
"""
striplog.find_gaps(index=True)
striplog.thinnest(1)
striplog.prune(limit=1)
print(len(striplog))
striplog.plot(legend=legend)
striplog.anneal()
striplog.plot(legend=legend)
striplog.find_gaps()
"""
Explanation: We can also get a list of the indices of intervals that are followed by gaps (i.e. are directly above gaps in 'depth' order, or directly below gaps in 'elevation' order).
End of explanation
"""
print(striplog[:5])
striplog.top
"""
Explanation: Representations of a striplog
There are several ways to inspect a striplog:
print prints the contents of the striplog
top shows us a list of the primary lithologies in the striplog, in order of cumulative thickness
plot makes a plot of the striplog with coloured bars
End of explanation
"""
depth = 0
list_of_int = []
for i in striplog.top:
list_of_int.append(Interval(depth, depth+i[1], components=[i[0]]))
depth += i[1]
t = Striplog(list_of_int)
t.plot(legend)
"""
Explanation: It's easy enough to visualize this. Perhaps this should be a method...
End of explanation
"""
striplog.plot()
"""
Explanation: Plot
If you call plot() on a Striplog you'll get random colours (one per rock type in the striplog) and a preset aspect ratio of 10.
End of explanation
"""
striplog.plot(legend, ladder=True, aspect=5, interval=5)
"""
Explanation: For more control, you can pass some parameters. You'll probably always want to pass a legend.
End of explanation
"""
print(striplog[:3])
print(striplog[-1].primary.summary())
for i in striplog[:5]:
print(i.summary())
len(striplog)
import numpy as np
np.array([d.top for d in striplog[5:13]])
"""
Explanation: Manipulating a striplog
Again, the object is indexable and iterable.
End of explanation
"""
indices = [2,4,6]
print(striplog[indices])
"""
Explanation: You can even index into it with an iterable, like a list of indices.
End of explanation
"""
striplog.find('sandstone')
striplog.find('sandstone').top
striplog.find('sandstone').cum
print(striplog.find('sandstone'))
striplog.find('sandstone').plot()
"""
Explanation: Querying the striplog
This results in a new Striplog, containing only the intervals requested.
End of explanation
"""
rock = striplog.find('sandstone')[1].components[0]
rock
"""
Explanation: Let's ask for the rock we just found by searching.
End of explanation
"""
striplog.find(rock).plot(legend)
rock in striplog
"""
Explanation: We can also search for a rock...
End of explanation
"""
striplog.depth(90).primary
"""
Explanation: And we can ask what is at a particular depth.
End of explanation
"""
for r in reversed(striplog[:5]):
print(r)
"""
Explanation: Slicing and indexing
End of explanation
"""
striplog[1:3]
rock2 = Component({'lithology':'shale', 'colour':'grey'})
iv = Interval(top=300, base=350, description='', components=[rock, rock2])
striplog[-3:-1] + Striplog([iv])
"""
Explanation: Slicing returns a new striplog:
End of explanation
"""
print(striplog.to_las3())
striplog.source
csv_string = """ 200.000, 230.329, Anhydrite
230.329, 233.269, Grey vf-f sandstone
233.269, 234.700, Anhydrite
234.700, 236.596, Dolomite
236.596, 237.911, Red siltstone
237.911, 238.723, Anhydrite
238.723, 239.807, Grey vf-f sandstone
239.807, 240.774, Red siltstone
240.774, 241.122, Dolomite
241.122, 241.702, Grey siltstone
241.702, 243.095, Dolomite
243.095, 246.654, Grey vf-f sandstone
246.654, 247.234, Dolomite
247.234, 255.435, Grey vf-f sandstone
255.435, 258.723, Grey siltstone
258.723, 259.729, Dolomite
259.729, 260.967, Grey siltstone
260.967, 261.354, Dolomite
261.354, 267.041, Grey siltstone
267.041, 267.350, Dolomite
267.350, 274.004, Grey siltstone
274.004, 274.313, Dolomite
274.313, 294.816, Grey siltstone
294.816, 295.397, Dolomite
295.397, 296.286, Limestone
296.286, 300.000, Volcanic
"""
strip2 = Striplog.from_csv(csv_string, lexicon=lexicon)
"""
Explanation: Read or write CSV or LAS3
End of explanation
"""
Component.from_text('Volcanic', lexicon)
Component.from_text('Grey vf-f sandstone', lexicon)
las3 = """~Lithology_Parameter
LITH . : Lithology source {S}
LITHD. MD : Lithology depth reference {S}
~Lithology_Definition
LITHT.M : Lithology top depth {F}
LITHB.M : Lithology base depth {F}
LITHN. : Lithology name {S}
~Lithology_Data | Lithology_Definition
200.000, 230.329, Anhydrite
230.329, 233.269, Grey vf-f sandstone
233.269, 234.700, Anhydrite
234.700, 236.596, Dolomite
236.596, 237.911, Red siltstone
237.911, 238.723, Anhydrite
238.723, 239.807, Grey vf-f sandstone
239.807, 240.774, Red siltstone
240.774, 241.122, Dolomite
241.122, 241.702, Grey siltstone
241.702, 243.095, Dolomite
243.095, 246.654, Grey vf-f sandstone
246.654, 247.234, Dolomite
247.234, 255.435, Grey vf-f sandstone
255.435, 258.723, Grey siltstone
258.723, 259.729, Dolomite
259.729, 260.967, Grey siltstone
260.967, 261.354, Dolomite
261.354, 267.041, Grey siltstone
267.041, 267.350, Dolomite
267.350, 274.004, Grey siltstone
274.004, 274.313, Dolomite
274.313, 294.816, Grey siltstone
294.816, 295.397, Dolomite
295.397, 296.286, Limestone
296.286, 300.000, Volcanic
"""
strip3 = Striplog.from_las3(las3, lexicon)
strip3
strip3.top
"""
Explanation: Notice the warning about a missing term in the lexicon.
End of explanation
"""
tops_csv = """100, Escanilla Fm.
200, Sobrarbe Fm.
350, San Vicente Fm.
500, Cretaceous
"""
tops = Striplog.from_csv(tops_csv)
print(tops)
tops.depth(254.0)
"""
Explanation: Handling tops
I recommend treating tops as intervals, not as point data.
End of explanation
"""
data_csv = """1200, 6.4
1205, 7.3
1210, 8.2
1250, 9.2
1275, 4.3
1300, 2.2
"""
data = Striplog.from_csv(data_csv, points=True)
print(data)
"""
Explanation: Handling point data
Some things really are point data.
End of explanation
"""
import numpy as np
from matplotlib import pyplot as plt
import seaborn; seaborn.set()
# Assumption: derive the components and their cumulative thicknesses from
# striplog.top, which yields (component, thickness) pairs.
comps = [t[0] for t in striplog.top]
counts = [t[1] for t in striplog.top]
fmt = '{colour}\n{lithology}\n{grainsize}'
labels = [c.summary(fmt=fmt) for c in comps]
colours = [legend.get_colour(c) for c in comps]
fig, ax = plt.subplots()
ind = np.arange(len(comps))
bars = ax.bar(ind, counts, align='center')
ax.set_xticks(ind)
ax.set_xticklabels(labels)
for b, c in zip(bars, colours):
b.set_color(c)
plt.show()
"""
Explanation: One day, when we have a use case, we can do something nice with this, like treat it as numerical data, and make a plot for it. We need an elegant way to get that number into a 'rock', like {'x': 6.4}, etc.
Hacking histogram
End of explanation
"""
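In the meantime, here is a rough sketch of what treating the point data numerically might look like: parsing the CSV by hand and interpolating a value at an arbitrary depth. This is an illustration only, not a striplog API.

```python
import numpy as np

data_csv = """1200, 6.4
1205, 7.3
1210, 8.2
1250, 9.2
1275, 4.3
1300, 2.2
"""

rows = [line.split(',') for line in data_csv.strip().splitlines()]
depths = [float(d) for d, v in rows]
values = [float(v) for d, v in rows]

# Linearly interpolate the point data at an arbitrary depth.
print(np.interp(1220.0, depths, values))  # approximately 8.45
```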
|
bloomberg/bqplot | examples/Marks/Pyplot/Pie.ipynb | apache-2.0 | data = np.random.rand(3)
fig = plt.figure(animation_duration=1000)
pie = plt.pie(data, display_labels="outside", labels=list(string.ascii_uppercase))
fig
"""
Explanation: Basic Pie Chart
End of explanation
"""
n = np.random.randint(1, 10)
pie.sizes = np.random.rand(n)
"""
Explanation: Update Data
End of explanation
"""
with pie.hold_sync():
pie.display_values = True
pie.values_format = ".1f"
"""
Explanation: Display Values
End of explanation
"""
pie.sort = True
"""
Explanation: Enable sort
End of explanation
"""
pie.selected_style = {"opacity": 1, "stroke": "white", "stroke-width": 2}
pie.unselected_style = {"opacity": 0.2}
pie.selected = [1]
pie.selected = None
"""
Explanation: Set different styles for selected slices
End of explanation
"""
pie.label_color = "Red"
pie.font_size = "20px"
pie.font_weight = "bold"
"""
Explanation: For more on piechart interactions, see the Mark Interactions notebook
Modify label styling
End of explanation
"""
fig1 = plt.figure(animation_duration=1000)
pie1 = plt.pie(np.random.rand(6), inner_radius=0.05)
fig1
"""
Explanation: Update pie shape and style
End of explanation
"""
# As of now, the radius sizes are absolute, in pixels
with pie1.hold_sync():
pie1.radius = 150
pie1.inner_radius = 100
# Angles are in radians, 0 being the top vertical
with pie1.hold_sync():
pie1.start_angle = -90
pie1.end_angle = 90
"""
Explanation: Change pie dimensions
End of explanation
"""
pie1.y = 0.1
pie1.x = 0.6
pie1.radius = 180
"""
Explanation: Move the pie around
x and y attributes control the position of the pie in the figure.
If no scales are passed for x and y, they are taken in absolute
figure coordinates, between 0 and 1.
End of explanation
"""
pie1.stroke = "brown"
pie1.colors = ["orange", "darkviolet"]
pie1.opacities = [0.1, 1]
fig1
"""
Explanation: Change slice styles
Pie slice colors cycle through the colors and opacities attributes, as with the Lines mark.
End of explanation
"""
from bqplot import ColorScale, ColorAxis
n = 7
size_data = np.random.rand(n)
color_data = np.random.randn(n)
fig2 = plt.figure()
plt.scales(scales={"color": ColorScale(scheme="Reds")})
pie2 = plt.pie(size_data, color=color_data)
fig2
"""
Explanation: Represent an additional dimension using Color
The Pie mark allows its colors to be determined by data passed to the color attribute.
A ColorScale with the desired color scheme must also be passed.
End of explanation
"""
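Under the hood, a colour scale is essentially a normalisation of the data onto the scheme's range. A toy sketch of that mapping (not the actual bqplot implementation):

```python
# Toy linear normalisation, as a stand-in for what a ColorScale does:
# map each value into [0, 1], which then indexes into a colour scheme.
def normalise(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

cols = normalise([2.0, 3.0, 5.0])
print(cols)  # [0.0, 0.3333333333333333, 1.0]
```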
|
tritemio/multispot_paper | out_notebooks/usALEX-5samples-PR-raw-out-DexDem-17d.ipynb | mit | ph_sel_name = "DexDem"
data_id = "17d"
# ph_sel_name = "all-ph"
# data_id = "7d"
"""
Explanation: Executed: Mon Mar 27 11:35:09 2017
Duration: 11 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
"""
from fretbursts import *
init_notebook()
from IPython.display import display
"""
Explanation: Load software and filenames definitions
End of explanation
"""
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
"""
Explanation: Data folder:
End of explanation
"""
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 datatset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
ph_sel_map = {'all-ph': Ph_sel('all'), 'Dex': Ph_sel(Dex='DAem'),
'DexDem': Ph_sel(Dex='Dem')}
ph_sel = ph_sel_map[ph_sel_name]
data_id, ph_sel_name
"""
Explanation: List of data files:
End of explanation
"""
d = loader.photon_hdf5(filename=files_dict[data_id])
"""
Explanation: Data load
Initial loading of the data:
End of explanation
"""
d.ph_times_t, d.det_t
"""
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
"""
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
"""
Explanation: We need to define some parameters: the donor and acceptor channels, the alternation period, and the donor and acceptor excitation windows:
End of explanation
"""
plot_alternation_hist(d)
"""
Explanation: We should check if everything is OK with an alternation histogram:
End of explanation
"""
loader.alex_apply_period(d)
"""
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
"""
d
"""
Explanation: Measurements infos
All the measurement data is in the d variable. We can print it:
End of explanation
"""
d.time_max
"""
Explanation: Or check the measurements duration:
End of explanation
"""
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
"""
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
"""
bs_kws = dict(L=10, m=10, F=7, ph_sel=ph_sel)
d.burst_search(**bs_kws)
th1 = 30
ds = d.select_bursts(select_bursts.size, th1=30)
bursts = (bext.burst_data(ds, include_bg=True, include_ph_index=True)
.round({'E': 6, 'S': 6, 'bg_d': 3, 'bg_a': 3, 'bg_aa': 3, 'nd': 3, 'na': 3, 'naa': 3, 'nda': 3, 'nt': 3, 'width_ms': 4}))
bursts.head()
burst_fname = ('results/bursts_usALEX_{sample}_{ph_sel}_F{F:.1f}_m{m}_size{th}.csv'
.format(sample=data_id, th=th1, **bs_kws))
burst_fname
bursts.to_csv(burst_fname)
assert d.dir_ex == 0
assert d.leakage == 0
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret
d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print ('D-only fraction:', d_only_frac)
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
"""
Explanation: Burst search and selection
End of explanation
"""
def hsm_mode(s):
"""
Half-sample mode (HSM) estimator of `s`.
`s` is a sample from a continuous distribution with a single peak.
Reference:
Bickel, Fruehwirth (2005). arXiv:math/0505419
"""
s = memoryview(np.sort(s))
i1 = 0
i2 = len(s)
while i2 - i1 > 3:
n = (i2 - i1) // 2
w = [s[n-1+i+i1] - s[i+i1] for i in range(n)]
i1 = w.index(min(w)) + i1
i2 = i1 + n
if i2 - i1 == 3:
if s[i1+1] - s[i1] < s[i2] - s[i1 + 1]:
i2 -= 1
elif s[i1+1] - s[i1] > s[i2] - s[i1 + 1]:
i1 += 1
else:
i1 = i2 = i1 + 1
return 0.5*(s[i1] + s[i2])
E_pr_do_hsm = hsm_mode(ds_do.E[0])
print ("%s: E_peak(HSM) = %.2f%%" % (ds.ph_sel, E_pr_do_hsm*100))
"""
Explanation: Donor Leakage fit
Half-Sample Mode
Fit the peak using the mode computed with the half-sample algorithm (Bickel 2005).
End of explanation
"""
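For intuition, the half-sample idea can be reduced to a toy standalone version (a sketch for illustration, not the estimator used above): repeatedly keep the contiguous half of the sorted sample with the smallest range, then average what is left.

```python
import random

def half_sample_mode(x):
    # Toy half-sample mode: keep zooming into the densest half of the data.
    s = sorted(x)
    while len(s) > 3:
        n = (len(s) + 1) // 2
        i = min(range(len(s) - n + 1), key=lambda j: s[j + n - 1] - s[j])
        s = s[i:i + n]
    return sum(s) / len(s)

random.seed(0)
# A sharp peak at 0.1 sitting on a broad uniform background.
data = ([random.gauss(0.1, 0.02) for _ in range(2000)]
        + [random.uniform(-1.0, 1.0) for _ in range(200)])
print(half_sample_mode(data))  # close to 0.1
```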
E_fitter = bext.bursts_fitter(ds_do, weights=None)
E_fitter.histogram(bins=np.arange(-0.2, 1, 0.03))
E_fitter.fit_histogram(model=mfit.factory_gaussian())
E_fitter.params
res = E_fitter.fit_res[0]
res.params.pretty_print()
E_pr_do_gauss = res.best_values['center']
E_pr_do_gauss
"""
Explanation: Gaussian Fit
Fit the histogram with a gaussian:
End of explanation
"""
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]
E_fitter.calc_kde(bandwidth=bandwidth)
E_fitter.find_kde_max(E_ax, xmin=E_range_do[0], xmax=E_range_do[1])
E_pr_do_kde = E_fitter.kde_max_pos[0]
E_pr_do_kde
"""
Explanation: KDE maximum
End of explanation
"""
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, plot_model=False)
plt.axvline(E_pr_do_hsm, color='m', label='HSM')
plt.axvline(E_pr_do_gauss, color='k', label='Gauss')
plt.axvline(E_pr_do_kde, color='r', label='KDE')
plt.xlim(0, 0.3)
plt.legend()
print('Gauss: %.2f%%\n KDE: %.2f%%\n HSM: %.2f%%' %
(E_pr_do_gauss*100, E_pr_do_kde*100, E_pr_do_hsm*100))
"""
Explanation: Leakage summary
End of explanation
"""
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
"""
Explanation: Burst size distribution
End of explanation
"""
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))
E_fitter.fit_res[0].params.pretty_print()
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
"""
Explanation: Fret fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
"""
ds_fret.fit_E_m(weights='size')
"""
Explanation: Weighted mean of $E$ of each burst:
End of explanation
"""
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
"""
Explanation: Gaussian fit (no weights):
End of explanation
"""
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr
"""
Explanation: Gaussian fit (using burst size as weights):
End of explanation
"""
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr
S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr
"""
Explanation: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
"""
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
"""
Explanation: The maximum likelihood fit for a Gaussian population is simply the sample mean and standard deviation:
End of explanation
"""
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
"""
Explanation: Computing the weighted mean and weighted standard deviation we get:
End of explanation
"""
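As a quick sanity check of the formulas above: with equal weights, the weighted mean and weighted standard deviation reduce to the ordinary sample mean and (population) standard deviation.

```python
import numpy as np

x = np.array([0.2, 0.5, 0.9, 0.4])
w = np.ones_like(x)

# Same formulas as above, with uniform weights.
mean_w = np.dot(w, x) / w.sum()
std_w = np.sqrt(np.dot(w, (x - mean_w) ** 2) / w.sum())

assert np.isclose(mean_w, x.mean())
assert np.isclose(std_w, x.std())
```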
sample = data_id
"""
Explanation: Save data to file
End of explanation
"""
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '
'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '
'E_pr_do_kde E_pr_do_hsm E_pr_do_gauss nt_mean\n')
"""
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
"""
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-%s.csv' % ph_sel_name, 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
"""
Explanation: This is just a trick to format the different variables:
End of explanation
"""
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-1', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: UHH
Source ID: SANDBOX-1
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variables in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
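For context only (this cell is not part of the ES-DOC template): when a model does not use TEOS-10, a common alternative for the seawater freezing point is the UNESCO (Millero 1978) practical-salinity formula. A sketch, with the function name chosen here purely for illustration:

```python
def ocean_freezing_point(S, p=0.0):
    """Seawater freezing point (deg C) from practical salinity S (PSU)
    and pressure p (dbar), UNESCO / Millero (1978) formula."""
    return -0.0575*S + 1.710523e-3*S**1.5 - 2.154996e-4*S**2 - 7.53e-4*p

tf_surface = ocean_freezing_point(35.0)  # about -1.92 deg C at S=35, p=0
```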
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multiple layers, specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories, specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories, specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. where there is no explicit ITD but a distribution is assumed and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology: what is the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply the fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
joshspeagle/dynesty | demos/Examples -- LogGamma.ipynb | mit | # system functions that are always useful to have
import time, sys, os
import warnings
# basic numeric setup
import numpy as np
# inline plotting
%matplotlib inline
# plotting
import matplotlib
from matplotlib import pyplot as plt
# seed the random number generator
rstate = np.random.default_rng(1028)
# re-defining plotting defaults
from matplotlib import rcParams
rcParams.update({'xtick.major.pad': '7.0'})
rcParams.update({'xtick.major.size': '7.5'})
rcParams.update({'xtick.major.width': '1.5'})
rcParams.update({'xtick.minor.pad': '7.0'})
rcParams.update({'xtick.minor.size': '3.5'})
rcParams.update({'xtick.minor.width': '1.0'})
rcParams.update({'ytick.major.pad': '7.0'})
rcParams.update({'ytick.major.size': '7.5'})
rcParams.update({'ytick.major.width': '1.5'})
rcParams.update({'ytick.minor.pad': '7.0'})
rcParams.update({'ytick.minor.size': '3.5'})
rcParams.update({'ytick.minor.width': '1.0'})
rcParams.update({'font.size': 30})
import dynesty
"""
Explanation: LogGamma
Setup
First, let's set up some environmental dependencies. These just make the numerics easier and adjust some of the plotting defaults to make things more legible.
End of explanation
"""
from scipy.stats import loggamma, norm
def lng(x):
lng1 = loggamma.logpdf(x[0], c=1., loc=1./3., scale=1./30.)
lng2 = loggamma.logpdf(x[0], c=1., loc=2./3., scale=1./30.)
return np.logaddexp(lng1, lng2) + np.log(0.5)
def lnn(x):
lnn1 = norm.logpdf(x[1], loc=1./3., scale=1./30.)
lnn2 = norm.logpdf(x[1], loc=2./3., scale=1./30.)
return np.logaddexp(lnn1, lnn2) + np.log(0.5)
def lnd_i(x_i, i):
if i >= 3:
if i <= (ndim + 2) / 2.:
return loggamma.logpdf(x_i, c=1., loc=2./3., scale=1./30.)
else:
return norm.logpdf(x_i, loc=2./3., scale=1./30.)
else:
return 0.
def lnd(x):
return sum([lnd_i(x_i, i) for i, x_i in enumerate(x)])
def loglike(x):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
return lng(x) + lnn(x) + lnd(x)
# define the prior transform
def prior_transform(x):
return x
# plot the log-likelihood surface
plt.figure(figsize=(10., 10.))
axes = plt.axes(aspect=1)
xx, yy = np.meshgrid(np.linspace(0., 1., 200),
np.linspace(0., 1., 200))
logL = np.array([loglike(np.array([x, y]))
for x, y in zip(xx.flatten(), yy.flatten())])
L = np.exp(logL.reshape(xx.shape))
axes.contourf(xx, yy, L, 200, cmap=plt.cm.Purples)
plt.title('Likelihood Surface')
plt.xlabel(r'$x$')
plt.ylabel(r'$y$');
"""
Explanation: The multi-modal LogGamma distribution is useful for stress testing the effectiveness of bounding distributions. It is defined as:
$$
g_a \sim \textrm{LogGamma}(1, 1/3, 1/30) \\
g_b \sim \textrm{LogGamma}(1, 2/3, 1/30) \\
n_c \sim \textrm{Normal}(1/3, 1/30) \\
n_d \sim \textrm{Normal}(2/3, 1/30) \\
d_i \sim \textrm{LogGamma}(1, 2/3, 1/30) ~\textrm{if}~ i \leq \frac{d+2}{2} \\
d_i \sim \textrm{Normal}(2/3, 1/30) ~\textrm{if}~ i > \frac{d+2}{2} \\
\mathcal{L}_g = \frac{1}{2} \left( g_a(x_1) + g_b(x_1) \right) \\
\mathcal{L}_n = \frac{1}{2} \left( n_c(x_2) + n_d(x_2) \right) \\
\ln \mathcal{L} \equiv \ln \mathcal{L}_g + \ln \mathcal{L}_n + \sum_{i=3}^{d} \ln d_i(x_i)
$$
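As a quick numerical sanity check on this construction: adding $\ln(1/2)$ after a `logaddexp` is the log-space equivalent of averaging the two component densities. A minimal sketch, with a hand-rolled normal log-pdf standing in for the $n_c$, $n_d$ components:

```python
import numpy as np

def norm_logpdf(x, loc, scale):
    # log of the normal density, written out by hand to keep the sketch self-contained
    return -0.5 * ((x - loc) / scale) ** 2 - np.log(scale * np.sqrt(2.0 * np.pi))

x = 0.4
l1 = norm_logpdf(x, 1.0 / 3.0, 1.0 / 30.0)
l2 = norm_logpdf(x, 2.0 / 3.0, 1.0 / 30.0)
log_mix = np.logaddexp(l1, l2) + np.log(0.5)   # log of 0.5 * (p1 + p2)
direct = 0.5 * (np.exp(l1) + np.exp(l2))
```

Working in log space this way avoids underflow when the component densities are tiny, which matters far out in the tails.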
End of explanation
"""
ndim = 2
nlive = 250
sampler = dynesty.NestedSampler(loglike, prior_transform, ndim=ndim,
bound='multi', sample='rwalk',
walks=100, nlive=nlive, rstate=rstate)
sampler.run_nested(dlogz=0.01)
res = sampler.results
ndim = 10
nlive = 250
sampler = dynesty.NestedSampler(loglike, prior_transform, ndim=ndim,
bound='multi', sample='rwalk',
walks=100, nlive=nlive, rstate=rstate)
sampler.run_nested(dlogz=0.01)
res2 = sampler.results
"""
Explanation: We will now sample from this distribution using 'multi' bounding and 'rwalk' sampling in $d=2$ and $d=10$ dimensions.
End of explanation
"""
from dynesty import plotting as dyplot
# plot 2-D
fig, axes = dyplot.runplot(res, color='blue',
lnz_truth=0., truth_color='black')
fig.tight_layout()
fig, axes = plt.subplots(2, 2, figsize=(14, 8))
fig, axes = dyplot.traceplot(res, truths=[[1./3., 2./3.], [1./3., 2./3.]],
quantiles=None, fig=(fig, axes))
fig.tight_layout()
fig, axes = plt.subplots(2, 2, figsize=(10, 10))
fig, axes = dyplot.cornerplot(res, truths=[[1./3., 2./3.], [1./3., 2./3.]],
quantiles=None, fig=(fig, axes))
# plot 10-D
fig, axes = dyplot.runplot(res2, color='red',
lnz_truth=0., truth_color='black')
fig.tight_layout()
"""
Explanation: Now let's see how we did!
End of explanation
"""
|
PLN-FaMAF/DeepLearningEAIA | deep_learning_tutorial_1.ipynb | bsd-3-clause | import numpy
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.datasets import mnist
"""
Explanation: Express Deep Learning in Python - Part 1
Do you have everything ready? Check the part 0!
How fast can you build a MLP?
In this first part we will see how to implement the basic components of a MultiLayer Perceptron (MLP) classifier, most commonly known as a Neural Network. We will be working with Keras: a very simple library for deep learning.
At this point, you may know how machine learning in general is applied and have some intuitions about how deep learning works, and more importantly, why it works. Now it's time to run some experiments, and for that you need to be as quick and flexible as possible. Keras is an ideal tool for prototyping and making your first approximations to a Machine Learning problem. On the one hand, Keras is integrated with two very powerful backends that support GPU computation, Tensorflow and Theano. On the other hand, it has a level of abstraction high enough to be simple to understand and easy to use. For example, it uses a very similar interface to the sklearn library that you have seen before, with fit and predict methods.
Now let's get to work with an example:
1 - The libraries
First, let's check that we have installed everything we need for this tutorial:
End of explanation
"""
batch_size = 128
num_classes = 10
epochs = 10
TRAIN_EXAMPLES = 60000
TEST_EXAMPLES = 10000
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# reshape the dataset to convert the examples from 2D matrixes to 1D arrays.
x_train = x_train.reshape(60000, 28*28)
x_test = x_test.reshape(10000, 28*28)
# to make quick runs, select a smaller set of images.
train_mask = numpy.random.choice(x_train.shape[0], TRAIN_EXAMPLES, replace=False)
x_train = x_train[train_mask, :].astype('float32')
y_train = y_train[train_mask]
test_mask = numpy.random.choice(x_test.shape[0], TEST_EXAMPLES, replace=False)
x_test = x_test[test_mask, :].astype('float32')
y_test = y_test[test_mask]
# normalize the input
x_train /= 255
x_test /= 255
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
"""
Explanation: 2 - The dataset
For this quick tutorial we will use the (very popular) MNIST dataset. This is a dataset of 70K images of handwritten digits. Our task is to recognize which digit is displayed in the image: a classification problem. You have seen in previous courses how to train and evaluate a classifier, so we won't go into further detail about supervised learning.
The input to the MLP classifier is going to be images of 28x28 pixels represented as matrices. The output will be one of ten classes (0 to 9), representing the predicted number written in the image.
End of explanation
"""
model = Sequential()
# Input to hidden layer
model.add(Dense(512, activation='relu', input_shape=(784,)))
# Hidden to output layer
model.add(Dense(10, activation='softmax'))
"""
Explanation: 3 - The model
The concept of Deep Learning is very broad, but the core of it is the use of classifiers with multiple hidden layers of neurons, or smaller classifiers. We all know the classical image of the simplest possible deep model: a neural network with a single hidden layer.
credits http://www.extremetech.com/wp-content/uploads/2015/07/NeuralNetwork.png
In theory, this model can represent any function (TODO: add a citation here). We will see how to implement this network in Keras, and during the second part of this tutorial how to add more features to create a deep and powerful classifier.
First, Deep Learning models are concatenations of Layers. This is represented in Keras with the Sequential model. We create the Sequential instance as an "empty carcass" and then we fill it with different layers.
The most basic type of Layer is the Dense layer, where each neuron in the input is connected to each neuron in the following layer, like we can see in the image above. Internally, a Dense layer has two variables: a matrix of weights and a vector of biases, but the beauty of Keras is that you don't need to worry about them. All the variables will be correctly created, initialized, trained and possibly regularized for you.
Each layer needs to know or be able to calculate at least three things:
The size of the input: the number of neurons in the incoming layer. For the first layer this corresponds to the size of each example in our dataset. The next layers can calculate their input size using the output of the previous layer, so we generally don't need to tell them this.
The type of activation: this is the function that is applied to the output of each neuron. Will talk in detail about this later.
The size of the output: the number of neurons in the next layer.
End of explanation
"""
model.summary()
"""
Explanation: We have successfully build a Neural Network! We can print a description of our architecture using the following command:
End of explanation
"""
model.compile(loss='categorical_crossentropy',
optimizer=keras.optimizers.SGD(),
metrics=['accuracy'])
"""
Explanation: Compiling a model in Keras
A very appealing aspect of Deep Learning frameworks is that they solve the implementation of complex algorithms such as Backpropagation. For those with some numerical optimization notions, minimization algorithms often involve the calculation of first derivatives. Neural Networks are huge functions full of non-linearities, and differentiating them is a... nightmare. For this reason, models need to be "compiled". In this stage, the backend builds complex computational graphs, and we don't have to worry about derivatives or gradients.
In Keras, a model can be compiled with the method .compile(). The method takes two parameters: loss and optimizer. The loss is the function that calculates how much error we have in each prediction example, and there are a lot of implemented alternatives ready to use. We will talk more about this later; for now, we use the standard categorical crossentropy. As you can see, we can simply pass a string with the name of the function and Keras will find the implementation for us.
The optimizer is the algorithm to minimize the value of the loss function. Again, Keras has many optimizers available. The basic one is the Stochastic Gradient Descent.
We pass a third argument to the compile method: the metric. Metrics are measures or statistics that allow us to keep track of the classifier's performance. They are similar to the loss, but the results of the metrics are not used by the optimization algorithm. Besides, metrics are always comparable, while the loss function can take arbitrary values depending on your problem.
Keras will calculate metrics and loss both on the training and the validation dataset. That way, we can monitor how other performance metrics vary when the loss is optimized and detect anomalies like overfitting.
End of explanation
"""
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model).create(prog='dot', format='svg'))
"""
Explanation: [OPTIONAL] We can now visualize the architecture of our model using the vis_util tools. It's a very schematic view, but you can check it's not that different from the image we saw above (and that we intended to replicate).
If you can't execute this step don't worry, you can still finish the tutorial. This step requires graphviz and pydotplus libraries.
End of explanation
"""
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,
verbose=1, validation_data=(x_test, y_test));
"""
Explanation: Training
Once the model is compiled, everything is ready to train the classifier. Keras' Sequential model has a similar interface to the sklearn library that you have seen before, with fit and predict methods. As usual, we need to pass our training examples and their corresponding labels. Other parameters needed to train a neural network are the size of the batch and the number of epochs. We have two ways of specifying a validation dataset: we can pass the tuple of values and labels directly with the validation_data parameter, or we can pass a proportion to the validation_split argument and Keras will split the training dataset for us.
To correctly train our model we need to pass two important parameters to the fit function:
* batch_size: is the number of examples to use in each "minibatch" iteration of the Stochastic Gradient Descent algorithm. This is necessary for most optimization algorithms. The size of the batch is important because it defines how fast the algorithm will perform each iteration and also how much memory will be used to load each batch (possibly in the GPU).
* epochs: is the number of passes through the entire dataset. We need enough epochs for the classifier to converge, but we need to stop before the classifier starts overfitting.
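For concreteness, with the MNIST numbers used above the minibatch bookkeeping works out as follows (a small sketch, not part of the tutorial's code):

```python
import math

n_train = 60000     # training examples, as above
batch_size = 128
epochs = 10

# One epoch = one full pass over the data, split into minibatches;
# the last batch may be smaller, hence the ceiling.
steps_per_epoch = math.ceil(n_train / batch_size)
total_updates = steps_per_epoch * epochs
```

So each epoch performs 469 SGD updates, and the full training run performs 4690.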
End of explanation
"""
import pandas
pandas.DataFrame(history.history)
"""
Explanation: We have trained our model!
Additionally, Keras has printed out a lot of information about the training, thanks to the parameter verbose=1 that we passed to the fit function. We can see how much time each epoch took, and the value of the loss and metrics on the training and validation datasets. The same information is stored in the output of the fit method, which sadly is not well documented. We can see it in a pretty table with pandas.
End of explanation
"""
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
"""
Explanation: Why is this useful? This will give you insight into how well your network is optimizing the loss, and how much it's actually learning. When training, you need to keep track of two things:
Your network is actually learning. This means your training loss is decreasing on average. If it's going up, or it's stuck for more than a couple of epochs, it is safe to stop your training and try again.
Your network is not overfitting. It's normal to have a gap between the validation and the training metrics, but they should decrease at more or less the same rate. If you see that your metrics for training are getting better but your validation metrics are getting worse, it is a good point to stop and fix your overfitting problem.
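The stopping rule just described can be sketched as a small helper. This is a deliberate simplification of what you would do in practice (Keras ships its own version as the keras.callbacks.EarlyStopping callback):

```python
def should_stop(val_losses, patience=3):
    # Stop if the validation loss has not improved on its best value
    # for `patience` consecutive epochs.
    if len(val_losses) <= patience:
        return False
    best_so_far = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_so_far
```

Feeding it the history.history['val_loss'] list after each epoch would flag the moment validation performance starts degrading while training loss keeps improving.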
Evaluation
Keras gives us a very useful method to evaluate the current performance called evaluate (surprise!). Evaluate will return the value of the loss function and all the metrics that we pass to the model when calling compile.
End of explanation
"""
prediction = model.predict_classes(x_test)
import seaborn as sns
from sklearn.metrics import confusion_matrix
sns.set_style('white')
sns.set_palette('colorblind')
matrix = confusion_matrix(numpy.argmax(y_test, 1), prediction)
figure = sns.heatmap(matrix / matrix.astype(numpy.float).sum(axis=1),
xticklabels=range(10), yticklabels=range(10),
cmap=sns.cubehelix_palette(8, as_cmap=True))
"""
Explanation: As you can see, using only 10 training epochs we get a very surprising accuracy in the training and test dataset. If you want to take a deeper look into your model, you can obtain the predictions as a vector and then use general purpose tools to explore the results. For example, we can plot the confusion matrix to see the most common errors.
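One handy number to extract from an unnormalized confusion matrix is the overall accuracy: the trace (correct predictions, on the diagonal) divided by the total count. A small sketch with made-up counts for a hypothetical 3-class problem:

```python
import numpy as np

# Hypothetical 3-class confusion matrix: rows = true class, columns = predicted class.
matrix = np.array([[50,  2,  0],
                   [ 3, 45,  1],
                   [ 0,  4, 95]])
accuracy = np.trace(matrix) / matrix.sum()  # correct predictions / total examples
```

The off-diagonal cells then tell you which specific class confusions account for the remaining error.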
End of explanation
"""
|
rubensfernando/mba-analytics-big-data | Python/2016-07-29/aula4-parte3-tratamento-excecoes.ipynb | mit | 10 *(1/0)
4 + spam*3
'2' + 2
"""
Explanation: The basics of exception handling
Errors detected during execution are called exceptions and are not necessarily fatal. Most exceptions are not handled by programs; when that happens, they produce error messages like the ones illustrated below:
End of explanation
"""
produtos = ["ipda", "cel", "note"]
print(produtos[1])
print(produtos[3])
"""
Explanation: We can control the flow of execution when something unexpected happens in our code.
End of explanation
"""
try:
print(produtos[3])
except:
print("O vetor não possui a posição desejada")
"""
Explanation: To work around this error, we can use the try/except pair.
End of explanation
"""
produtos[3+'1']
try:
print(produtos[3+'1'])
except:
print("O vetor não possui a posição desejada")
"""
Explanation: This way the error no longer appears. However, the error may be of a different type, for example:
End of explanation
"""
try:
print(produtos[3+'1'])
except IndexError:
print("O vetor não possui a posição desejada")
"""
Explanation: Note that the output is the same one we defined earlier. Therefore, we need to specify the exception type in the except clause.
End of explanation
"""
try:
print(produtos[3+'1'])
except IndexError:
print("O vetor não possui a posição desejada")
except TypeError:
print("Erro de Tipo")
"""
Explanation: To handle more than one exception type with separate except clauses, just add the other type below:
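Beyond listing several except clauses, it is often useful to capture the exception object with `as` and to use the optional else/finally blocks. A sketch along the same lines (the helper name get_product is illustrative, not from the code above):

```python
def get_product(produtos, i):
    try:
        item = produtos[i]
    except IndexError as e:
        print("Position not available:", e)
        return None
    except TypeError as e:
        print("Invalid index type:", e)
        return None
    else:
        # runs only when the try block raised nothing
        return item
    finally:
        # always runs; cleanup code would go here
        pass

get_product(["ipda", "cel", "note"], 1)  # -> "cel"
```

The `as e` form gives access to the exception message, which is usually more informative than a fixed string.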
End of explanation
"""
|
jenshnielsen/HJCFIT | exploration/CH82.ipynb | gpl-3.0 | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from dcprogs.likelihood import QMatrix
tau = 1e-4
qmatrix = QMatrix([[ -3050, 50, 3000, 0, 0 ],
[ 2./3., -1502./3., 0, 500, 0 ],
[ 15, 0, -2065, 50, 2000 ],
[ 0, 15000, 4000, -19000, 0 ],
[ 0, 0, 10, 0, -10 ] ], 2)
qmatrix.matrix /= 1000.0
"""
Explanation: CH82 Model
The following tries to reproduce Fig 8 from Hawkes, Jalali, Colquhoun (1992).
First we create the $Q$-matrix for this particular model. Please note that the units are different from other publications.
End of explanation
"""
from dcprogs.likelihood import plot_roots, DeterminantEq
fig, ax = plt.subplots(1, 2, figsize=(7,5))
plot_roots(DeterminantEq(qmatrix, 0.2), ax=ax[0])
ax[0].set_xlabel('Laplace $s$')
ax[0].set_ylabel('$\\mathrm{det} ^{A}W(s)$')
plot_roots(DeterminantEq(qmatrix, 0.2).transpose(), ax=ax[1])
ax[1].set_xlabel('Laplace $s$')
ax[1].set_ylabel('$\\mathrm{det} ^{F}W(s)$')
ax[1].yaxis.tick_right()
ax[1].yaxis.set_label_position("right")
fig.tight_layout()
"""
Explanation: We first reproduce the top tow panels showing $\mathrm{det} W(s)$ for open and shut times.
These quantities can be accessed using dcprogs.likelihood.DeterminantEq. The plots are done using a standard plotting function from the dcprogs.likelihood package as well.
End of explanation
"""
from dcprogs.likelihood import ApproxSurvivor
approx = ApproxSurvivor(qmatrix, tau)
components = approx.af_components
print(components[:1])
"""
Explanation: Then we want to plot the panels c and d showing the excess shut and open-time probability densities $(\tau = 0.2)$. To do this we need to access each exponential that makes up the approximate survivor function. We could use:
End of explanation
"""
from dcprogs.likelihood import MissedEventsG
weight, root = components[1]
eG = MissedEventsG(qmatrix, tau)
# Note: the sum below is equivalent to a scalar product with u_F
coefficient = sum(np.dot(eG.initial_occupancies, np.dot(weight, eG.af_factor)))
pdf = lambda t: coefficient * np.exp(t * root)
"""
Explanation: The list components above contain 2-tuples with the weight (as a matrix) and the exponent (or root) for each exponential component in $^{A}R_{\mathrm{approx}}(t)$. We could then create python functions pdf(t) for each exponential component, as is done below for the first root:
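Once reduced to scalars, the full approximate density is just a weighted sum of exponentials with those roots. A standalone sketch for scalar (coefficient, root) pairs — the real weights are matrix-valued, so the scalars below stand in for the scalar products computed from them as above:

```python
import numpy as np

def exponential_mixture(components):
    """Return t -> sum_i c_i * exp(r_i * t) for scalar (c_i, r_i) pairs."""
    def pdf(t):
        return sum(c * np.exp(r * t) for c, r in components)
    return pdf

# Illustrative coefficients and (negative) roots, not values from the CH82 model.
pdf = exponential_mixture([(2.0, -1.0), (0.5, -10.0)])
```

With negative roots each component decays, so the mixture is largest at t = 0 and decreases monotonically, as expected of an excess-time density.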
End of explanation
"""
from dcprogs.likelihood._methods import exponential_pdfs
def plot_exponentials(qmatrix, tau, x=None, ax=None, nmax=2, shut=False):
from dcprogs.likelihood import missed_events_pdf
if ax is None:
fig, ax = plt.subplots(1,1)
if x is None: x = np.arange(0, 5*tau, tau/10)
pdf = missed_events_pdf(qmatrix, tau, nmax=nmax, shut=shut)
graphb = [x, pdf(x+tau), '-k']
functions = exponential_pdfs(qmatrix, tau, shut=shut)
plots = ['.r', '.b', '.g']
together = None
for f, p in zip(functions[::-1], plots):
if together is None: together = f(x+tau)
else: together = together + f(x+tau)
graphb.extend([x, together, p])
ax.plot(*graphb)
fig, ax = plt.subplots(1,2, figsize=(7,5))
ax[0].set_xlabel('time $t$ (ms)')
ax[0].set_ylabel('Excess open-time probability density $f_{\\bar{\\tau}=0.2}(t)$')
plot_exponentials(qmatrix, 0.2, shut=False, ax=ax[0])
plot_exponentials(qmatrix, 0.2, shut=True, ax=ax[1])
ax[1].set_xlabel('time $t$ (ms)')
ax[1].set_ylabel('Excess shut-time probability density $f_{\\bar{\\tau}=0.2}(t)$')
ax[1].yaxis.tick_right()
ax[1].yaxis.set_label_position("right")
fig.tight_layout()
"""
Explanation: The initial occupancies, as well as the $Q_{AF}e^{-Q_{FF}\tau}$ factor, are obtained directly from the object implementing the missed-events likelihood $^{e}G(t)$.
However, there is a convenience function that does all the above in the package. Since it is generally of little use, it is not currently exported to the dcprogs.likelihood namespace. So we create below a plotting function that uses it.
End of explanation
"""
fig, ax = plt.subplots(1,2, figsize=(7,5))
ax[0].set_xlabel('time $t$ (ms)')
ax[0].set_ylabel('Excess open-time probability density $f_{\\bar{\\tau}=0.5}(t)$')
plot_exponentials(qmatrix, 0.5, shut=False, ax=ax[0])
plot_exponentials(qmatrix, 0.5, shut=True, ax=ax[1])
ax[1].set_xlabel('time $t$ (ms)')
ax[1].set_ylabel('Excess shut-time probability density $f_{\\bar{\\tau}=0.5}(t)$')
ax[1].yaxis.tick_right()
ax[1].yaxis.set_label_position("right")
fig.tight_layout()
from dcprogs.likelihood import QMatrix, MissedEventsG
tau = 1e-4
qmatrix = QMatrix([[ -3050, 50, 3000, 0, 0 ],
[ 2./3., -1502./3., 0, 500, 0 ],
[ 15, 0, -2065, 50, 2000 ],
[ 0, 15000, 4000, -19000, 0 ],
[ 0, 0, 10, 0, -10 ] ], 2)
eG = MissedEventsG(qmatrix, tau, 2, 1e-8, 1e-8)
meG = MissedEventsG(qmatrix, tau)
t = 3.5* tau
print(eG.initial_CHS_occupancies(t) - meG.initial_CHS_occupancies(t))
"""
Explanation: Finally, we create the last plot (e), and throw in an (f) for good measure.
End of explanation
"""
|
bashtage/statsmodels | examples/notebooks/ets.ipynb | bsd-3-clause | import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
from statsmodels.tsa.exponential_smoothing.ets import ETSModel
plt.rcParams["figure.figsize"] = (12, 8)
"""
Explanation: ETS models
The ETS models are a family of time series models with an underlying state space model consisting of a level component, a trend component (T), a seasonal component (S), and an error term (E).
This notebook shows how they can be used with statsmodels. For a more thorough treatment we refer to [1], chapter 8 (free online resource), on which the implementation in statsmodels and the examples used in this notebook are based.
statsmodels implements all combinations of:
- additive and multiplicative error model
- additive and multiplicative trend, possibly dampened
- additive and multiplicative seasonality
However, not all of these methods are stable. Refer to [1] and references therein for more info about model stability.
[1] Hyndman, Rob J., and Athanasopoulos, George. Forecasting: principles and practice, 3rd edition, OTexts, 2021. https://otexts.com/fpp3/expsmooth.html
End of explanation
"""
oildata = [
111.0091,
130.8284,
141.2871,
154.2278,
162.7409,
192.1665,
240.7997,
304.2174,
384.0046,
429.6622,
359.3169,
437.2519,
468.4008,
424.4353,
487.9794,
509.8284,
506.3473,
340.1842,
240.2589,
219.0328,
172.0747,
252.5901,
221.0711,
276.5188,
271.1480,
342.6186,
428.3558,
442.3946,
432.7851,
437.2497,
437.2092,
445.3641,
453.1950,
454.4096,
422.3789,
456.0371,
440.3866,
425.1944,
486.2052,
500.4291,
521.2759,
508.9476,
488.8889,
509.8706,
456.7229,
473.8166,
525.9509,
549.8338,
542.3405,
]
oil = pd.Series(oildata, index=pd.date_range("1965", "2013", freq="AS"))
oil.plot()
plt.ylabel("Annual oil production in Saudi Arabia (Mt)")
"""
Explanation: Simple exponential smoothing
The simplest of the ETS models is also known as simple exponential smoothing. In ETS terms, it corresponds to the (A, N, N) model, that is, a model with additive errors, no trend, and no seasonality. Its state space formulation is:
\begin{align}
y_{t} &= l_{t-1} + e_t\\
l_{t} &= l_{t-1} + \alpha e_t
\end{align}
This state space formulation can be turned into a different formulation, a forecast and a smoothing equation (as can be done with all ETS models):
\begin{align}
\hat{y}_{t|t-1} &= l_{t-1}\\
l_{t} &= \alpha y_{t-1} + (1 - \alpha) l_{t-1}
\end{align}
Here, $\hat{y}_{t|t-1}$ is the forecast/expectation of $y_t$ given the information of the previous step. In the simple exponential smoothing model, the forecast corresponds to the previous level. The second equation (smoothing equation) calculates the next level as a weighted average of the previous level and the previous observation.
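The forecast/smoothing pair can be turned into a short recursion. A minimal NumPy sketch with equivalent indexing (the level is updated with each incoming observation, and the one-step-ahead forecast is simply the current level):

```python
import numpy as np

def simple_exp_smoothing(y, alpha, l0):
    # yhat_{t|t-1} = l_{t-1};  l_t = alpha * y_t + (1 - alpha) * l_{t-1}
    level = l0
    forecasts = []
    for obs in y:
        forecasts.append(level)                     # forecast before seeing obs
        level = alpha * obs + (1.0 - alpha) * level  # smoothing update
    return np.array(forecasts)
```

Two limiting cases make the behaviour clear: with alpha = 0 the forecast never moves from the initial level, and with alpha = 1 each forecast equals the previous observation (a naive forecast).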
End of explanation
"""
model = ETSModel(oil)
fit = model.fit(maxiter=10000)
oil.plot(label="data")
fit.fittedvalues.plot(label="statsmodels fit")
plt.ylabel("Annual oil production in Saudi Arabia (Mt)")
# obtained from R
params_R = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params_R).fittedvalues
yhat.plot(label="R fit", linestyle="--")
plt.legend()
"""
Explanation: The plot above shows annual oil production in Saudi Arabia in million tonnes. The data are taken from the R package fpp2 (the companion package to a previous edition of [1]).
Below you can see how to fit a simple exponential smoothing model to this data using statsmodels's ETS implementation. Additionally, the fit obtained with forecast in R is shown as a comparison.
End of explanation
"""
model_heuristic = ETSModel(oil, initialization_method="heuristic")
fit_heuristic = model_heuristic.fit()
oil.plot(label="data")
fit.fittedvalues.plot(label="estimated")
fit_heuristic.fittedvalues.plot(label="heuristic", linestyle="--")
plt.ylabel("Annual oil production in Saudi Arabia (Mt)")
# obtained from R
params = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params).fittedvalues
yhat.plot(label="with R params", linestyle=":")
plt.legend()
"""
Explanation: By default the initial states are considered to be fitting parameters and are estimated by maximizing log-likelihood. It is possible to only use a heuristic for the initial values:
End of explanation
"""
print(fit.summary())
print(fit_heuristic.summary())
"""
Explanation: The fitted parameters and some other measures are shown using fit.summary(). Here we can see that the log-likelihood of the model using fitted initial states is fractionally lower than the one using a heuristic for the initial states.
End of explanation
"""
austourists_data = [
30.05251300,
19.14849600,
25.31769200,
27.59143700,
32.07645600,
23.48796100,
28.47594000,
35.12375300,
36.83848500,
25.00701700,
30.72223000,
28.69375900,
36.64098600,
23.82460900,
29.31168300,
31.77030900,
35.17787700,
19.77524400,
29.60175000,
34.53884200,
41.27359900,
26.65586200,
28.27985900,
35.19115300,
42.20566386,
24.64917133,
32.66733514,
37.25735401,
45.24246027,
29.35048127,
36.34420728,
41.78208136,
49.27659843,
31.27540139,
37.85062549,
38.83704413,
51.23690034,
31.83855162,
41.32342126,
42.79900337,
55.70835836,
33.40714492,
42.31663797,
45.15712257,
59.57607996,
34.83733016,
44.84168072,
46.97124960,
60.01903094,
38.37117851,
46.97586413,
50.73379646,
61.64687319,
39.29956937,
52.67120908,
54.33231689,
66.83435838,
40.87118847,
51.82853579,
57.49190993,
65.25146985,
43.06120822,
54.76075713,
59.83447494,
73.25702747,
47.69662373,
61.09776802,
66.05576122,
]
index = pd.date_range("1999-03-01", "2015-12-01", freq="3MS")
austourists = pd.Series(austourists_data, index=index)
austourists.plot()
plt.ylabel("Australian Tourists")
# fit in statsmodels
model = ETSModel(
austourists,
error="add",
trend="add",
seasonal="add",
damped_trend=True,
seasonal_periods=4,
)
fit = model.fit()
# fit with R params
params_R = [
0.35445427,
0.03200749,
0.39993387,
0.97999997,
24.01278357,
0.97770147,
1.76951063,
-0.50735902,
-6.61171798,
5.34956637,
]
fit_R = model.smooth(params_R)
austourists.plot(label="data")
plt.ylabel("Australian Tourists")
fit.fittedvalues.plot(label="statsmodels fit")
fit_R.fittedvalues.plot(label="R fit", linestyle="--")
plt.legend()
print(fit.summary())
"""
Explanation: Holt-Winters' seasonal method
The exponential smoothing method can be modified to incorporate a trend and a seasonal component. In the additive Holt-Winters' method, the seasonal component is added to the rest. This model corresponds to the ETS(A, A, A) model, and has the following state space formulation:
\begin{align}
y_t &= l_{t-1} + b_{t-1} + s_{t-m} + e_t\\
l_{t} &= l_{t-1} + b_{t-1} + \alpha e_t\\
b_{t} &= b_{t-1} + \beta e_t\\
s_{t} &= s_{t-m} + \gamma e_t
\end{align}
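As a rough sketch, the recursions above can be run directly in plain Python. This is our own illustrative filter, not statsmodels' implementation (parameter names and the initial states `l0`, `b0`, `s0` are assumptions; statsmodels estimates or initializes them for you):

```python
import numpy as np

def hw_additive_filter(y, alpha, beta, gamma, l0, b0, s0, m):
    """Run the ETS(A, A, A) innovations recursions on a series.

    s0 holds the m seasonal states preceding the first observation.
    """
    assert len(s0) == m
    l, b = l0, b0
    s = list(s0)                       # s[t] is always the seasonal term m steps back
    yhat, err = [], []
    for t, obs in enumerate(y):
        f = l + b + s[t]               # one-step forecast l_{t-1} + b_{t-1} + s_{t-m}
        e = obs - f                    # innovation e_t
        l, b = l + b + alpha * e, b + beta * e
        s.append(s[t] + gamma * e)     # s_t = s_{t-m} + gamma * e_t
        yhat.append(f)
        err.append(e)
    return np.array(yhat), np.array(err)

# toy call on the first few tourist numbers, with made-up parameters
yhat, err = hw_additive_filter([30.0, 19.1, 25.3, 27.6],
                               alpha=0.3, beta=0.03, gamma=0.4,
                               l0=25.0, b0=0.5, s0=[4.0, -5.0], m=2)
```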
End of explanation
"""
pred = fit.get_prediction(start="2014", end="2020")
df = pred.summary_frame(alpha=0.05)
df
"""
Explanation: Predictions
The ETS model can also be used for predicting. There are several different methods available:
- forecast: makes out of sample predictions
- predict: in sample and out of sample predictions
- simulate: runs simulations of the statespace model
- get_prediction: in sample and out of sample predictions, as well as prediction intervals
We can use them on our previously fitted model to predict from 2014 to 2020.
End of explanation
"""
simulated = fit.simulate(anchor="end", nsimulations=17, repetitions=100)
for i in range(simulated.shape[1]):
simulated.iloc[:, i].plot(label="_", color="gray", alpha=0.1)
df["mean"].plot(label="mean prediction")
df["pi_lower"].plot(linestyle="--", color="tab:blue", label="95% interval")
df["pi_upper"].plot(linestyle="--", color="tab:blue", label="_")
pred.endog.plot(label="data")
plt.legend()
"""
Explanation: In this case the prediction intervals were calculated using an analytical formula. This is not available for all models. For these other models, prediction intervals are calculated by performing multiple simulations (1000 by default) and using the percentiles of the simulation results. This is done internally by the get_prediction method.
We can also manually run simulations, e.g. to plot them. Since the data ranges until end of 2015, we have to simulate from the first quarter of 2016 to the first quarter of 2020, which means 17 steps.
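The percentile construction itself is simple to reproduce. Below is a sketch on synthetic paths (random stand-ins with the same steps × repetitions shape as `fit.simulate` returns; none of these numbers come from the fitted model):

```python
import numpy as np

rng = np.random.default_rng(0)
sims = rng.normal(loc=50.0, scale=5.0, size=(17, 100))  # 17 steps, 100 repetitions

mean = sims.mean(axis=1)
lower = np.percentile(sims, 2.5, axis=1)    # bounds of a 95% simulated interval
upper = np.percentile(sims, 97.5, axis=1)
```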
End of explanation
"""
|
Hexiang-Hu/mmds | week6/.ipynb_checkpoints/Quiz-Week6-checkpoint.ipynb | mit | import numpy as np
p1 = (5, 4)
p2 = (8, 3)
p3 = (7, 2)
p4 = (3, 3)
def calc_wb(p1, p2):
dx = ( p1[0] - p2[0] )
dy = ( p1[1] - p2[1] )
return ( ( float(dy) *2 / float(dy - dx), float(-dx)*2 / float(dy - dx) ),\
(dx*p2[1] - dy * p2[0])*2 / float(dy - dx) + 1) # b = dx*y1 - dy*x1
def cal_margin(w, b, pt):
return w[0] * pt[0] + w[1] * pt[1] + b
w, b = calc_wb(p1, p2)
print("w for p1, p2: " + str(w))
print("b for p1, p2: " + str(b))
print("===========================")
print(cal_margin(w, b, p1))
print(cal_margin(w, b, p2))
print(cal_margin(w, b, p3))
print(cal_margin(w, b, p4))
print()
w, b = calc_wb(p4, p3)
print("w for p4, p3: " + str(w))
print("b for p4, p3: " + str(b))
print("===========================")
print(cal_margin(w, b, p1))
print(cal_margin(w, b, p2))
print(cal_margin(w, b, p3))
print(cal_margin(w, b, p4))
"""
Explanation: Quiz - Week 6A
Q1.
The figure below shows two positive points (purple squares) and two negative points (green circles):
That is, the training data set consists of:
(x1,y1) = ((5,4),+1)
(x2,y2) = ((8,3),+1)
(x3,y3) = ((7,2),-1)
(x4,y4) = ((3,3),-1)
Our goal is to find the maximum-margin linear classifier for this data. In easy cases, the shortest line between a positive and negative point has a perpendicular bisector that separates the points. If so, the perpendicular bisector is surely the maximum-margin separator. Alas, in this case, the closest pair of positive and negative points, x2 and x3, have a perpendicular bisector that misclassifies x1 as negative, so that won't work.
The next-best possibility is that we can find a pair of points on one side (i.e., either two positive or two negative points) such that a line parallel to the line through these points is the maximum-margin separator. In these cases, the limit to how far from the two points the parallel line can get is determined by the closest (to the line between the two points) of the points on the other side. For our simple data set, this situation holds.
Consider all possibilities for boundaries of this type, and express the boundary as w.x+b=0, such that w.x+b≥1 for positive points x and w.x+b≤-1 for negative points x. Assuming that w = (w1,w2), identify in the list below the true statement about one of w1, w2, and b.
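One way to work through the candidates is to check the margin constraints directly. The helper below is our own; as an illustration it tests a candidate separator parallel to the line through the two negative points, scaled so the margins sit at ±1 (the candidate w = (0.4, 1.6), b = -7 is our derivation — verify it against your own before trusting it):

```python
import numpy as np

def satisfies_margins(w, b, positives, negatives, tol=1e-9):
    """Check w.x + b >= 1 for positives and <= -1 for negatives,
    with a small tolerance for floating-point rounding."""
    w = np.asarray(w, dtype=float)
    pos_ok = all(np.dot(w, x) + b >= 1 - tol for x in positives)
    neg_ok = all(np.dot(w, x) + b <= -1 + tol for x in negatives)
    return pos_ok and neg_ok

pos = [(5, 4), (8, 3)]
neg = [(7, 2), (3, 3)]
print(satisfies_margins((0.4, 1.6), -7, pos, neg))  # True
```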
End of explanation
"""
w = (-1, 1)
b = -2
def cal_margin(w, b, pt):
return w[0] * pt[0] + w[1] * pt[1] + b
print(cal_margin(w, b, (7, 10)))
print(cal_margin(w, b, (7, 8)))
print(cal_margin(w, b, (3, 4)))
print(cal_margin(w, b, (3, 4)))
"""
Explanation: Q2.
Consider the following training set of 16 points. The eight purple squares are positive examples, and the eight green circles are negative examples.
We propose to use the diagonal line with slope +1 and intercept +2 as a decision boundary, with positive examples above and negative examples below. However, like any linear boundary for this training set, some examples are misclassified. We can measure the goodness of the boundary by computing all the slack variables that exceed 0, and then using them in one of several objective functions. In this problem, we shall only concern ourselves with computing the slack variables, not an objective function.
To be specific, suppose the boundary is written in the form w.x+b=0, where w = (-1,1) and b = -2. Note that we can scale the three numbers involved as we wish, and so doing changes the margin around the boundary. However, we want to consider this specific boundary and margin.
Determine the slack for each of the 16 points. Then, identify the correct statement in the list below.
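The slack of a point is $\xi = \max(0, 1 - y(w \cdot x + b))$. A small helper (ours) applied to two of the points probed in the code cell above — the class labels here are assumptions for illustration, since the true labels come from the figure:

```python
import numpy as np

def slack(w, b, x, label):
    """Hinge slack xi = max(0, 1 - y * (w.x + b)) for one training point."""
    return max(0.0, 1.0 - label * (np.dot(w, x) + b))

w, b = np.array([-1.0, 1.0]), -2.0
print(slack(w, b, (7, 10), +1))  # w.x+b = 1, so slack 0.0
print(slack(w, b, (7, 8), +1))   # w.x+b = -1, so slack 2.0
```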
End of explanation
"""
def predict_by_tree(pt):
    if pt[0] < 45:
        if pt[1] < 110:
            print("Doesn't buy")
        else:
            print("Buy")
    else:
        if pt[1] < 75:
            print("Doesn't buy")
        else:
            print("Buy")
predict_by_tree((43, 83))
predict_by_tree((55, 118))
predict_by_tree((65, 140))
predict_by_tree((28, 145))
print("==============")
predict_by_tree((65, 140))
predict_by_tree((25, 125))
predict_by_tree((44, 105))
predict_by_tree((35, 63))
"""
Explanation: Q3.
Below we see a set of 20 points and a decision tree for classifying the points.
To be precise, the 20 points represent (Age,Salary) pairs of people who do or do not buy gold jewelry. Age (abbreviated A in the decision tree) is the x-axis, and Salary (S in the tree) is the y-axis. Those that do are represented by gold points, and those that do not by green points. The 10 points of gold-jewelry buyers are:
(28,145), (38,115), (43,83), (50,130), (50,90), (50,60), (50,30), (55,118), (63,88), and (65,140).
The 10 points of those that do not buy gold jewelry are:
(23,40), (25,125), (29,97), (33,22), (35,63), (42,57), (44, 105), (55,63), (55,20), and (64,37).
Some of these points are correctly classified by the decision tree and some are not. Determine the classification of each point, and then indicate in the list below the point that is misclassified.
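Since all 20 points are listed above, we can also push each one through the tree and collect the misclassified ones programmatically, as a cross-check of the by-hand classification (`tree_predict` returns True for "buys"):

```python
def tree_predict(age, salary):
    # the decision tree from the cell above
    if age < 45:
        return salary >= 110
    return salary >= 75

buyers = [(28, 145), (38, 115), (43, 83), (50, 130), (50, 90),
          (50, 60), (50, 30), (55, 118), (63, 88), (65, 140)]
non_buyers = [(23, 40), (25, 125), (29, 97), (33, 22), (35, 63),
              (42, 57), (44, 105), (55, 63), (55, 20), (64, 37)]

mis = [p for p in buyers if not tree_predict(*p)] + \
      [p for p in non_buyers if tree_predict(*p)]
print(mis)  # [(43, 83), (50, 60), (50, 30), (25, 125)]
```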
End of explanation
"""
import numpy as np
mat = np.array([ [1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10,11,12],
[13,14,15,16] ])
vec = np.array([1, 2, 3, 4])
def key_val(mat, vec):
pair = dict()
for idx, row in enumerate(mat):
# pair[idx + 1] = np.dot(row, vec)
pair[idx + 1] = row * vec
return pair
print(key_val(mat, vec))
"""
Explanation: Quiz Week 6A.
Q1.
Using the matrix-vector multiplication described in Section 2.3.1, applied to the matrix and vector:
<pre>
| 1 2 3 4 | | 1 |
| 5 6 7 8 | * | 2 |
| 9 10 11 12 | | 3 |
| 13 14 15 16 | | 4 |
</pre>
Apply the Map function to this matrix and vector. Then, identify in the list below, one of the key-value pairs that are output of Map.
Solution 1.
The matrix-vector product is the vector $x$ of length $n$, whose $i$-th element $x_i$ is given by
$$
\begin{equation}
x_i = \sum_{ j = 1}^n m_{ij} \cdot v_j
\end{equation}
$$
From each matrix element $m_{ij}$, the Map function produces the key-value pair $(i, m_{ij} \cdot v_j)$.
Thus, all terms of the sum that make up the component $x_i$ of the matrix-vector product will get the same key, i.
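The full Map/Reduce round can be mimicked on a single machine. In this toy sketch the grouping that a real MapReduce shuffle performs is done with a dictionary (keys here are 0-based, whereas the quiz cell numbers rows from 1):

```python
from collections import defaultdict

def mapreduce_matvec(mat, vec):
    """Toy single-machine Map/Reduce for the matrix-vector product."""
    # Map: each element m_ij emits the pair (i, m_ij * v_j)
    pairs = [(i, m_ij * vec[j])
             for i, row in enumerate(mat)
             for j, m_ij in enumerate(row)]
    # Reduce: sum all values sharing a key
    acc = defaultdict(float)
    for key, val in pairs:
        acc[key] += val
    return [acc[i] for i in sorted(acc)]

print(mapreduce_matvec([[1, 2, 3, 4], [5, 6, 7, 8],
                        [9, 10, 11, 12], [13, 14, 15, 16]],
                       [1, 2, 3, 4]))  # [30.0, 70.0, 110.0, 150.0]
```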
End of explanation
"""
|
ajgpitch/qutip-notebooks | examples/piqs-boundary-time-crystals.ipynb | lgpl-3.0 | from time import clock
from scipy.io import mmwrite
import matplotlib.pyplot as plt
from qutip import *
from qutip.piqs import *
"""
Explanation: Boundary time crystals
Notebook author: Nathan Shammah (nathan.shammah at gmail.com)
We apply the Permutational Invariant Quantum Solver (PIQS) [1], imported in QuTiP as $\texttt{qutip.piqs}$ to the study of the following driven-dissipative dynamics
\begin{eqnarray}
\dot{\rho} = \mathcal{D}_\text{TLS}(\rho) &=&
-\frac{i}{\hbar}\lbrack H,\rho \rbrack
+\frac{\gamma_\text{CE}}{2}\mathcal{L}_{J_{-}}[\rho]
\nonumber\\
&&+\sum_{n=1}^{N}\left(
\frac{\gamma_\text{E}}{2}\mathcal{L}_{J_{-,n}}[\rho]
+\frac{\gamma_\text{D}}{2}\mathcal{L}_{J_{z,n}}[\rho]\right)
\end{eqnarray}
where $J_{\alpha,n}=\frac{1}{2}\sigma_{\alpha,n}$ are SU(2) Pauli spin operators, with ${\alpha=x,y,z}$ and $J_{\pm,n}=\sigma_{\pm,n}$. The collective spin operators are $J_{\alpha} = \sum_{n}J_{\alpha,n}$. The Lindblad super-operators are $\mathcal{L}_{A} = 2A\rho A^\dagger - A^\dagger A \rho - \rho A^\dagger A$.
Here the rates $\gamma_\text{CE}$ (gCE), $\gamma_\text{E}$ (gE) and $\gamma_\text{D}$ (gD) quantify collective emission, local emission and local dephasing, respectively.
Here we study the Hamiltonian $H=\hbar\omega_x J_x$, which has been studied in the context of quantum optics in Refs. [2,3].
The collective driven-dissipative dynamics has been studied in the regime $\gamma_\text{E}=\gamma_\text{D}=0$ and in the context of quantum phase transitions (QPTs) in Ref. [4].
Below we will study the spectrum of the Liouvillian [5] in the two parameter regimes found in Ref. [4], that of strong and of weak dissipation. If only collective processes are present, one can efficiently study the system's dynamics in the reduced symmetric space, whose Hilbert space dimension is only (N+1). We will do so using QuTiP's jmat() functions [6].
We then generalize the study of the collective dynamics to include local terms.
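The payoff of permutational invariance is in the dimensions involved. A quick back-of-the-envelope comparison (the $(N/2+1)^2$ Dicke-state count for even $N$ is, to the best of our knowledge, what PIQS's num_dicke_states computes; treat it as an assumption here):

```python
def dims(N):
    """Relevant space sizes for N two-level systems (N even)."""
    full = 2 ** N              # full N-spin Hilbert space
    dicke = (N // 2 + 1) ** 2  # permutation-symmetric (Dicke) basis used by PIQS
    sym = N + 1                # single maximal-j block addressed with jmat()
    return full, dicke, sym

for N in (10, 20, 36):
    full, dicke, sym = dims(N)
    print(f"N={N}: 2^N={full}, Dicke states={dicke}, j=N/2 block={sym}")
```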
End of explanation
"""
nnn = 10
N = nnn
jj_mat = nnn/2
[jx_mat, jy_mat, jz_mat] = jmat(jj_mat)
jp_mat = jx_mat + 1j * jy_mat
jm_mat = jx_mat - 1j * jy_mat
w0 = 1
kappa = 2 * w0
gg = kappa/ jj_mat
ham = w0 * jx_mat
c_ops = [np.sqrt(gg) * jm_mat]
liouv_mat = liouvillian(ham, c_ops)
print(liouv_mat.shape)
eig_mat = liouv_mat.eigenenergies()
re_eigmat = np.real(eig_mat)
imag_eigmat = np.imag(eig_mat)
fig6 = plt.figure(6)
plt.plot(re_eigmat/kappa, imag_eigmat/kappa, 'k.')
label_size = 15
label_size2 = 15
label_size3 = 15
plt.rc('text', usetex = True)
plt.title(r'BTC - $\mathcal{L}$ spectrum, strong dissipation limit QuTiP jmat',
fontsize = label_size2)
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
plt.ylim([-20,15])
plt.xlim([-15,0])
plt.xlabel(r'$\mathrm{Re}(\lambda)$', fontsize = label_size3)
plt.ylabel(r'$\mathrm{Im}(\lambda)$', fontsize = label_size3)
fname = 'figures/btc_eig_N{}_strong_jmat.pdf'.format(N)
savefile = False
if savefile == True:
plt.savefig(fname, bbox_inches='tight')
plt.show()
plt.close()
#Saving for Mathematica
liouvd_jmat =liouv_mat.full()
liouvd_re_jmat = np.real(liouvd_jmat)
liouvd_imag_jmat = np.imag(liouvd_jmat)
#saveto_file_name2 = str("re_liouv_N={}".format(N))
#liouvd_re.astype('float32').tofile('{}.dat'.format(saveto_file_name2))
#saveto_file_name3 = str("imag_liouv_N={}".format(N))
#liouvd_imag.astype('float32').tofile('{}.dat'.format(saveto_file_name3))
#mmwrite('data/liouvrejmat.mtx', liouvd_re_jmat/kappa)
#mmwrite('data/liouvimjmat.mtx', liouvd_imag_jmat/kappa)
fig7 = plt.figure(7)
plt.plot(re_eigmat/kappa, imag_eigmat/kappa, 'k.', re_eigmat/kappa, 0*imag_eigmat/kappa, '-', lw = 0.5)
label_size = 15
label_size2 = 15
label_size3 = 15
plt.title(r'BTC - $\mathcal{L}$ spectrum, strong dissipation limit, Jmat', fontsize = label_size2)
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
plt.ylim([-1,1])
plt.xlim([-4,0])
plt.xlabel(r'$\mathrm{Re}(\lambda)$', fontsize = label_size3)
plt.ylabel(r'$\mathrm{Im}(\lambda)$', fontsize = label_size3)
fname = 'figures/btc_eig_inset_N{}_strong_jmat.pdf'.format(N)
if savefile == True:
plt.savefig(fname, bbox_inches='tight')
plt.show()
plt.close()
"""
Explanation: Spectrum of the Liouvillian - Strong dissipation limit $\omega_{0} = 0.5 \kappa $
End of explanation
"""
nnn = 36
N = nnn
jj_mat = nnn/2
[jx_mat, jy_mat, jz_mat] = jmat(jj_mat)
jp_mat = jx_mat + 1j * jy_mat
jm_mat = jx_mat - 1j * jy_mat
w0 = 1
kappa = 2/3 * w0
gg = kappa/ jj_mat
ham = w0 * jx_mat
c_ops = [np.sqrt(gg) * jm_mat]
liouv_mat = liouvillian(ham, c_ops)
print(liouv_mat.shape)
eig_mat = liouv_mat.eigenenergies()
re_eigmat = np.real(eig_mat)
imag_eigmat = np.imag(eig_mat)
fig8 = plt.figure(8)
plt.plot(re_eigmat/kappa, imag_eigmat/kappa, 'k.')
label_size = 15
label_size2 = 15
label_size3 = 15
plt.rc('text', usetex = True)
plt.title(r'BTC - $\mathcal{L}$ spectrum, weak dissipation limit QuTiP jmat', fontsize = label_size2)
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
plt.ylim([-50,35])
plt.xlim([-15,0])
plt.xlabel(r'$\mathrm{Re}(\lambda)$', fontsize = label_size3)
plt.ylabel(r'$\mathrm{Im}(\lambda)$', fontsize = label_size3)
fname = 'figures/btc_eig_N{}_weak_jmat.pdf'.format(N)
if savefile == True:
plt.savefig(fname, bbox_inches='tight')
plt.show()
plt.close()
"""
Explanation: The Figure above reproduces qualitatively the study performed in Ref. [4].
Spectrum of the Liouvillian - Weak dissipation limit $\omega_{0} = 1.5 \kappa $
End of explanation
"""
fig9 = plt.figure(9)
plt.plot(re_eigmat/kappa, imag_eigmat/kappa, 'k.', re_eigmat/kappa, 0*imag_eigmat/kappa, '-', lw = 0.5)
label_size = 15
label_size2 = 15
label_size3 = 15
plt.title(r'BTC - $\mathcal{L}$ spectrum, weak dissipation limit, Jmat', fontsize = label_size2)
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
plt.ylim([-5,5])
plt.xlim([-0.4,0])
plt.xlabel(r'$\mathrm{Re}(\lambda)$', fontsize = label_size3)
plt.ylabel(r'$\mathrm{Im}(\lambda)$', fontsize = label_size3)
fname = 'figures/btc_eig_inset_N{}_weak_jmat.pdf'.format(N)
if savefile == True:
plt.savefig(fname, bbox_inches='tight')
plt.show()
plt.close()
"""
Explanation: The Figure above reproduces qualitatively the study performed in Ref. [4].
End of explanation
"""
N = 20
ntls = N
nds = num_dicke_states(N)
print("System size: N = ", N, "| nds = ", nds, "| nds^2 = ", nds**2, "| 2^N = ", 2**N)
[jx, jy, jz] = jspin(N)
jp = jspin(N, "+")
jm = jp.dag()
jpjm = jp*jm
w0 = 1
kappa = 0.5 * w0
gCE = 2*kappa/N
gE = 0
gP = 0
gCD = 0
gCP = 0
h = w0 * jx
nt = 1001
td0 = kappa
tmax = 200 * td0
t = np.linspace(0, tmax, nt)
rho0 = dicke(N, N/2, N/2)
jzt_list = []
jpjmt_list = []
jz2t_list = []
gD_list = [0, 0.01, 0.1, 1]
for gD in gD_list:
print(gD)
system = Dicke(N=N)
system.collective_emission = gCE
system.emission = gE
system.dephasing = gD
system.pumping = gP
system.collective_pumping = gCP
system.collective_dephasing = gCD
# energy / dynamics numerical
system.hamiltonian = h
liouv = system.liouvillian()
result = mesolve(liouv, rho0, t, [], e_ops = [jz, jp*jm, jz*jz], options = Options(store_states=True))
rhot = result.states
jz_t = result.expect[0]
jpjm_t = result.expect[1]
jz2_t = result.expect[2]
jzt_list.append(jz_t)
jpjmt_list.append(jpjm_t)
jz2t_list.append(jz2_t)
# gD_list.append(gD)
plt.rc('text', usetex = True)
label_size = 20
label_size2 = 20
label_size3 = 20
plt.rc('text', usetex = True)
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
lw = 1
i = 0
fig5 = plt.figure(figsize=(7,5))
for gD in gD_list:
plt.plot(w0*t, jzt_list[i]/(N/2), '-',
label = r"$\gamma_\phi/\omega_x={}$".format(gD), linewidth = 2*lw+0.4*i)
i = i+1
plt.ylim([-1,1])
#plt.title(r'Total inversion', fontsize = label_size2)
plt.xlabel(r'$\omega_x t$', fontsize = label_size3)
plt.ylabel(r'$\langle J_z \rangle (t)$', fontsize = label_size3)
plt.legend(fontsize = label_size3*0.8)
plt.show()
plt.close()
#cooperativity
plt.rc('text', usetex = True)
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
fig8 = plt.figure(figsize=(7,5))
i=0
for gD in gD_list:
plt.plot(w0*t, (jz2t_list[i] -jzt_list[i] + jpjmt_list[i])/((N/2*(N/2+1))),
'-', label = r"$\gamma_\phi/\omega_x={}$".format(gD), linewidth = 2*lw+0.4*i)
i = i+1
plt.ylim([0,2.])
plt.xlabel(r'$\omega_x t$', fontsize = label_size3)
plt.ylabel(r'$\langle J^2 \rangle (t)$', fontsize = label_size3)
plt.legend(fontsize = label_size3*0.8)
plt.title(r'Cooperativity', fontsize = label_size2)
plt.show()
plt.close()
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
fig6 = plt.figure(figsize=(8,6))
i=0
for gD in gD_list:
plt.plot(w0*t, jpjmt_list[i]/(N/2)**2, label = r"$\gamma_\phi/\omega_x={}$".format(gD), linewidth = 2*lw+0.4*i)
i = i+1
#plt.ylim([-1,1])
plt.xlabel(r'$\omega_x t$', fontsize = label_size3)
plt.ylabel(r'$\langle J_{+}J_{-} \rangle (t)$', fontsize = label_size3)
plt.legend(fontsize = label_size3*0.7)
plt.title(r'Light emission', fontsize = label_size2)
plt.show()
plt.close()
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
fig7 = plt.figure(figsize=(7,5))
i=0
for gD in gD_list:
plt.plot(w0*t, jz2t_list[i]/(N/2), '-', label = r"$\gamma_\phi/\omega_x={}$".format(gD), linewidth = 2*lw+0.4*i)
i = i+1
#plt.ylim([-1,1])
plt.xlabel(r'$\omega_x t$', fontsize = label_size3)
plt.ylabel(r'$\langle J_z^2 \rangle (t)$', fontsize = label_size3)
plt.legend(fontsize = label_size3*0.7)
plt.title(r'Second moment', fontsize = label_size2)
plt.show()
plt.close()
# Study of local incoherent losses
N = 20
print(N)
w0 = 1
kappa = 0.5 * w0
gCE = 2*kappa /N
gE = 0
gP = 0
gD = 0
gCD = 0
gCP = 0
h = w0 * jx
nt = 1001
td0 = kappa
tmax = 200 * td0
t = np.linspace(0, tmax, nt)
rho0 = dicke(N, N/2, N/2)
jzt_list = []
jpjmt_list = []
jz2t_list = []
gE_list = [0, 0.01, 0.1, 1]
for gE in gE_list:
print(gE)
system = Dicke(N=N)
system.collective_emission = gCE
system.emission = gE
system.dephasing = gD
system.pumping = gP
system.collective_pumping = gCP
system.collective_dephasing = gCD
# energy / dynamics numerical
system.hamiltonian = h
liouv = system.liouvillian()
result = mesolve(liouv, rho0, t, [], e_ops = [jz, jp*jm, jz*jz], options = Options(store_states=True))
rhot = result.states
jz_t = result.expect[0]
jpjm_t = result.expect[1]
jz2_t = result.expect[2]
jzt_list.append(jz_t)
jpjmt_list.append(jpjm_t)
jz2t_list.append(jz2_t)
# gD_list.append(gD)
plt.rc('text', usetex = True)
label_size = 20
label_size2 = 20
label_size3 = 20
plt.rc('text', usetex = True)
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
lw = 1
i = 0
fig5 = plt.figure(figsize=(7,5))
for gE in gE_list:
    plt.plot(w0*t, jzt_list[i]/(N/2), '-', label = r"$\gamma_\downarrow/\omega_x={}$".format(gE), linewidth = 2*lw+0.4*i)
i = i+1
plt.ylim([-1,1])
#plt.title(r'Total inversion', fontsize = label_size2)
plt.xlabel(r'$\omega_x t$', fontsize = label_size3)
plt.ylabel(r'$\langle J_z \rangle (t)$', fontsize = label_size3)
plt.legend(fontsize = label_size3*0.8)
fname = 'figures/btc_jzt_N{}_gE.pdf'.format(N)
if savefile == True:
plt.savefig(fname, bbox_inches='tight')
plt.show()
plt.close()
#cooperativity
plt.rc('text', usetex = True)
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
fig8 = plt.figure(figsize=(7,5))
i=0
for gE in gE_list:
    plt.plot(w0*t, (jz2t_list[i] -jzt_list[i] + jpjmt_list[i])/((N/2*(N/2+1))),
             '-', label = r"$\gamma_\downarrow/\omega_x={}$".format(gE), linewidth = 2*lw+0.4*i)
i = i+1
plt.ylim([0,2.])
plt.xlabel(r'$\omega_x t$', fontsize = label_size3)
plt.ylabel(r'$\langle J^2 \rangle (t)$', fontsize = label_size3)
plt.legend(fontsize = label_size3*0.8)
plt.title(r'Cooperativity', fontsize = label_size2)
plt.show()
plt.close()
"""
Explanation: The Figure above reproduces qualitatively the study performed in Ref. [4].
Time evolution of collective operators, such as $\langle J_z (t)\rangle$
End of explanation
"""
qutip.about()
"""
Explanation: The plots above integrate the study on the effect of local dissipation performed in Ref. [1]. The boundary time crystals were introduced in Ref. [4]. A study of the effect of inhomogenous broadening (non-identical two level systems) is performed in Ref. [7] with regard to boundary time crystals and in Ref. [8] with regards to Dicke superradiance.
References
[1] N. Shammah, S. Ahmed, N. Lambert, S. De Liberato, and F. Nori,
Open quantum systems with local and collective incoherent processes: Efficient numerical simulation using permutational invariance https://arxiv.org/abs/1805.05129
The PIQS library can be found at https://github.com/nathanshammah/piqs/
[2] R. Bonifacio and L. A. Lugiato, Optical bistability and cooperative effects in resonance fluorescence, Phys. Rev. A 18, 1129 (1978)
[3] S. Sarkar and J. S. Satchell, Optical bistability with small numbers of atoms, Europhys. Lett. 3, 797 (1987)
[4] F. Iemini, A. Russomanno, J. Keeling, M. Schirò, M. Dalmonte, and R. Fazio, Boundary Time Crystals, arXiv:1708.05014 (2017)
[5] V. V. Albert and L. Jiang, Symmetries and conserved quantities in Lindblad master equations, Phys. Rev. A 89, 022118 (2014)
[6] J.R. Johansson, P.D. Nation, and F. Nori, Comp. Phys. Comm. 183, 1760 (2012) http://qutip.org
[7] K. Tucker, B. Zhu, R. Lewis-Swan, J. Marino, F. Jimenez, J. Restrepo, and A. M. Rey, arXiv:1805.03343 (2018)
[8] N. Lambert, Y. Matsuzaki, K. Kakuyanagi, N. Ishida, S. Saito, and F. Nori, Phys. Rev. B 94, 224510 (2016).
End of explanation
"""
|
squishbug/DataScienceProgramming | 05-Operating-with-Multiple-Tables/HW05/CheckHomework05.ipynb | cc0-1.0 | import pandas as pd
import numpy as np
"""
Explanation: Check Homework HW05
Use this notebook to check your solutions. This notebook will not be graded.
End of explanation
"""
import hw5_answers
import importlib
importlib.reload(hw5_answers)  # reload in case hw5_answers.py changed
from hw5_answers import *
"""
Explanation: Now, import your solutions from hw5_answers.py. The following code looks a bit redundant. However, we do this to allow reloading the hw5_answers.py in case you made some changes. Normally, Python assumes that modules don't change and therefore does not try to import them again.
End of explanation
"""
Employees = pd.read_excel('/home/data/AdventureWorks/Employees.xls')
Territory = pd.read_excel('/home/data/AdventureWorks/SalesTerritory.xls')
Customers = pd.read_excel('/home/data/AdventureWorks/Customers.xls')
Orders = pd.read_excel('/home/data/AdventureWorks/ItemsOrdered.xls')
"""
Explanation: The Employees, Territory, Customers, and Orders tables are the same as those we used in class.
End of explanation
"""
df1 = get_manager(Employees)
print("Shape of resulting table: ", df1.shape)
print("Columns: ", ', '.join(df1.columns))
df1.head()
"""
Explanation: Problem 1
Write a function called get_manager that takes as its one argument the Pandas DataFrame "Employees" and returns a DataFrame containing list of all employees (EmployeeID, first name, middle name, last name), and their manager's first and last name. The columns in the output DataFrame should be: EmployeeID, FirstName, MiddleName, LastName, ManagerFirstName, ManagerLastName.
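As a hint of the shape of the operation (not the graded solution), a self-merge on a toy table works like this — the ManagerID column name is an assumption about the real Employees table:

```python
import numpy as np
import pandas as pd

emp = pd.DataFrame({
    "EmployeeID": [1, 2, 3],
    "FirstName": ["Ann", "Bob", "Cem"],
    "LastName": ["Ada", "Bee", "Cee"],
    "ManagerID": [np.nan, 1, 1],   # Ann manages Bob and Cem
})
# merge the table with itself: each employee's ManagerID is looked up
# among the EmployeeIDs, pulling in the manager's name columns
out = emp.merge(
    emp[["EmployeeID", "FirstName", "LastName"]],
    left_on="ManagerID", right_on="EmployeeID",
    how="left", suffixes=("", "_mgr"),
).rename(columns={"FirstName_mgr": "ManagerFirstName",
                  "LastName_mgr": "ManagerLastName"})
print(out.loc[1, "ManagerFirstName"])  # Ann
```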
End of explanation
"""
df2 = get_spend_by_order(Orders, Customers)
print("Shape of resulting table: ", df2.shape)
print("Columns: ", ', '.join(df2.columns))
df2.head()
"""
Explanation: Shape of resulting table: (291, 6)
Columns: EmployeeID, FirstName, MiddleName, LastName, ManagerFirstName, ManagerLastName
|   | EmployeeID | FirstName | MiddleName | LastName | ManagerFirstName | ManagerLastName |
|---|------------|-----------|------------|----------|------------------|-----------------|
| 0 | 259 | Ben | T | Miller | Sheela | Word |
| 1 | 278 | Garrett | R | Vargas | Stephen | Jiang |
| 2 | 204 | Gabe | B | Mares | Peter | Krebs |
| 3 | 78 | Reuben | H | D'sa | Peter | Krebs |
| 4 | 255 | Gordon | L | Hee | Sheela | Word |
Problem 2
Write a functon called get_spend_by_order that takes as its two arguments the Pandas DataFrames "Orders" and "Customers", and returns a DataFrame with the following columns: "FirstName", "LastName", "Item", "TotalSpent", listing all cutomer names, their purchased items, and the total amount spend on that item (remember that the "Price" listed in "Orders" is the price per item).
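A toy sketch of the merge-then-aggregate pattern (column names are assumptions mirroring the real tables; this is not the graded solution). The key point is that Price is per item, so spend must be computed before grouping:

```python
import pandas as pd

orders = pd.DataFrame({"CustomerID": [1, 1, 2],
                       "Item": ["Tent", "Tent", "Umbrella"],
                       "Price": [88.0, 88.0, 4.5],
                       "Quantity": [1, 2, 1]})
customers = pd.DataFrame({"CustomerID": [1, 2],
                          "FirstName": ["Conrad", "Anthony"],
                          "LastName": ["Giles", "Sanchez"]})

orders["TotalSpent"] = orders["Price"] * orders["Quantity"]  # price is per item
out = (orders.merge(customers, on="CustomerID")
             .groupby(["FirstName", "LastName", "Item"], as_index=False)
             ["TotalSpent"].sum())
print(out)
```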
End of explanation
"""
df3 = get_order_location(Orders, Customers, Territory)
print "Shape of resulting table: ", df3.shape
print "Columns: ", ', '.join(df3.columns)
df3.head()
"""
Explanation: Shape of resulting table: (32, 4)
Columns: FirstName, LastName, Item, TotalSpent
|   | FirstName | LastName | Item | TotalSpent |
|---|-----------|----------|------|------------|
| 0 | Anthony | Sanchez | Umbrella | 4.5 |
| 1 | Conrad | Giles | Ski Poles | 25.5 |
| 2 | Conrad | Giles | Tent | 88.0 |
| 3 | Donald | Davids | Lawnchair | 32.0 |
| 4 | Elroy | Keller | Inflatable Mattress | 38.0 |
Problem 3
Write a function called get_order_location that takes three arguments: "Orders", "Customers", and "Territory", and returns a DataFrame containing the following columns: "CustomerID", "Name", and "TotalItems", that gives, for each order, the CustomerID, the name of the territory where the order was placed, and the total number of items ordered (yes, 2 ski poles counts as 2 items).
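Again as a toy sketch of the chained merge (the TerritoryID linking column is an assumption about how Customers and Territory relate; not the graded solution):

```python
import pandas as pd

orders = pd.DataFrame({"CustomerID": [10315, 10438, 10438],
                       "Quantity": [1, 2, 1]})
customers = pd.DataFrame({"CustomerID": [10315, 10438],
                          "TerritoryID": [3, 3]})
territory = pd.DataFrame({"TerritoryID": [3], "Name": ["Central"]})

# chain two merges, then sum the item counts per order's customer
out = (orders.merge(customers, on="CustomerID")
             .merge(territory, on="TerritoryID")
             .groupby(["CustomerID", "Name"], as_index=False)["Quantity"]
             .sum()
             .rename(columns={"Quantity": "TotalItems"}))
print(out)
```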
End of explanation
"""
df4 = employee_info(Employees)
print "Shape of resulting table: ", df4.shape
print "Columns: ", ', '.join(df4.columns)
df4.head()
"""
Explanation: Shape of resulting table: (11, 3)
Columns: CustomerID, Name, TotalItems
|   | CustomerID | Name | TotalItems |
|---|------------|------|------------|
| 0 | 10315 | Central | 1 |
| 1 | 10438 | Central | 3 |
| 2 | 10439 | Central | 2 |
| 3 | 10101 | Northwest | 6 |
| 4 | 10299 | Northwest | 2 |
Problem 4
Write a function called employee_info that takes one argument: "Employees", and returns a DataFrame containing the following columns: JobTitle, NumberOfEmployees, and MeanVacationHours, containing all job titles, the number of employees with that job title, and the mean number of vacation days for employees with that job title.
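A toy sketch of the named-aggregation pattern this problem calls for (column names assumed from the real Employees table; not the graded solution):

```python
import pandas as pd

emp = pd.DataFrame({"JobTitle": ["Buyer", "Buyer", "Engineer"],
                    "VacationHours": [40, 60, 20]})
# one count and one mean per job title, with output column names chosen here
out = (emp.groupby("JobTitle")["VacationHours"]
          .agg(NumberOfEmployees="count", MeanVacationHours="mean")
          .reset_index())
print(out)
```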
End of explanation
"""
|
wmvanvliet/neuroscience_tutorials | conpy-intro/MEG_connectivity_exercise.ipynb | bsd-2-clause | # Don't worry about warnings in this exercise, as they can be distracting.
import warnings
warnings.simplefilter('ignore')
# Import the required Python modules
import mne
import conpy
import surfer
# Import and configure the 3D graphics backend
from mayavi import mlab
mlab.init_notebook('png')
# Tell MNE-Python to be quiet. The normal barrage of information will only distract us. Only display errors.
mne.set_log_level('ERROR')
# Configure the plotting interface to display plots in their own windows.
# The % sign makes it a "magic" command: a command meant for the notebook environment, rather than a command for Python.
%matplotlib notebook
# Tell MNE-Python and PySurfer where to find the brain model
import os
os.environ['SUBJECTS_DIR'] = 'data/subjects'
# Let's test plotting a brain (this is handled by the PySurfer package)
surfer.Brain('sample', hemi='both', surf='pial')
"""
Explanation: Intro to ConPy: functional connectivity estimation of MEG signals
Welcome to this introductory tutorial for the ConPy package.
Together with MNE-Python, we can use it to perform functional connectivity estimation of MEG data.
This tutorial was written to be used as an exercise during my lectures.
In lieu of my lecture, you can read this paper to get the theoretical background you need to understand the concepts we will be dealing with in this exercise.
Ok, let's get started!
I have similated some data for you.
It's already stored on the virtual server you are talking to right now.
In this simulation, a placed a couple of dipole sources on the cortex, sending out a signal in a narrow frequency band.
During the first part of the recording, these sources are incoherent with each other.
During the second part, some of the sources become coherent with each other.
Your task is to find out:
At which frequency are the sources sending out a signal?
How many dipole sources did I place on the cortex?
Where are the sources located on the cortex?
Which sources are coherent in the second part of the recording?
We will use MNE-Python and ConPy to aswer the above questions.
Loading the required Python modules and configuring the environment
Executing the code cell below will load the required Python modules and configure some things.
If all goes well, you'll be rewarded with a plot of a brain:
End of explanation
"""
mne.read_epochs?
"""
Explanation: Loading the simulation data
If the code in the above cell ran without any errors, we're good to go. Let's load the simulation data. It is stored as an MNE-Python Epochs file. To load it, you must use the mne.read_epochs function. To see how it works, you need to take a look at the documentation for this function. You can call up the documentation of any function by appending a ? to the function name, like so:
End of explanation
"""
# Write your Python code here
# If your code in the above cell is correct, executing this cell prints some information about the data
print(epochs)
"""
Explanation: The documentation shows us that mne.read_epochs takes one required parameter (fname) and three optional parameters (proj, preload and verbose). You can recognize optional parameters by the fact that they have a default value assigned to them. In this exercise, you can always leave the optional parameters as they are, unless explicitly instructed to change them.
So, the only parameter of interest right now is the fname parameter, which must be a string containing the path and filename of the simulated data, namely: 'data/simulated-data-epo.fif'.
Go ahead and call the mne.read_epochs function to load the simulated data. Store the result in a variable called epochs:
End of explanation
"""
# The semicolon at the end prevents the image from being included in this notebook
epochs.plot();
"""
Explanation: "Epochs" are snippets of MEG sensor data. In this simulation, all sensors are gradiometers. There are two epochs, approximately 10 seconds in length: one epoch corresponding to the (simulated) subject "at rest" and one epoch corresponding to the subject performing some task.
Most objects we'll be working with today have a plot method. For example, the cell below will plot the epochs object:
End of explanation
"""
# Write here the code to plot the PSD of the MEG signal
"""
Explanation: In the epochs plot, you can use the scrolling function of your mouse/trackpad to browse through the channels. The vertical dashed line indicates where one epoch ends and the next one begins.
Question 1: At which frequency are the sources sending out a signal?
To find out, let's plot the power spectral density (PSD) of the signal. The PSD is computed by applying a Fourier transform to the data of each MEG sensor. We can use the plot_psd method of the epochs object to show it to us. By default, it will show us the average PSD across the sensors, which is good enough for our purposes. Check the documentation of the epochs.plot_psd method to see what parameters are required (remember: you are free to ignore the optional parameters).
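The idea behind a PSD can be sketched with plain NumPy, independently of MNE (a toy signal with a made-up 12 Hz component and sampling rate, not the simulated MEG data):

```python
import numpy as np

# Toy signal: a 12 Hz sine plus noise, sampled at 200 Hz for 5 seconds.
sfreq = 200.0
t = np.arange(0, 5, 1.0 / sfreq)
rng = np.random.RandomState(0)
signal = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.randn(t.size)

# Power spectrum: squared magnitude of the real FFT.
power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0 / sfreq)

peak_freq = freqs[np.argmax(power[1:]) + 1]  # skip the DC bin
print(peak_freq)  # the peak sits at (approximately) 12 Hz
```

The same reasoning applies to the MEG sensors: the frequency with the strongest peak in the PSD is the answer to the question.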
End of explanation
"""
# Fill in the source frequency, in Hertz
source_frequency = ###
"""
Explanation: If you were to name one frequency at which the sources are sending out a signal, what would that frequency be? Fill in the answer below. We'll use it in the upcoming tasks:
End of explanation
"""
# Write here the code to plot some PSD topomaps
"""
Explanation: Question 2: How many sources did I simulate?
Ok, so now we know the frequency at which to look for sources. How many dipole sources did I use in the simulation? To find out, we must look at which sensors have the most activity at the frequency of the sources. The plot_psd_topomap method of the epochs object can do that for us. If you call it with the default parameters, it will plot so called "topomaps" for the following frequency bands:
|Name | Frequency band
|------|---------------
|Delta | 0-4 Hz
|Theta | 4-8 Hz
|Alpha | 8-12 Hz
|Beta | 12-30 Hz
|Gamma | 30-45 Hz
Try it now: take a look at the documentation for the epochs.plot_psd_topomap method and plot some topomaps:
End of explanation
"""
number_of_sources = ###
"""
Explanation: Take a look at the topomap corresponding to the frequency band that contains the frequency at which the sources are sending out their signal. How many sources do you think I simulated? Fill in your answer below:
End of explanation
"""
# Write here the code to construct a CSD matrix
# If the code in the cell above is correct, executing this cell will plot the CSD matrix
csd.plot()[0]
"""
Explanation: Question 3: Where are the sources located on the cortex?
Looking at the topomaps will give you a rough location of the sources, but let's be more exact. We will now use a DICS beamformer to localize the sources on the cortex.
To construct a DICS beamformer, we must first estimate the cross-spectral density (CSD) between all sensors. You can use the mne.time_frequency.csd_morlet function to do so. Go check its documentation.
You will find that one of the parameters is a list of frequencies at which to compute the CSD. Use a list containing a single frequency: the answer to Question 1 that you stored earlier in the source_frequency variable. In Python code, the list can be written like this: [source_frequency]. Store the result of mne.time_frequency.csd_morlet in a variable called csd.
End of explanation
"""
# Write your code to read the forward solution here
# If the code in the above cell is correct, executing this cell will plot the source grid
fwd['src'].plot(trans='data/simulated-data-trans.fif')
"""
Explanation: If you examine the CSD matrix closely, you can already spot which sources are coherent with each other. Sssshhh! we'll look at it in more detail later. For now, let's compute the DICS beamformer!
The next functions to call are mne.beamformer.make_dics and mne.beamformer.apply_dics_csd. Lets examine them more closely.
mne.beamformer.make_dics
This function will create the DICS beamformer weights. These weights are spatial filters: each filter will only pass activity for one specific location on the cortex, at one specific frequency(-band). In order to do this, we'll need a leadfield: a model that simulates how signals on the cortex manifest as magnetic fields as measured by the sensors. MNE-Python calls them "forward solutions". Luckily we have one lying around: the 'data/simulated-data-fwd.fif' file contains one. You can load it with the mne.read_forward_solution function. Take a look at the documentation for that function and load the forward solution in the variable fwd:
End of explanation
"""
# Write your code to compute the DICS filters here
# If the code in the above cell is correct, executing this cell will print some information about the filters
print('Filters have been computed for %d points on the cortex at %d frequency.' %
(filters['weights'].shape[1], filters['weights'].shape[0]))
print('At each point, there are %d source dipoles (XYZ)' % filters['n_orient'])
"""
Explanation: For this exercise, we use a very sparse source grid (the yellow dots in the plot). This grid is enough for our purposes and our computations will run quickly. For real studies, I recommend a much denser grid.
Another thing you'll need for the DICS beamformer is an Info object. This object contains information about the location of the MEG sensors and so forth. The epochs object provides one as epochs.info. Try running print(epochs.info) to check it out.
Now you should have everything you need to create the DICS beamformer weights using the mne.beamformer.make_dics function. Store the result in the variable filters:
End of explanation
"""
# Write your code to compute the power map here
# If the code in the above cell is correct, executing the cell will plot the power map
power_map.plot(hemi='both', smoothing_steps=20);
"""
Explanation: mne.beamformer.apply_dics_csd
With the DICS filters computed, making a cortical power map is straightforward. The mne.beamformer.apply_dics_csd will do it for you. The only new thing here is that this function will return two things (up to now, all our functions only returned one thing!). Don't panic. The Python syntax for dealing with it is like this:
python
power_map, frequencies = mne.beamformer.apply_dics_csd(...)
See? It returns both the power_map that we'll visualize in a minute, and a list of frequencies for which the power map is defined. Go read the documentation for mne.beamformer.apply_dics_csd and make the powermap:
End of explanation
"""
# Write your code to find the seed point here
# If the code in the above cell is correct, executing this cell will plot the seed point on the power map
brain = power_map.plot(hemi='both', smoothing_steps=20) # Plot power map
# We need to find out on which hemisphere the seed point lies
lh_verts, rh_verts = power_map.vertices
if seed_point < len(lh_verts):
# Seed point is on the left hemisphere
brain.add_foci(lh_verts[seed_point], coords_as_verts=True, hemi='lh')
else:
# Seed point is on the right hemisphere
brain.add_foci(rh_verts[seed_point - len(lh_verts)], coords_as_verts=True, hemi='rh')
"""
Explanation: Use the mouse/trackpad to rotate the brain around. Can you find the sources on the cortex? Even though I've simulated them as dipole sources, they show more as "blobs" in the power map. This is called spatial leaking and is due to various inaccuracies and limitations of the DICS beamformer filters.
Question 4: Which sources are coherent in the second part of the recording?
The simulated recording consists of two parts (=epochs): during the first epoch, our simulated subject is at rest and the sources are not coherent. During the second epoch, our simulated subject is performing a task that causes some of the sources to become coherent. It's finally time for some connectivity estimation!
We'll first tackle one-to-all connectivity, as it is much easier to visualize and the results are less messy. Afterward, we'll move on to all-to-all connectivity.
One-to-all connectivity estimation
For this, we must first define a "seed region": one of the source points for which we will estimate coherence with all other source points. A common choice is to use the power map to find the source point with the most power. To find this point, you can use the .argmax() method of the power_map.data object. This is a method that all data arrays have. It will return the index of the maximum element in the array, which in the case of our power_map.data array will be the source point with the maximum power.
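As a minimal illustration of what `.argmax()` returns (made-up power values, not the real power map):

```python
import numpy as np

# Hypothetical power values for five source points.
fake_power = np.array([0.1, 0.7, 0.3, 0.9, 0.2])
seed = fake_power.argmax()  # index of the maximum element
print(seed)  # -> 3
```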
Go find your seed point and store it in the variable seed_point:
End of explanation
"""
# Splitting the data is not hard to do.
epochs_rest = epochs['rest']
epochs_task = epochs['task']
"""
Explanation: You may need to rotate the brain around to find the seed point. It should be drawn as a white sphere.
Up to now, we've been using all data. However, we know our sources are only coherent during the second part. Executing the cell below will split the data into a "rest" and "task" part.
End of explanation
"""
# Write your code here to compute the CSD on the epochs_task data
# If the code in the above cell is correct, executing this cell will plot the CSD matrix
csd_task.plot()[0]
"""
Explanation: To estimate connectivity for just the epochs_task part, we need to compute the CSD matrix on only this data. You've computed a CSD matrix before, so rinse and repeat: compute the CSD on just the epochs_task data and store it in the csd_task variable:
End of explanation
"""
# Write your code here to compute one-to-all connectivity for the "task" data
# If the code in the above cell is correct, executing this cell will print some information about the connectivity
print(con_task)
"""
Explanation: Now you are ready to compute one-to-all connectivity using DICS. It will take two lines of Python code. First, you'll need to use the conpy.one_to_all_connectivity_pairs function to compute the list of connectivity pairs. Then, you can use the conpy.dics_connectivity function to perform the connectivity estimation. Check the documentation for both functions (remember: you can leave all optional parameters as they are) and store your connectivity result in the con_task variable:
End of explanation
"""
# Write your code here to compute the coherence map for the epochs_task data
# If the code in the above cell is correct, executing this cell will plot the coherence map
brain = coherence_task.plot(hemi='both', smoothing_steps=20)
lh_verts, rh_verts = coherence_task.vertices
if seed_point < len(lh_verts):
# Seed point is on the left hemisphere
brain.add_foci(lh_verts[seed_point], coords_as_verts=True, hemi='lh')
else:
# Seed point is on the right hemisphere
brain.add_foci(rh_verts[seed_point - len(lh_verts)], coords_as_verts=True, hemi='rh')
"""
Explanation: To visualize the connectivity result, we can create a cortical map, where the value at each source point is the coherence between the source point and the seed region. The con_task object defines a .make_stc() method that will do just that. Take a look at its documentation and store the map in the coherence_task variable:
End of explanation
"""
# Write your code here to compute connectivity for the epochs_rest data and make a coherence map
# If the code in the above cell is correct, executing this cell will plot the coherence map
brain = coherence_rest.plot(hemi='both', smoothing_steps=20)
lh_verts, rh_verts = coherence_rest.vertices
if seed_point < len(lh_verts):
# Seed point is on the left hemisphere
brain.add_foci(lh_verts[seed_point], coords_as_verts=True, hemi='lh')
else:
# Seed point is on the right hemisphere
brain.add_foci(rh_verts[seed_point - len(lh_verts)], coords_as_verts=True, hemi='rh')
"""
Explanation: Which source points seem to be in coherence with the seed point? Double-click on the text-cell below to edit it and write down your answer.
Double-click here to edit this text cell. Pressing CTRL+Enter will transform it back into formatted text.
Congratulations! You have now answered the original 4 questions.
If you have some time left, you may continue below to explore all-to-all connectivity.
If you examine the coherence map, you'll find that the regions surrounding the seed point are coherent with the seed point. This is not because there are active coherent sources there, but because of the spatial leakage. You will always find this coherence. Make a one-to-all coherence map like you did above, but this time for the epochs_rest data (in which none of the sources are coherent). Store the connectivity in the con_rest variable and the coherence map in the coherence_rest variable:
End of explanation
"""
# Write your code here to compute a contrast between the "task" and "rest" connectivity and make a coherence map
# If the code in the above cell is correct, executing this cell will plot the coherence map
brain = coherence_contrast.plot(hemi='both', smoothing_steps=20)
lh_verts, rh_verts = coherence_contrast.vertices
if seed_point < len(lh_verts):
# Seed point is on the left hemisphere
brain.add_foci(lh_verts[seed_point], coords_as_verts=True, hemi='lh')
else:
# Seed point is on the right hemisphere
brain.add_foci(rh_verts[seed_point - len(lh_verts)], coords_as_verts=True, hemi='rh')
"""
Explanation: See? You'll find that also when no coherent sources are active, there is an area of coherence surrounding the seed region. This will be a major problem when attempting to estimate all-to-all connectivity.
One way to deal with the spatial leakage problem is to make a contrast between the "task" and "rest" segments. Since the coherence due to spatial leakage is the same for both segments, it should cancel out.
Connectivity objects, like con_task and con_rest, have support for common math operators like +, -, * and /. Creating a contrast between the two objects is therefore as simple as con_task - con_rest. In the cell below, make a new coherence map of the contrast and store it in the coherence_contrast variable:
End of explanation
"""
# Write your code to produce all-to-all connectivity estimates for the "rest" and "task" segments
# and the contrast between them.
# If the code in the above cell is correct, executing this cell will print some information about the connectivity
print(all_to_all_contrast)
"""
Explanation: If all went well, you'll see that the coherence due to spatial leakage has disappeared from the coherence map.
All-to-all connectivity
Use the conpy.all_to_all_connectivity_pairs function to compute the connectivity pairs in an all-to-all manner. Then, use the conpy.dics_connectivity function like before to create a connectivity object for the "task" and a connectivity object for the "rest" data segments. Store them in the all_to_all_task and all_to_all_rest variables. You'll notice that computing all-to-all connectivity takes a while... Finally, create the contrast between them and store it in the all_to_all_contrast variable.
End of explanation
"""
# This cell will plot the coherence map
all_to_all_coherence = all_to_all_contrast.make_stc()
all_to_all_coherence.plot(hemi='both', smoothing_steps=20);
"""
Explanation: How to visualize this all-to-all connectivity? This is a question worth pondering a bit. But for this exercise, we can get away with producing a coherence map like we did with the one-to-all connectivity. The value of the coherence map is, for each source point, the sum of the coherence of all connections from and to the source point.
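The summation described above can be sketched with plain NumPy (made-up connection pairs and coherence values):

```python
import numpy as np

# Toy sketch: per-vertex sum of coherence over a list of (i, j) connections.
pairs = np.array([[0, 1], [0, 2], [1, 2]])  # hypothetical connections
coh = np.array([0.8, 0.1, 0.4])             # coherence of each connection
degree = np.zeros(3)
for (i, j), c in zip(pairs, coh):
    degree[i] += c
    degree[j] += c
print(degree)  # vertex 1 touches the two strongest connections
```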
Executing the cell below will plot this coherence map. Can you spot the connectivity between the sources?
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/td1a_soft/td1a_sql.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 1A.soft - SQL basics
First steps with the SQL language.
End of explanation
"""
from pyensae.datasource import download_data
download_data("td8_velib.zip", website = 'xd')
"""
Explanation: The SQL language is used to manipulate databases. Put simply, databases are used to access information quickly within data that sometimes spans several billion rows.
The spreadsheet we are working with is too large (e.g. sorting 50,000 rows).
We want to perform operations across two Excel sheets (matching the rows of one with those of the other).
When the volume of data is large, it is impossible to view it all at once; we can only look at a subset or an aggregation of it. For example, the company operating the Vélib bike-sharing system has opened access to its data. A complete snapshot of the bikes and stands available at every station in Paris can be downloaded as often as desired (every minute, say): this is a table that grows by 1,300 rows every minute.
Getting the data
End of explanation
"""
from pyensae.sql import import_flatfile_into_database
dbf = "td8_velib.db3"
import_flatfile_into_database(dbf, "td8_velib.txt") # 2 seconds
import_flatfile_into_database(dbf, "stations.txt", table="stations") # 2 minutes
"""
Explanation: We create a sqlite3 database. It can be browsed with a tool such as SqliteSpy on Windows, or sqlite_bro on any OS.
End of explanation
"""
import os
[ _ for _ in os.listdir(".") if ".db3" in _]
"""
Explanation: You should see a .db3 file.
End of explanation
"""
%load_ext pyensae
%SQL_connect td8_velib.db3
"""
Explanation: ## First SQL queries
In our case, we will do this from the notebook.
End of explanation
"""
%SQL_tables
"""
Explanation: Let's look at the tables in the database.
End of explanation
"""
%SQL_schema stations
%SQL_schema td8_velib
"""
Explanation: Let's look at the columns of each table.
End of explanation
"""
%%SQL
SELECT * FROM td8_velib LIMIT 10
"""
Explanation: And finally, let's look at the first few rows.
End of explanation
"""
%%SQL
SELECT * FROM td8_velib WHERE last_update >= '2013-09-13 10:00:00' AND last_update <= '2013-09-13 11:00:00'
"""
Explanation: We select the data within a given time range.
End of explanation
"""
%%SQL
SELECT available_bike_stands, available_bikes FROM td8_velib
WHERE last_update >= '2013-09-13 10:00:00' AND last_update <= '2013-09-13 11:00:00'
ORDER BY available_bike_stands DESC ;
"""
Explanation: Select certain columns and sort the values.
End of explanation
"""
%%SQL
SELECT last_update, available_bike_stands + available_bikes AS place, number FROM td8_velib
WHERE last_update >= '2013-09-13 10:00:00' AND last_update <= '2013-09-13 11:00:00'
ORDER BY place DESC ;
"""
Explanation: Count the number of stands at each station.
End of explanation
"""
%%SQL --help
.
"""
Explanation: By default, the %%SQL command only displays the first ten rows.
End of explanation
"""
%%SQL -n 5 --df=df
SELECT last_update, available_bike_stands + available_bikes AS place, number FROM td8_velib
WHERE last_update >= '2013-09-13 10:00:00' AND last_update <= '2013-09-13 11:00:00'
ORDER BY place DESC ;
df.tail()
"""
Explanation: We display 5 rows and store the result in a dataframe.
End of explanation
"""
%%SQL
SELECT MAX(available_bike_stands) FROM td8_velib
"""
Explanation: The maximum number of available stands at a station.
End of explanation
"""
%%SQL
SELECT "min" AS label, MIN(available_bike_stands) FROM td8_velib
UNION ALL
SELECT "max" AS label, MAX(available_bike_stands) FROM td8_velib
"""
Explanation: And the minimum.
End of explanation
"""
%%SQL
SELECT DISTINCT number FROM td8_velib
"""
Explanation: All station numbers, without duplicates.
End of explanation
"""
%%SQL
SELECT COUNT(*) FROM (
SELECT DISTINCT number FROM td8_velib
)
"""
Explanation: Count the number of stations (1230).
End of explanation
"""
%%SQL --df=df
SELECT last_update, SUM(available_bikes) AS velo_disponible
FROM td8_velib
GROUP BY last_update
ORDER BY last_update
"""
Explanation: Exercise 1
Determine the number of distinct values in the last_update column.
Determine the first and last dates.
GROUP BY
The GROUP BY instruction aggregates values (min, max, sum) over a set of rows sharing the same set of values (or key).
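Before applying it to the Vélib table, here is a tiny self-contained sketch of GROUP BY on an in-memory sqlite3 database (made-up station numbers and counts):

```python
import sqlite3

# Toy table: two stations, several five-minute snapshots each.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bikes (station INTEGER, available INTEGER)")
conn.executemany("INSERT INTO bikes VALUES (?, ?)",
                 [(1, 3), (1, 5), (2, 0), (2, 2), (2, 4)])
rows = conn.execute(
    "SELECT station, SUM(available) AS total FROM bikes "
    "GROUP BY station ORDER BY station").fetchall()
conn.close()
print(rows)  # -> [(1, 8), (2, 6)]
```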
End of explanation
"""
df.tail()
"""
Explanation: The result is a table with small values at the beginning and large ones toward the end. This is due to the way the database was built. Some stations are out of service, and their last arrival or departure dates back several days. Each time the Vélib data is fetched, we get, for each station, the last arrival or departure of a bike; the last_update field holds that date. Only dates after 2013-09-10 11:30:19 should be considered.
End of explanation
"""
%%SQL --df=df
SELECT last_update, SUM(available_bikes) AS velo_disponible, COUNT(DISTINCT number) AS stations
FROM td8_velib
--WHERE last_update >= "2013-09-10 11:30:19"
GROUP BY last_update
ORDER BY last_update
"""
Explanation: Exercise 1b
What does the following query do? What happens if you remove the -- symbols (i.e. uncomment the WHERE condition)?
End of explanation
"""
%%SQL --df=df
SELECT last_update,
CASE WHEN available_bikes>0 THEN 1 ELSE 0 END AS vide,
COUNT(*) AS nb
FROM td8_velib
WHERE last_update >= "2013-09-10 11:30:19"
GROUP BY last_update, vide
ORDER BY last_update
"""
Explanation: And this one?
End of explanation
"""
%%SQL
SELECT A.*, B.name -- append the station name to each row
FROM td8_velib AS A
JOIN stations AS B
ON A.number == B.number
"""
Explanation: Exercise 2
For each station, count the number of five-minute time slots during which no bike is available.
Exercise 3
If $X(s)$ denotes the number of five-minute time slots during which no bike is available, build the following table: $k \rightarrow card\{ s \mid X(s) = k \}$.
JOIN
The JOIN instruction associates rows from one table with rows from another table whenever they share a common piece of information.
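A tiny self-contained sketch of a JOIN on an in-memory sqlite3 database (made-up numbers and names):

```python
import sqlite3

# Two toy tables sharing the "number" key, as with td8_velib and stations.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counts (number INTEGER, available INTEGER)")
conn.execute("CREATE TABLE names (number INTEGER, name TEXT)")
conn.executemany("INSERT INTO counts VALUES (?, ?)", [(1, 4), (2, 0)])
conn.executemany("INSERT INTO names VALUES (?, ?)",
                 [(1, "Hotel de Ville"), (2, "Bastille")])
rows = conn.execute(
    "SELECT A.number, A.available, B.name "
    "FROM counts AS A JOIN names AS B ON A.number == B.number "
    "ORDER BY A.number").fetchall()
conn.close()
print(rows)  # -> [(1, 4, 'Hotel de Ville'), (2, 0, 'Bastille')]
```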
End of explanation
"""
%%SQL
SELECT A.*, 1.0 * A.available_bikes / B.nb_velo AS distribution_temporelle
FROM td8_velib AS A
JOIN (
SELECT number, SUM(available_bikes) AS nb_velo
FROM td8_velib
WHERE last_update >= "2013-09-10 11:30:19"
GROUP BY number
) AS B
ON A.number == B.number
WHERE A.last_update >= "2013-09-10 11:30:19"
"""
Explanation: It can be used to compute a ratio by combining the GROUP BY and JOIN instructions. The following query yields, for each station, the distribution of available bikes over the study period.
End of explanation
"""
from pyquickhelper.helpgen import NbImage
NbImage("images/tb8_dis_hor.png")
"""
Explanation: Exercise 4: hourly distribution
For each station, determine the distribution of the number of available bikes for each time slot of the day (so there will be 24 * 12 values between 0 and 1 per station). The result you should obtain is illustrated by the image below.
End of explanation
"""
%SQL_close
"""
Explanation: Exercise 5: work areas
We want to determine whether a station lies in a work area or rather in a residential area. The working hypothesis is that, in a work area, people arrive by Vélib and leave by Vélib; this is probably the case for station 8003. Bikes will then mostly be available during the day. Conversely, in a residential area, bikes will mostly be available at night. How can we tell, starting from the distribution of available bikes built in the previous question?
We consider the daytime range to run from 10 am to 4 pm. An illustration of the result can be found in this article.
Exercise 6: latitude, longitude
We start from the previous query and perform a JOIN with the stations table to retrieve the (lat, long) coordinates. After a copy/paste into Excel, the work areas can be located across the Paris region.
End of explanation
"""
import sqlite3
conn = sqlite3.connect("td8_velib.db3") # open a connection to the database
data = conn.execute("SELECT * FROM stations") # execute a SQL query
for i, d in enumerate(data): # display the result
print(d)
if i > 5:
break
conn.close()
"""
Explanation: Without %%SQL
The %%SQL magic command relies on the sqlite3 module; the same can be done without it.
End of explanation
"""
|
erickpeirson/statistical-computing | Statistical Learning.ipynb | cc0-1.0 | import numpy as np
from scipy.stats import uniform
import matplotlib.pyplot as plt
f = lambda x: np.log(x)
x = np.linspace(0.1, 5.1, 100)
y = f(x)
Eps = uniform.rvs(-1., 2., size=(100,))
plt.plot(x, y, label='$f(x)$', lw=3)
plt.scatter(x, y + Eps, label='y')
plt.xlabel('x')
plt.legend(loc='best')
plt.show()
"""
Explanation: Statistical Learning
Different from machine learning, in that not merely interested in how well a model fits: also interested in how to interpret/derive meaning.
How to relate my covariates $X = \{ x_1, x_2, x_3, \dots, x_p \}$ to the response $y$.
Our model of the data is $y = f(x) + \epsilon$
$f(x)$ is not necessarily linear,
Error terms need not be normal.
Goal: to develop an estimate of $f$, $\hat{f}$
Two reasons to estimate $f$ with $\hat{f}$:
Make predictions (not necessarily informed by mechanisms, relationships among covariates),
Want $\hat{y}$ to be close to $y$; $\hat{y} = \hat{f}(x)$
Minimize Mean Squared Error:
$E(y-\hat{y})^2 = E[f(x) + \epsilon - \hat{f}(x)]^2$
$E(y-\hat{y})^2 = [f(x) - \hat{f}(x)]^2 + Var(\epsilon)$
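A quick numerical check of this decomposition at a single $x$ (made-up values for $f(x)$ and $\hat{f}(x)$; uniform noise on $(-1, 1)$, so $Var(\epsilon) = 1/3$):

```python
import numpy as np

rng = np.random.RandomState(0)
fx = 2.0      # true f(x) at some fixed x (made-up value)
fhatx = 1.5   # our estimate fhat(x) (made-up value)
eps = rng.uniform(-1, 1, size=200000)

mse = np.mean((fx + eps - fhatx) ** 2)
print(mse)                        # ~ 0.583
print((fx - fhatx) ** 2 + 1 / 3)  # (f - fhat)^2 + Var(eps): reducible + irreducible
```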
End of explanation
"""
models = ['Subset selection lasso', 'least squares', 'generalized additive model trees',
'bagging, boosting', 'support vector machines']
pos = [(0, 1), (0.2, 0.8), (0.4, 0.6), (0.6, 0.1), (0.7, 0.3)]
xlabels = ['Restrictive', 'Flexible']
ylabels = ['Low', 'High']
plt.figure(figsize=(10, 7))
for m, p in zip(models, pos):
plt.text(p[0]+ 0.02, p[1]-0.05, m, size=16)
plt.xticks([0.07, 0.95], xlabels, size=16)
plt.yticks([0, 1], ylabels, size=16)
plt.ylabel('Interpretability', size=20)
plt.xlabel('Flexibility', size=20)
plt.show()
"""
Explanation: Goal: to develop an estimate of $f$, $\hat{f}$
Two reasons to estimate $f$ with $\hat{f}$:
$\hat{f}$ -> making inference; want to know how the covariates X affect y.
End of explanation
"""
x = np.linspace(0., 1.2, 5)
plt.scatter(x[0:4], [0.1, 0.6, 0.25, 0.7])
plt.plot(x, [0.1, 0.6, 0.25, 0.7, 1.2])
plt.plot(x, x/1.5)
plt.scatter(1.2, 0., c='red')
plt.show()
"""
Explanation: How do we estimate $\hat{f}$?
Parametric vs non-parametric methods
Parametric methods
* Assume some form for the relationship between X and y. For example:
$y = \beta_0 + \beta_1x + \epsilon$
$y = X\beta + \epsilon$
$logit(y) = X\beta + \epsilon$
* And fit the data by tweaking a few $p \ll n$ beta terms (far fewer parameters than the number of observations).
Non-parametric methods
* Assume no form for $f$,
* or the form has $p \simeq n$
End of explanation
"""
plt.figure(figsize=(10, 5))
plt.subplot(121)
plt.title('Supervised')
plt.scatter([.0, .2, .1, .3], [.2, .1, .3, .4], c='red', label='nondiabetic')
plt.scatter([.6, .8, .9, .7], [.55, .74, .5, .8], c='blue', label='diabetic')
plt.ylabel('Weekly sugar intake')
plt.xlabel('BMI')
plt.legend(loc=2)
plt.subplot(122)
plt.title('Unsupervised')
plt.scatter([.6, .8, .9, .7]+[.0, .2, .1, .3], [.55, .74, .5, .8]+[.2, .1, .3, .4], c='black', label='diabetic')
plt.ylabel('Weekly sugar intake')
plt.xlabel('BMI')
plt.tight_layout()
"""
Explanation: We can fit these four points perfectly with a cubic model, but only by assuming that this form is correct.
What happens when we get a new data point: $(x_0, y_0)$
for non-parametric methods we need some way to penalize "wiggliness"
wiggliness (def.): the cumulative change in the second derivative, $f''$.
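One (made-up) way to turn this definition into a number with NumPy:

```python
import numpy as np

def wiggliness(y, dx):
    # Cumulative absolute change of the discrete second derivative.
    second = np.diff(y, n=2) / dx ** 2
    return np.sum(np.abs(np.diff(second)))

x = np.linspace(0, 1, 101)
dx = x[1] - x[0]
straight = 2 * x                # a line: essentially zero wiggliness
wiggly = np.sin(8 * np.pi * x)  # an oscillating curve: large wiggliness
print(wiggliness(straight, dx), wiggliness(wiggly, dx))
```

A penalty on this quantity is what keeps a non-parametric fit from chasing every data point.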
Pros & cons:
* Parametric:
* Pros:
* More interpretable
* Requires fewer data
* Cons:
* More rigid
* More assumptions to make
* Non-parametric
* Pros:
* More flexible
* Fewer assumptions
* Cons:
* Need more data
* Harder to interpret
Supervised vs. unsupervised algorithms
in the supervised algorithm we have response variable, $y$
unsupervised case, no response variable
the response variable, $y$, supervises our selection of important covariates, $X$
Examples:
* Regression -- supervised
* NMDS/PCA -- unsupervised
* Diabetes risk -- supervised
End of explanation
"""
x = np.linspace(0., 1., 50)
y = x + np.random.random(size=50) - 0.5
plt.figure(figsize=(10, 5))
plt.subplot(121)
plt.title('Model A')
plt.scatter(x, y)
plt.plot(x, x)
plt.subplot(122)
plt.title('Model B')
plt.scatter(x, y)
plt.plot(x, [0.42]*50)
plt.tight_layout()
plt.show()
"""
Explanation: In the unsupervised case, we don't know the patient groups.
Classification & regression
Regression: response is continuous (either continuous or categorical covariates)
Classification: response is categorical
Regression
Assessing model accuracy
End of explanation
"""
plt.figure(figsize=(7, 5))
x = np.linspace(1, 10, 99)
plt.plot(x, 1./x**0.5 - 0.1, label='$MSE_{training}$', lw=3)
plt.plot(np.linspace(1, 10, 7), [0.9, 0.6, 0.5, 0.45, 0.55, 0.7, 0.9], label='$MSE_{test}$', lw=3)
plt.ylabel('$MSE$')
plt.xlabel('flexibility')
plt.legend()
plt.show()
"""
Explanation: Model A is better because the $Ave(y-\hat{y})^2$ (Mean Squared Error) is smaller.
Consider the model where we have n parameters (e.g. n-degree polynomial). It can go through every data point: no MSE!
If the model is too flexible (and we overfit the data), then we tend to do a bad job at predicting a new data point that was not used in tuning the model.
Test data & training data
Take our data and split into two groups:
1. Training data: data used to tune the model(s) of interest
2. Test data: data used to assess the accuracy of each model (typically use MSE)
In general, $MSE_{training} \leq MSE_{test}$
Want to look at the impact of model complexity on both $MSE_{training}$ and $MSE_{test}$.
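A self-contained sketch of the split with NumPy polynomial fits (synthetic data; the degrees are arbitrary choices):

```python
import numpy as np

# Made-up data: linear truth plus uniform noise, split in half.
rng = np.random.RandomState(1)
x = rng.uniform(0, 1, 40)
y = x + rng.uniform(-0.5, 0.5, 40)
x_train, y_train = x[:20], y[:20]  # used to tune the models
x_test, y_test = x[20:], y[20:]    # used only to assess them

def train_test_mse(degree):
    coeffs = np.polyfit(x_train, y_train, degree)
    mse_train = np.mean((y_train - np.polyval(coeffs, x_train)) ** 2)
    mse_test = np.mean((y_test - np.polyval(coeffs, x_test)) ** 2)
    return mse_train, mse_test

for degree in (1, 4, 10):
    print(degree, train_test_mse(degree))
# Training MSE can only fall as the degree grows; test MSE typically stops
# improving, and often rises again, once the model starts overfitting.
```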
End of explanation
"""
x = np.linspace(0., 1., 20)
y = [1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
plt.scatter(x, y)
plt.ylabel('Cougar occupied')
plt.xlabel('# of dogs')
"""
Explanation: $MSE_{test}$ should bottom out around the "true" function. $MSE_{test}$ should never drop below the "true" amount of error/residuals. Goal is to minimize $MSE_{test}$.
Bias/Variance trade-off
It can be shown that for $y = f(x) + \epsilon$,
$E[y_0 - \hat{f}(x_0)]^2 = Var(\hat{f}(x_0)) + [Bias(\hat{f}(x_0))]^2 + Var(\epsilon)$
$E[y_0 - \hat{f}(x_0)]^2$ -- Expected test set MSE
$Var(\hat{f}(x_0))$ -- Measure of how much the $\hat{f}$ function would change if I got new data. If the model is well-fit, this should be small.
$Bias(\hat{f}) = E[f(x_0) - \hat{f}(x_0)]$ -- How much am I going to be wrong because my $\hat{f}$ is too restrictive. Want a model that is flexible enough that this bias is small.
$(x_0, y_0)$ is a previously unseen test observation, not one of the training points
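The trade-off can be simulated directly (a toy setup, all numbers made up: a constant model, which is rigid, versus a degree-9 polynomial, which is flexible):

```python
import numpy as np

rng = np.random.RandomState(0)
f = lambda x: np.sin(2 * np.pi * x)  # made-up true function
x = np.linspace(0, 1, 30)
x0 = 0.25                            # evaluation point; f(x0) = 1

preds_rigid, preds_flex = [], []
for _ in range(200):
    y = f(x) + 0.3 * rng.randn(x.size)  # a fresh training set each time
    preds_rigid.append(np.mean(y))      # constant model: just ybar
    preds_flex.append(np.polyval(np.polyfit(x, y, 9), x0))

preds_rigid = np.array(preds_rigid)
preds_flex = np.array(preds_flex)
bias_rigid = preds_rigid.mean() - f(x0)
bias_flex = preds_flex.mean() - f(x0)
print(preds_rigid.var(), preds_flex.var())  # flexible model varies more
print(abs(bias_rigid), abs(bias_flex))      # rigid model is far more biased
```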
Classification
Assessing accuracy
$\hat{y}$ will be categorical (as is $y$)
Measure will be % of cases mis-classified
Training error rate: $ER = \frac{1}{n}\sum{I(y_i \neq \hat{y}_i)}$
$I(u) = 1$ if TRUE, $0$ if FALSE
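In NumPy, the training error rate is a one-liner (toy labels):

```python
import numpy as np

y = np.array([1, 1, 0, 0, 1, 0])      # true classes
y_hat = np.array([1, 0, 0, 0, 1, 1])  # predicted classes
error_rate = np.mean(y != y_hat)      # mean of the indicator I(y != yhat)
print(error_rate)  # 2 of 6 wrong -> 1/3
```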
End of explanation
"""
|
tpin3694/tpin3694.github.io | machine-learning/adding_interaction_terms.ipynb | mit | # Load libraries
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_boston
from sklearn.preprocessing import PolynomialFeatures
import warnings
# Suppress Warning
warnings.filterwarnings(action="ignore", module="scipy", message="^internal gelsd")
"""
Explanation: Title: Adding Interaction Terms
Slug: adding_interaction_terms
Summary: How to add interaction terms in scikit-learn for machine learning in Python.
Date: 2017-09-18 12:00
Category: Machine Learning
Tags: Linear Regression
Authors: Chris Albon
<a alt="Interaction Terms" href="https://machinelearningflashcards.com">
<img src="adding_interaction_terms/Interaction_Term_print.png" class="flashcard center-block">
</a>
Preliminaries
End of explanation
"""
# Load the data with only two features
boston = load_boston()
X = boston.data[:,0:2]
y = boston.target
"""
Explanation: Load Boston Housing Dataset
End of explanation
"""
# Create interaction term (not polynomial features)
interaction = PolynomialFeatures(degree=3, include_bias=False, interaction_only=True)
X_inter = interaction.fit_transform(X)
"""
Explanation: Add Interaction Term
Interaction effects can be accounted for by including a new feature comprising the product of corresponding values from the interacting features:
$$\hat y = \hat\beta_{0} + \hat\beta_{1}x_{1}+ \hat\beta_{2}x_{2} + \hat\beta_{3}x_{1}x_{2} + \epsilon$$
where $x_{1}$ and $x_{2}$ are the values of the two features, respectively, and $x_{1}x_{2}$ represents the interaction between the two. It can be useful to use scikit-learn's PolynomialFeatures to create interaction terms for all combinations of features. We can then use model selection strategies to identify the combination of features and interaction terms which produce the best model.
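To make the product concrete, here is a minimal hand-built sketch with made-up numbers (not the Boston data). For a two-feature input it mirrors the [x1, x2, x1*x2] layout that PolynomialFeatures produces with interaction_only=True and include_bias=False:

```python
# Toy data: three observations with two features each (hypothetical values)
X = [[2.0, 3.0],
     [1.0, 4.0],
     [5.0, 2.0]]

# Append the interaction term x1 * x2 as a third column
X_inter = [[x1, x2, x1 * x2] for x1, x2 in X]
print(X_inter[0])  # → [2.0, 3.0, 6.0]
```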
End of explanation
"""
# Create linear regression
regr = LinearRegression()
# Fit the linear regression
model = regr.fit(X_inter, y)
"""
Explanation: Fit Linear Regression
End of explanation
"""
# Source notebook: GEMScienceTools/rmtk, notebooks/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb (AGPL-3.0)
from rmtk.vulnerability.derivation_fragility.equivalent_linearization.lin_miranda_2008 import lin_miranda_2008
from rmtk.vulnerability.common import utils
%matplotlib inline
"""
Explanation: Lin and Miranda (2008)
This method, described in Lin and Miranda (2008), estimates the maximum inelastic displacement of an existing structure from the maximum elastic displacement response of its equivalent linear system, without the need for iterations. The equivalent linear system has a longer period of vibration and a higher viscous damping than the original system, and both of these parameters are estimated from the strength ratio $R$.
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button in the toolbar above.
End of explanation
"""
capacity_curves_file = "../../../../../../rmtk_data/capacity_curves_Sa-Sd.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
utils.plot_capacity_curves(capacity_curves)
"""
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
End of explanation
"""
gmrs_folder = "../../../../../../rmtk_data/accelerograms"
gmrs = utils.read_gmrs(gmrs_folder)
minT, maxT = 0.1, 2.0
utils.plot_response_spectra(gmrs, minT, maxT)
"""
Explanation: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note: Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
End of explanation
"""
damage_model_file = "../../../../../../rmtk_data/damage_model.csv"
damage_model = utils.read_damage_model(damage_model_file)
"""
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
The damage types currently supported are: capacity curve dependent, spectral displacement, and interstorey drift. If the damage model type is interstorey drift, the user can provide the pushover curve in terms of Vb-dfloor in order to convert interstorey drift limit states to roof displacements and spectral displacements; otherwise, a linear relationship is assumed.
End of explanation
"""
PDM, Sds = lin_miranda_2008.calculate_fragility(capacity_curves, gmrs, damage_model)
"""
Explanation: Obtain the damage probability matrix
End of explanation
"""
IMT = "Sd"
period = 2.0
damping_ratio = 0.05
regression_method = "max likelihood"
fragility_model = utils.calculate_mean_fragility(gmrs, PDM, period, damping_ratio,
IMT, damage_model, regression_method)
"""
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sd" and "Sa".
2. period: This parameter defines the time period of the fundamental mode of vibration of the structure.
3. damping_ratio: This parameter defines the damping ratio for the structure.
4. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
End of explanation
"""
minIML, maxIML = 0.01, 2.00
utils.plot_fragility_model(fragility_model, minIML, maxIML)
"""
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
"""
taxonomy = "RC"
minIML, maxIML = 0.01, 2.00
output_type = "nrml"
output_path = "../../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
"""
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
"""
cons_model_file = "../../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
"""
Explanation: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:
1. cons_model_file: This parameter specifies the path of the consequence model file.
2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.
3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
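The core of that conversion can be sketched with made-up numbers (these are not a real consequence model): multiply the fraction of buildings in each damage state by that state's damage ratio, then sum to obtain the mean loss ratio at one intensity measure level.

```python
# Hypothetical mean damage (loss) ratios per damage state:
# slight, moderate, extensive, collapse
damage_ratios = [0.05, 0.25, 0.60, 1.00]

# Hypothetical fractions of buildings in each damage state
# at a single intensity measure level
damage_fractions = [0.30, 0.20, 0.10, 0.05]

# Mean loss ratio at this intensity measure level
mean_loss_ratio = sum(f * r for f, r in zip(damage_fractions, damage_ratios))
print(mean_loss_ratio)  # → 0.175
```

Repeating this at every intensity measure level, and fitting the chosen distribution to the resulting loss ratios, yields the vulnerability function.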
End of explanation
"""
utils.plot_vulnerability_model(vulnerability_model)
"""
Explanation: Plot vulnerability function
End of explanation
"""
taxonomy = "RC"
output_type = "nrml"
output_path = "../../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
"""
Explanation: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
"""
# Source notebook: dtamayo/rebound, ipython_examples/HyperbolicOrbits.ipynb (GPL-3.0)
from io import StringIO
import numpy as np
import rebound
epoch_of_elements = 53371.0 # [MJD, days]
c = StringIO(u"""
# id e q[AU] i[deg] Omega[deg] argperi[deg] t_peri[MJD, days] epoch_of_observation[MJD, days]
168026 12.181214 15.346358 136.782470 37.581438 268.412314 54776.806093 55516.41727
21170 2.662235 2.013923 140.646538 23.029490 46.292039 54336.126288 53673.44043
189298 15.503013 11.550314 20.042232 203.240743 150.855761 55761.641176 55718.447145
72278 34.638392 24.742323 157.984412 126.431540 178.612758 54382.158401 54347.240445
109766 8.832472 9.900228 144.857801 243.102255 271.345342 55627.501618 54748.37722
""")
comets = np.loadtxt(c) # load the table into a numpy array
"""
Explanation: Loading Hyperbolic Orbits into REBOUND
Imagine we have a table of orbital elements for comets (kindly provided by Toni Engelhardt).
End of explanation
"""
sim = rebound.Simulation()
k = 0.01720209895 # Gaussian constant
sim.G = k**2
"""
Explanation: We want to add these comets to a REBOUND simulation. The first thing to do is set the units, which have to be consistent throughout. Here we have a table in AU and days, so we'll use the Gaussian gravitational constant (AU, days, solar masses).
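As a quick sanity check (not part of the original notebook), $k^2$ should reproduce the familiar value $GM_\odot \approx 2.959 \times 10^{-4}$ in AU³/day², so a 1 AU circular orbit should have a period of about one year:

```python
import math

k = 0.01720209895   # Gaussian gravitational constant
G = k ** 2          # ~2.9591e-4 AU^3 / (Msun * day^2)

# Kepler's third law for a 1 AU circular orbit around a 1 Msun star:
# T = 2*pi*sqrt(a^3 / (G*M)) should be about 365.25 days
period = 2 * math.pi * math.sqrt(1.0 ** 3 / (G * 1.0))
print(period)  # → ~365.26 days
```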
End of explanation
"""
sim.t = epoch_of_elements
"""
Explanation: We also set the simulation time to the epoch at which the elements are valid:
End of explanation
"""
sim.add(m=1.) # Sun
sim.add(m=1.e-3, a=5.) # Jupiter
sim.add(m=3.e-4, a=10.) # Saturn
"""
Explanation: We then add the giant planets in our Solar System to the simulation. You could for example query JPL HORIZONS for the states of the planets at each comet's corresponding epoch of observation (see Horizons.ipynb). Here we set up toy masses and orbits for Jupiter & Saturn:
End of explanation
"""
def addOrbit(sim, comet_elem):
tracklet_id, e, q, inc, Omega, argperi, t_peri, epoch_of_observation = comet_elem
sim.add(primary=sim.particles[0],
a = q/(1.-e),
e = e,
inc = inc*np.pi/180., # have to convert to radians
Omega = Omega*np.pi/180.,
omega = argperi*np.pi/180.,
T = t_peri # time of pericenter passage
)
"""
Explanation: Let's write a function that takes a comet from the table and adds it to our simulation:
End of explanation
"""
addOrbit(sim, comets[0])
%matplotlib inline
fig = rebound.OrbitPlot(sim, trails=True)
"""
Explanation: By default, REBOUND adds and outputs particles in Jacobi orbital elements. Typically orbital elements for comets are heliocentric. Mixing the two will give you relative errors in elements, positions, etc. of order the mass ratio of Jupiter to the Sun ($\sim 0.001$), which is why we pass the additional primary=sim.particles[0] argument to the add() function. If this level of accuracy doesn't matter to you, you can ignore the primary argument.
We can now set up the first comet and quickly plot to see what the system looks like:
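One thing worth noting: for a hyperbolic orbit ($e > 1$), the semi-major axis a = q/(1.-e) that addOrbit passes to sim.add comes out negative, as is standard for unbound orbits. A quick check with the first comet's elements from the table:

```python
# Elements of comet 168026 from the table above
e, q = 12.181214, 15.346358  # eccentricity, perihelion distance [AU]

# Semi-major axis: negative for a hyperbolic (unbound) orbit
a = q / (1.0 - e)
print(a)  # ≈ -1.37 AU
```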
End of explanation
"""
tfinal = comets[0][-1]
sim.integrate(tfinal)
fig = rebound.OrbitPlot(sim, trails=True)
"""
Explanation: Now we just integrate until whatever final time we’re interested in. Here it's the epoch at which we observe the comet, which is the last column in our table:
End of explanation
"""
sim = rebound.Simulation()
sim.G = k**2
sim.t = epoch_of_elements
sim.add(m=1.) # Sun
sim.add(m=1.e-3, a=5.) # Jupiter
sim.add(m=3.e-4, a=10.) # Saturn
for comet in comets:
addOrbit(sim, comet)
fig = rebound.OrbitPlot(sim, trails=True)
"""
Explanation: REBOUND automatically figures out whether you want to integrate forward or backward in time.
For fun, let's add all the comets to a simulation:
End of explanation
"""
# Source notebook: rjurney/Agile_Data_Code_2, ch07/Making_Predictions.ipynb (MIT)
import pyspark.sql.functions as F
import pyspark.sql.types as T
from pyspark.sql import SparkSession
# Initialize PySpark with MongoDB and Elastic support
spark = (
SparkSession.builder.appName("Exploring Data with Reports")
# Load support for MongoDB and Elasticsearch
.config("spark.jars.packages", "org.mongodb.spark:mongo-spark-connector_2.12:3.0.1,org.elasticsearch:elasticsearch-spark-30_2.12:7.14.2")
# Add configuration for MongoDB
.config("spark.mongodb.input.uri", "mongodb://mongo:27017/test.coll")
.config("spark.mongodb.output.uri", "mongodb://mongo:27017/test.coll")
.getOrCreate()
)
sc = spark.sparkContext
sc.setLogLevel("ERROR")
print("\nPySpark initialized...")
"""
Explanation: Now that we have interactive reports exposing different aspects of our data, we’re ready to make our first prediction. This forms our fourth agile sprint.
When making predictions, we take what we know about the past and use it to infer what will happen in the future. In doing so, we transition from batch processing of historical data to real-time extrapolation about the future. In real terms, our task in this chapter is to take historical flight records and use them to predict things about future flights.
End of explanation
"""
# Load the on-time Parquet file
on_time_dataframe = spark.read.parquet('../data/january_performance.parquet')
on_time_dataframe.createOrReplaceTempView("on_time_performance")
total_flights = on_time_dataframe.count()
# Flights that were late leaving...
late_departures = on_time_dataframe.filter(on_time_dataframe.DepDelayMinutes > 0)
total_late_departures = late_departures.count()
# Flights that were late arriving...
late_arrivals = on_time_dataframe.filter(on_time_dataframe.ArrDelayMinutes > 0)
total_late_arrivals = late_arrivals.count()
# Flights that left late but made up time to arrive on time...
on_time_heros = on_time_dataframe.filter(
(on_time_dataframe.DepDelayMinutes > 0)
&
(on_time_dataframe.ArrDelayMinutes <= 0)
)
total_on_time_heros = on_time_heros.count()
# Get the percentage of flights that are late, rounded to 1 decimal place
pct_late = round((total_late_arrivals / (total_flights * 1.0)) * 100, 1)
print("Total flights: {:,}".format(total_flights))
print("Late departures: {:,}".format(total_late_departures))
print("Late arrivals: {:,}".format(total_late_arrivals))
print("Recoveries: {:,}".format(total_on_time_heros))
print("Percentage Late: {}%".format(pct_late))
"""
Explanation: The Role of Predictions
We are all used to predictions in life. Some forecasts are based on statistical inference, and some are simply the opinions of pundits. Statistical inference is increasingly involved in predictions of all kinds. From weather forecasts to insurance actuaries determining rates to the point spread in sports betting or odds in poker, statistical predictions are a part of modern life. Sometimes forecasts are accurate, and sometimes they are inaccurate.
For instance, as I was working on this edition of the book, pundits repeatedly dismissed Donald Trump’s presidential candidacy as a joke, even as he gained on, pulled ahead of, and ultimately defeated all opponents in the primary and edged closer to Hillary Clinton as the election approached. Pundits are usually wrong, but accurate predictions in elections have emerged thanks to Nate Silver of FiveThirtyEight. He uses an advanced statistical model called a 538 regression to predict election results state-by-state, and combines these predictions into a model that was highly accurate in 2008 and 2012 (although, as it turns out, Silver—along with every rational member of the world with faith in the American voter—failed to predict Trump’s election... to be fair, though, he did predict a 29% chance for Trump, which was about double what others predicted).
We’ll be making predictions using statistical inference through a technique called machine learning. According to TechTarget, machine learning (ML for short) is “a type of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed.” Another way of explaining it is to say that machine learning handles tasks that would be impossibly complex for humans to program manually themselves.
Machine learning is an intimidating topic, an advanced field of study. Mastering all aspects of it can take many years. However, in practice, getting started with machine learning is easy, thanks to some of the libraries we’ll be using in this chapter. Once we explain the fundamentals, we’ll get on with some simple code.
Predict What?
In this chapter we will employ machine learning to build a predictive analytics application using the dataset we’ve been visualizing so far. The prediction we’ll be making is one with great practical importance for anyone who travels by air. We’ll be predicting flight delays. Specifically, we’ll be predicting the arrival delay, or how late a flight is when arriving at the gate at its destination airport.
First, let’s cover the fundamentals of predictive analytics.
Introduction to Predictive Analytics
According to Wikipedia “Predictive analytics encompasses a variety of statistical techniques from predictive modeling, machine learning, and data mining that analyze current and historical facts to make predictions about future or otherwise unknown events.”
Predictive analytics requires training data. Training data is composed of examples of the entity we are trying to predict. Examples are made up of one or more features. Dependent features are the values we are trying to predict. Independent features are features describing the things we want to predict that relate to the dependent features. For instance, our training data for predicting flight delays is our atomic records: our flight delay records. A flight with its delay is an example of a record with a dependent variable. Our independent features are other things we can associate with flights—in other words, all the entities and their properties we’ve been working with in the preceding chapters! The independent features are the other properties of flights—things like the departure delay, the airline, the origin and destination cities, the day of the week or year, etc.
We’ve been analyzing our data to better understand the features that make up a flight. We know a lot about flight delays, and about flights themselves and those things that combine to produce a flight: airplanes, airlines, airports, etc. This will enable us to effectively engage in feature engineering, which is the critical part of making predictions. Interactive visualization and exploratory data analysis as a part of feature engineering is the heart of Agile Data Science. It drives and organizes our efforts.
Now that the groundwork is laid, let’s learn the mechanics of making actual predictions.
Making Predictions
There are two ways to approach most predictions: regression and classification. A regression takes examples made up of features as input and produces a numeric output. Classification takes examples as input and produces a categorical classification. The example dataset that serves as input to a statistical prediction and that enables the machine to learn is called the training data.
Whether to build a regression or a classification depends on our business need. The type of response variable often determines which to build. If we need to predict a continuous variable, we build a regression. If we need to predict a nominal/categorical variable, we build a classification.
This decision can be more complex than that, however, taking into account the user interface where we’ll present our prediction. For instance, if we were creating an API we were going to sell access to that predicts flight delays, we would probably want to use a regression to produce a numeric prediction. On the other hand, if we were presenting flight delays to users in a mobile application, usability considerations apply that might mean a classification might be better.
In this book, we’ll create both a regression and a classification of flight delays using decision trees, which can both classify and regress.
Features
A feature is what it sounds like: a feature of an example. In software terminology: if examples are objects, features are fields or properties of those objects. Two or more features make up the training data of a statistical prediction—two being the minimum because one field is required as the one to predict, and at least one additional feature is required to make an inference about in order to create a prediction.
Sometimes features are already a part of the training data in question, in their own fields. Sometimes we have to perform feature engineering to derive the training values we need from the ones the data includes.
The models we’ll be using employ decision trees. Decision trees are important for a few reasons. First, they can both classify and regress. It requires literally one line of code to switch between the two models just described, from a classification to a regression. Second, they are able to determine and share the feature importance of a given training set.
Feature importances tell us which features in the training data were most important in creating an accurate model. This is invaluable, because it gives us insight into what features we should engineer and the approach we should take to improving performance. It also gives us insight into the underlying data, by telling us which features have relationships with the predicted feature.
Regression
The simplest kind of regression analysis is a linear regression. Stat Trek defines linear regression as follows:
In a cause and effect relationship, the independent variable is the cause, and the dependent variable is the effect. Least squares linear regression is a method for predicting the value of a dependent variable Y, based on the value of an independent variable X.
A linear regression is a trend line. We’ve all seen them in Excel (if you haven’t, check out North Carolina State University’s Excel regression tutorial). Given a set of variables that characterize a flight, a linear regression might predict how early or late the flight will be, in minutes.
Classification
The second way to solve the problem is to define a set of categories and to classify a flight into one of those categories. Flight delays are a continuous distribution, so they don’t naturally yield to classification. The trick here is to define the categories so they simplify the continuous distribution of flight delays into two or more categories. For instance, we might formulate categories similar to the buckets we will use for the weather delay distribution (0–15, 15–60, and 60+), and then classify into these three categories.
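A minimal sketch of that bucketing (the thresholds are the hypothetical ones above, and the edge-inclusion convention is our own choice, not a final model):

```python
def bucketize_delay(minutes):
    """Map a continuous arrival delay in minutes into one of three
    categories; delays under 15 minutes (including early arrivals)
    fall into the first bucket."""
    if minutes < 15:
        return "0-15"
    elif minutes < 60:
        return "15-60"
    else:
        return "60+"

print(bucketize_delay(5))    # → 0-15
print(bucketize_delay(45))   # → 15-60
print(bucketize_delay(120))  # → 60+
```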
Exploring Flight Delays
Our topic for this chapter is flight delays. If we want to predict the feature, we must first understand it. Let’s lay the groundwork by creating a delay entity in our application and fleshing it out.
We’ll begin by exploring the magnitude of the problem. Just how often are flights late? It feels like “all the time,” but is it? This dataset is exciting in that it can answer questions like this one!
Check out ch07/explore_delays.py:
End of explanation
"""
# Get the average minutes late departing and arriving
spark.sql("""
SELECT
ROUND(AVG(DepDelay),1) AS AvgDepDelay,
ROUND(AVG(ArrDelay),1) AS AvgArrDelay
FROM on_time_performance
"""
).show()
"""
Explanation: Wow, flights arrive late 39.0% of the time! The problem is as big as it seems. But how late is the average flight?
End of explanation
"""
late_flights = spark.sql("""
SELECT
FlightDate,
ArrDelayMinutes,
WeatherDelay,
CarrierDelay,
NASDelay,
SecurityDelay,
LateAircraftDelay
FROM
on_time_performance
WHERE
WeatherDelay IS NOT NULL
OR
CarrierDelay IS NOT NULL
OR
NASDelay IS NOT NULL
OR
SecurityDelay IS NOT NULL
OR
LateAircraftDelay IS NOT NULL
ORDER BY
FlightDate
""")
late_flights.sample(0.1).show(10)
"""
Explanation: Flights are 9.4 minutes late departing and 4.4 minutes late arriving on average. Why the constant tardiness? Are the airlines incompetent (as we often angrily suspect), or is the problem weather? Weather is presently out of human control, so that would let the airlines off the hook. Should we be mad at the airlines or angry with the god(s)? (Personally, I’m fearful of Zeus!)
Let’s take a look at some delayed flights, and specifically the fields that specify the kinds of delay. We want to be sure to use a random sample, which we can obtain via Spark’s DataFrame.sample function. In the first rendition of this chapter, I did not use a random sample and was deceived by what appeared to be constant weather delays, when these are actually not very common. Don’t be lazy—it’s very easy to insert a .sample(False, 0.01) before every one of your .show functions:
End of explanation
"""
# Calculate the percentage contribution to delay for each source
total_delays = spark.sql("""
SELECT
ROUND(SUM(WeatherDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_weather_delay,
ROUND(SUM(CarrierDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_carrier_delay,
ROUND(SUM(NASDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_nas_delay,
ROUND(SUM(SecurityDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_security_delay,
ROUND(SUM(LateAircraftDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_late_aircraft_delay
FROM on_time_performance
""")
total_delays.show()
"""
Explanation: An explanation of the different kinds of delay is available on the Federal Aviation Administration (FAA) website.
What does this small sample tell us? Carrier delays are constant and sometimes severe. NAS delays—delays under the control of the National Airspace System (NAS) that can be attributed to conditions such as traffic volume and air traffic control—are as common as carrier delays. Security delays appear rare, while late aircraft delays (which result from the propagation of a previous delay) are frequent and sometimes severe.
A small sample is a good way to get familiar with the data, but small samples can be deceptive. We want real answers we can trust, so let’s quantify the sources of delay. What percentage of total delay does each source contribute? We’ll use arrival delay for our total—a simplification we’ll have to live with, since some delay may be on departure and some in flight:
End of explanation
"""
import sys, os, re
import iso8601
import datetime
# Load the on-time Parquet file
on_time_dataframe = spark.read.parquet('../data/january_performance.parquet')
on_time_dataframe.createOrReplaceTempView("on_time_performance")
on_time_dataframe = on_time_dataframe.filter(on_time_dataframe.Month == '1')
# Select a few features of interest
simple_on_time_features = spark.sql("""
SELECT
FlightNum,
FlightDate,
DayOfWeek,
DayofMonth AS DayOfMonth,
CONCAT(Month, '-', DayofMonth) AS DayOfYear,
Carrier,
Origin,
Dest,
Distance,
DepDelay,
ArrDelay,
CRSDepTime,
CRSArrTime
FROM on_time_performance
WHERE FlightDate < '2015-02-01'
""")
simple_on_time_features.limit(5).toPandas()
# Sample 10% to make executable inside the notebook
# simple_on_time_features = simple_on_time_features.sample(False, 0.1)
"""
Explanation: Our result isn’t perfect—the sources of delay don’t total to 100%. This is a result of our aforementioned simplification regarding arrival/departing delays. Nevertheless, we do get a sense of things; our sample is informative. Most delay is from previous delays with the same airplane, which have a cascading effect on the rest of the schedule. Of delays originating during a flight’s operations, most are carrier delays. Specifically, 29% of delays are carrier delays, versus 21% for air traffic control delays and only 4.5% for weather delays.
The answer to our earlier question is clear: we should usually be mad at the airline. However, not all carrier delays are because of mistakes the carrier makes. The FAA website explains:
Examples of occurrences that may determine carrier delay are: aircraft cleaning, aircraft damage, awaiting the arrival of connecting passengers or crew, baggage, bird strike, cargo loading, catering, computer, outage-carrier equipment, crew legality (pilot or attendant rest), damage by hazardous goods, engineering inspection, fueling, handling disabled passengers, late crew, lavatory servicing, maintenance, oversales, potable water servicing, removal of unruly passenger, slow boarding or seating, stowing carry-on baggage, weight and balance delays.
In other words, sometimes shit happens and the carrier didn’t do anything wrong. We don’t have data to determine how often the carrier is really to blame. Importantly for our problem in this chapter, predicting flight delays, the best we’ll be able to do is to characterize the overall carrier delay of each airline. We won’t be modeling bird strikes or unruly passengers.
Having familiarized ourselves with flight delays, now let’s plug some of the features we’ve discovered into a simple classification and regression.
Extracting Features with PySpark
To use features, we need to extract them from the broader dataset. Let’s begin by extracting just a few features from our dataset using PySpark, along with the time delays themselves. In order to do this, we need to decide which feature we’re going to predict. There are two delay fields listed in minutes: ArrDelayMinutes and DepDelayMinutes. Which are we to predict?
In thinking about our use case, it seems that our users want to know both things: whether and how late a flight will depart, and whether and how late it will arrive. Let’s include both in our training data. In terms of other features to extract, a little thought tells me that a few things are certain to matter. For instance, some airports have more delays than others, so departing and arriving airport is a no brainer. Flights are probably more often delayed in the hurricane and snow seasons, so the month of the year makes sense. Some carriers are more punctual than others. Finally, some routes must have more delays than others, so the flight number makes sense too.
We’ll also include the last of the unique identifiers for the flight, the flight date. Flights are uniquely identified by FlightDate, Carrier, FlightNum, and Origin and Dest. Always include all of the fields that uniquely identify a record, as it makes debugging easier.
That is all the features we will start with. The more features you use, the more complex wrangling them can get, so keep it simple and use just a few features at first. Once you have a pipeline set up with sklearn where you can iterate quickly and determine what helps and what doesn’t, you can add more.
All these features are simple and tabular, so it is easy to select them and store them as JSON for our model to read.
Let’s pick out and check our features. Check out ch07/extract_features.py:
End of explanation
"""
# Filter nulls, they can't help us
print(f"Original feature records: {simple_on_time_features.count():,}")
# Three ways to access a DataFrame Column in PySpark
# "ArrDelay", F.col("ArrDelay"), df.ArrDelay
filled_on_time_features = simple_on_time_features.filter(
simple_on_time_features.ArrDelay.isNotNull()
&
simple_on_time_features.DepDelay.isNotNull()
)
print(f"Non-null feature records: {filled_on_time_features.count():,}")
"""
Explanation: Looks like a few flights don’t have delay information. Let’s filter those, and sort the data before saving it as a single JSON file:
End of explanation
"""
# We need to turn timestamps into timestamps, and not strings or numbers
def convert_hours(hours_minutes):
hours = hours_minutes[:-2]
minutes = hours_minutes[-2:]
if hours == '24':
hours = '23'
minutes = '59'
time_string = "{}:{}:00Z".format(hours, minutes)
return time_string
def compose_datetime(iso_date, time_string):
return "{} {}".format(iso_date, time_string)
def create_iso_string(iso_date, hours_minutes):
time_string = convert_hours(hours_minutes)
full_datetime = compose_datetime(iso_date, time_string)
return full_datetime
def create_datetime(iso_string):
return iso8601.parse_date(iso_string)
def convert_datetime(iso_date, hours_minutes):
iso_string = create_iso_string(iso_date, hours_minutes)
dt = create_datetime(iso_string)
return dt
def day_of_year(iso_date_string):
dt = iso8601.parse_date(iso_date_string)
doy = dt.timetuple().tm_yday
return doy
def alter_feature_datetimes(row):
"""Process the DateTimes to handle overnight flights and day of year"""
flight_date = iso8601.parse_date(row['FlightDate'])
scheduled_dep_time = convert_datetime(row['FlightDate'], row['CRSDepTime'])
scheduled_arr_time = convert_datetime(row['FlightDate'], row['CRSArrTime'])
# Handle overnight flights
if scheduled_arr_time < scheduled_dep_time:
scheduled_arr_time += datetime.timedelta(days=1)
doy = day_of_year(row['FlightDate'])
return {
'FlightNum': row['FlightNum'],
'FlightDate': flight_date,
'DayOfWeek': int(row['DayOfWeek']),
'DayOfMonth': int(row['DayOfMonth']),
'DayOfYear': doy,
'Carrier': row['Carrier'],
'Origin': row['Origin'],
'Dest': row['Dest'],
'Distance': row['Distance'],
'DepDelay': row['DepDelay'],
'ArrDelay': row['ArrDelay'],
'CRSDepTime': scheduled_dep_time,
'CRSArrTime': scheduled_arr_time,
}
"""
Explanation: DateTime Conversion
Now we need to convert all our dates and times (datetimes) from a string representation to a mathematical one—otherwise, our predictive algorithms can’t understand them in their proper and most useful contexts. To do so, we need some utility functions:
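As a dependency-free cross-check of the trickiest cases (the '2400' midnight encoding and overnight arrivals), here is a stdlib-only sketch of the same logic, using datetime.strptime in place of iso8601; the helper names are illustrative:

```python
import datetime

def convert_hours(hours_minutes):
    # BTS encodes midnight as '2400'; clamp it to the last minute of the day
    hours, minutes = hours_minutes[:-2], hours_minutes[-2:]
    if hours == '24':
        hours, minutes = '23', '59'
    return "{}:{}:00".format(hours, minutes)

def to_datetime(iso_date, hours_minutes):
    # Combine a date like '2015-12-31' with an HHMM string into a datetime
    return datetime.datetime.strptime(
        "{} {}".format(iso_date, convert_hours(hours_minutes)),
        "%Y-%m-%d %H:%M:%S"
    )

dep = to_datetime('2015-12-31', '2350')
arr = to_datetime('2015-12-31', '0045')
if arr < dep:  # overnight flight: the arrival is on the next calendar day
    arr += datetime.timedelta(days=1)
```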
End of explanation
"""
from pyspark.sql import Row
timestamp_features = filled_on_time_features.rdd.map(alter_feature_datetimes)
timestamp_features.first()
timestamp_df = timestamp_features.map(lambda x: Row(**x)).toDF()
# **{"name": "Russell"}
# name="Russell"
# a = ["Russell", "Jurney"]
# df.select(*a)
# df.select("Russell", "Jurney")
timestamp_df.limit(3).toPandas()
timestamp_df
"""
Explanation: In practice, these functions were worked out iteratively over the course of an hour. Employing them is then simple:
End of explanation
"""
# Explicitly sort the data and keep it sorted throughout.
# Leave nothing to chance.
sorted_features = timestamp_df.sort(
timestamp_df.DayOfYear,
timestamp_df.Carrier,
timestamp_df.Origin,
timestamp_df.Dest,
timestamp_df.FlightNum,
timestamp_df.CRSDepTime,
timestamp_df.CRSArrTime,
)
"""
Explanation: Always explicitly sort your data before vectorizing it. Don’t leave the sort up to the system. If you do so, a software version change or some other unknown cause might ultimately change the sort order of your training data as compared with your result data. This would be catastrophic and confusing and should be avoided at all costs. Explicitly sorting training data in a way that avoids arbitrary sorting is essential:
End of explanation
"""
# Write the sorted features out as JSON Lines
sorted_features.write.mode("overwrite").json("../data/simple_flight_delay_features.jsonl")
"""
Explanation: Let’s copy the file into a JSON Lines file and check it out:
End of explanation
"""
%%bash
du -sh ../data/simple_flight_delay_features.json*
echo ""
head -5 ../data/simple_flight_delay_features.jsonl/part-0000*
"""
Explanation: Now take a look at the result:
End of explanation
"""
#
# {
# "ArrDelay":5.0,"CRSArrTime":"2015-12-31T03:20:00.000-08:00",
# "CRSDepTime":"2015-12-31T03:05:00.000-08:00",
# "Carrier":"WN","DayOfMonth":31,"DayOfWeek":4,
# "DayOfYear":365,"DepDelay":14.0,"Dest":"SAN",
# "Distance":368.0, "FlightDate":"2015-12-30T16:00:00.000-08:00",
# "FlightNum":"6109","Origin":"TUS"
# }
#
from pyspark.sql.types import (
StringType, IntegerType, FloatType, DateType, TimestampType,
StructType, StructField
)
schema = StructType([
StructField("ArrDelay", FloatType(), True), # "ArrDelay":5.0
StructField("CRSArrTime", TimestampType(), True), # "CRSArrTime":"2015-12..."
StructField("CRSDepTime", TimestampType(), True), # "CRSDepTime":"2015-12..."
StructField("Carrier", StringType(), True), # "Carrier":"WN"
StructField("DayOfMonth", IntegerType(), True), # "DayOfMonth":31
StructField("DayOfWeek", IntegerType(), True), # "DayOfWeek":4
StructField("DayOfYear", IntegerType(), True), # "DayOfYear":365
StructField("DepDelay", FloatType(), True), # "DepDelay":14.0
StructField("Dest", StringType(), True), # "Dest":"SAN"
StructField("Distance", FloatType(), True), # "Distance":368.0
StructField("FlightDate", DateType(), True), # "FlightDate":"2015-12..."
StructField("FlightNum", StringType(), True), # "FlightNum":"6109"
StructField("Origin", StringType(), True), # "Origin":"TUS"
])
features = spark.read.json(
"../data/simple_flight_delay_features.jsonl",
schema=schema
)
print(features.first())
features.limit(5).toPandas()
"""
Explanation: Looking good! Our features are now prepared for vectorization.
Building a Classifier with Spark ML
As we saw in our last example, in order to use sklearn to classify or regress all 5.4 million usable flight on-time performance records for 2015, we had to sample down to 1 million records. There simply isn’t enough RAM on one typical machine to train the model on all the training data. This is where Spark MLlib comes in. From the Machine Learning Library (MLlib) Guide:
Its goal is to make practical machine learning scalable and easy. At a high level, it provides tools such as:
ML Algorithms: common learning algorithms such as classification, regression, clustering, and collaborative filtering
Featurization: feature extraction, transformation, dimensionality reduction, and selection
Pipelines: tools for constructing, evaluating, and tuning ML Pipelines
Persistence: saving and loading algorithms, models, and Pipelines
Utilities: linear algebra, statistics, data handling, etc.
MLlib uses Spark DataFrames as the foundation for tables and records. Although some RDD-based methods still remain, they are not under active development.
Note that we are using Spark MLlib because it can work across many machines to handle large volumes of data. We’re only using one machine in this book’s examples, but the code and the process are identical regardless of the size of the cluster. By learning to build a predictive model with Spark MLlib on a single machine, you are learning to operate a cluster of 1,000 machines. Services like Amazon Elastic MapReduce make booting a working Spark cluster a matter of point-and-click. We covered doing analytics in the cloud in the first edition, but removed that chapter to make room for other content in this edition.
Now, follow along as we build a classifier using PySpark and Spark ML in ch07/train_spark_mllib_model.py.
Loading Our Training Data with a Specified Schema
First we must load our training data back into Spark. When we first loaded our data, Spark SQL had trouble detecting our timestamp and date types, so we must specify a schema for Spark to go on (just like in our sklearn model, it is important for our training data to be typed correctly for it to be interpreted for statistical inference):
End of explanation
"""
null_counts = [(column, features.where(features[column].isNull()).count()) \
for column in features.columns]
cols_with_nulls = filter(lambda x: x[1] > 0, null_counts)
print(list(cols_with_nulls))
"""
Explanation: With our data loaded, now we need to prepare our data for classification.
Addressing Nulls
Before we can use the tools that PySpark’s MLlib provides us, we must eliminate null values from fields in rows of our DataFrames. Otherwise our code will crash as we start to employ tools from pyspark.ml.features.
To detect null values in columns, we need only loop through our columns and inspect them with pyspark.sql.Column.isNull:
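The counting pattern itself is simple; over a plain-Python list of dicts (made-up rows, just to illustrate the idea), it looks like this:

```python
# Hypothetical rows standing in for the features DataFrame
rows = [
    {"ArrDelay": 5.0, "DepDelay": None},
    {"ArrDelay": None, "DepDelay": 14.0},
    {"ArrDelay": 3.0, "DepDelay": 0.0},
]
columns = ["ArrDelay", "DepDelay"]

# Count nulls per column, then keep only the columns that have any
null_counts = [(c, sum(1 for r in rows if r[c] is None)) for c in columns]
cols_with_nulls = [(c, n) for c, n in null_counts if n > 0]
```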
End of explanation
"""
filled_features = features.na.fill({'DepDelay': 0})
"""
Explanation: If null values are found, we need only employ DataFrame.na.fill to fill them. Supply fillna with a dict with the column name as the key and the column’s fill value as the value, and it will fill in the column name with that value:
End of explanation
"""
#
# Add a Route variable to replace FlightNum
#
features_with_route = features.withColumn(
'Route',
F.concat(
features.Origin,
F.lit('-'),
features.Dest
)
)
features_with_route.select("Origin", "Dest", "Route").show(5)
"""
Explanation: In our dataset, no nulls are found, but there usually are some, so take note of this step for the future. It will save you trouble as you start engineering and vectorizing your features.
Replacing FlightNum with Route
At this point it occurs to us that FlightNums will change, but routes do not… so long as we define a route as a pair of cities. So, let’s add a column Route, which is defined as the concatenation of Origin, -, and Dest, such as ATL-SFO. This will very simply inform our model whether certain routes are frequently delayed, separately from whether certain airports tend to have delays for inbound or outbound flights.
To add Route, we need to use two utilities from the pyspark.sql.functions package. The concat function concatenates multiple strings together, and the lit function is needed to specify a literal string to concatenate:
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
# Look at overall histogram
data_tuple = (
features
.select("ArrDelay")
.rdd
.flatMap(lambda x: x)
.histogram([-87.0, -60, -30, -15, 0, 15, 30, 60, 120])
)
data_tuple
arr_delay = features.select("ArrDelay")
sample = arr_delay.filter(arr_delay.ArrDelay < 120).sample(0.10)
sample.toPandas().hist('ArrDelay', bins=20)
"""
Explanation: RDD Alternative RouteNum
Note that if we wanted to, we could convert the DataFrame to an RDD of records, where we could run something like the following:
```python
def add_route(record):
record = record.asDict()
record['Route'] = record['Origin'] + "-" + record['Dest']
return record
features_with_route_rdd = features.rdd.map(add_route)
```
The reason to use DataFrames is that they are much, much faster than RDDs, even if the API is slightly more complex.
Bucketizing a Continuous Variable for Classification
Classification doesn’t predict a continuous variable like flight delay in minutes; classifiers predict two or more discrete categories. Therefore, in order to build a classifier for flight delays, we have to create categories out of the delays in minutes.
Determining Arrival Delay Buckets
In the first run-through of writing the book, we used the same buckets as Bay Area startup FlightCaster (founded in 2009 and acquired in 2011 by Next Jump): on time, slightly late, and very late. The values corresponding to these bins come from a natural split in how people think about time in terms of minutes, hours, and days. One hour is an intuitive value for the high end of slightly late. Over one hour would then be very late. “On time” would be within 15 minutes of the scheduled arrival time. If such natural bins weren’t available, you would want to closely analyze the distribution of your continuous variable to determine what buckets to use.
As it turned out, this analysis was necessary in our case too. When writing the book, while debugging an issue with our Spark ML classifier model, we did an analysis where we found that a different set of categories was needed. Check out the Jupyter notebook at ch09/Debugging Prediction Problems.ipynb for details. Note that GitHub supports the display of Jupyter notebooks, which makes them a really powerful way to share data analyses: just commit and push to a GitHub repository and you’ve got a shared report. When you are doing iterative visualization, notebooks are very handy.
Iterative Visualization with Histograms
To begin, check out the overall distribution of flight delays, which we compute by converting the features DataFrame to an RDD and then employing RDD.histogram. RDD.histogram returns two lists: a set of buckets, and the count for each bucket. We then use matplotlib.pyplot to create a histogram. Note that because our buckets are already counted, we can’t use pyplot.hist. Instead, we employ pyplot.bar to create a histogram from our precomputed buckets and their corresponding counts.
To gather our data, we select the ArrDelay column, convert the DataFrame to an RDD, and call RDD.flatMap to convert our records into an RDD containing a single list of floats:
End of explanation
"""
heights = np.array(data_tuple[1])
# There is one more bin edge than there are bar heights
full_bins = data_tuple[0]
heights, full_bins
"""
Explanation: Next, we extract the heights of the bars and the bin definitions from the tuple returned by histogram:
End of explanation
"""
# Bars are drawn from the left
mid_point_bins = full_bins[:-1]
mid_point_bins
"""
Explanation: Since bars are drawn from the left, we remove the rightmost item in the bins list:
End of explanation
"""
# The width of a bar should be the range it maps in the data
widths = [abs(i - j) for i, j in zip(full_bins[:-1], full_bins[1:])]
widths
"""
Explanation: Next, we use a list comprehension to determine the range between the values defining the buckets, which gives us the width of the bars. We’ve decided that the bars should be as wide as the data they measure:
End of explanation
"""
# And now the bars should plot nicely
bar = plt.bar(mid_point_bins, heights, width=widths, color='b')
"""
Explanation: Finally, we plot the bar chart, specifying our bar widths (they draw from the left) and coloring our bars blue:
End of explanation
"""
def create_hist(rdd_histogram_data):
"""Given an RDD.histogram, plot a pyplot histogram"""
heights = np.array(rdd_histogram_data[1])
full_bins = rdd_histogram_data[0]
mid_point_bins = full_bins[:-1]
widths = [abs(i - j) for i, j in zip(full_bins[:-1], full_bins[1:])]
bar = plt.bar(mid_point_bins, heights, width=widths, color='b')
return bar
"""
Explanation: We can summarize the previous operations in a function called create_hist, which we will reuse to draw other histograms like this one:
End of explanation
"""
%matplotlib inline
buckets = [-87.0, 15, 60, 200]
rdd_histogram_data = features\
.select("ArrDelay")\
.rdd\
.flatMap(lambda x: x)\
.histogram(buckets)
create_hist(rdd_histogram_data)
"""
Explanation: To start, let’s visualize the first set of buckets we considered: –87 to 15, 15 to 60, and 60 to 200. Note that the first item in the bucket definition, –87, comes from the minimum delay in the dataset. We use 200 to keep from distorting the chart, although the maximum delay is actually 1,971 minutes:
End of explanation
"""
%matplotlib inline
buckets = [-87.0, -30, -15, 0, 15, 30, 120]
rdd_histogram_data = (
features
.select("ArrDelay")
.rdd
.flatMap(lambda x: x)
.histogram(buckets)
)
create_hist(rdd_histogram_data)
"""
Explanation: Wow. This is a very distorted distribution. We have created an imbalanced class set from one that should ideally be balanced. This is a problem, because imbalanced classes can produce classifiers that only predict the most common value, and yet still seem fairly accurate. At best, this label set would have made things hard for our classifier when there is no benefit to doing so. We need to rethink our labels.
Let’s try something a little more granular and check the distribution using the set of buckets: [-87.0, -30, -15, 0, 15, 30, 120]:
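To see why the imbalance matters, consider the accuracy of a degenerate model that always predicts the majority bucket. With made-up counts for a skewed three-bucket scheme:

```python
# Hypothetical per-bucket counts for an imbalanced label set
bucket_counts = {"on_time": 9000, "slightly_late": 700, "very_late": 300}

total = sum(bucket_counts.values())
# A constant predictor that always answers "on_time" learns nothing...
majority_baseline = max(bucket_counts.values()) / total
# ...yet its accuracy is 0.9, which looks deceptively good
```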
End of explanation
"""
%matplotlib inline
buckets = [-87.0, -15, 0, 15, 30, 120]
rdd_histogram_data = (
features
.select("ArrDelay")
.rdd
.flatMap(lambda x: x)
.histogram(buckets)
)
create_hist(rdd_histogram_data)
"""
Explanation: Hmm... this looks better, but the leftmost and rightmost buckets look too small. Let’s combine the –87 to –30 and –30 to –15 buckets, and try again:
End of explanation
"""
%matplotlib inline
buckets = [-87.0, -15, 0, 30, 120]
rdd_histogram_data = (
features
.select("ArrDelay")
.rdd
.flatMap(lambda x: x)
.histogram(buckets)
)
create_hist(rdd_histogram_data)
"""
Explanation: This looks better! However, the 15–30 bucket seems too small. Let’s merge this bucket with the 0–15 bucket and try again:
End of explanation
"""
#
# Categorize or 'bucketize' the arrival delay field using a DataFrame UDF
#
# The buckets are numeric, so declare a numeric return type for the UDF
from pyspark.sql.types import DoubleType

@F.udf(DoubleType())
def bucketize_arr_delay(arr_delay: float) -> float:
"""Convert the numeric delays into buckets"""
bucket = None
if arr_delay <= -15.0:
bucket = 0.0
elif arr_delay > -15.0 and arr_delay <= 0.0:
bucket = 1.0
elif arr_delay > 0.0 and arr_delay <= 15.0:
bucket = 2.0
elif arr_delay > 15.0 and arr_delay <= 30.0:
bucket = 3.0
elif arr_delay > 30.0:
bucket = 4.0
return bucket
# Add a category column via pyspark.sql.DataFrame.withColumn
manual_bucketized_features = features_with_route.withColumn(
"ArrDelayBucket",
bucketize_arr_delay(features['ArrDelay'])
)
manual_bucketized_features.select("ArrDelay", "ArrDelayBucket").limit(10).toPandas()
"""
Explanation: Ah-ha! That looks pretty good. The buckets end up being “very early” (> 15 minutes early), “early” (0–15 minutes early), “late” (0–30 minutes late), and “very late” (30+ minutes late). These aren’t perfect in terms of usability, but I think they can work. Ideally the distribution in the buckets would be equal, but they are close enough.
Bucket Quest Conclusion
We have now determined the right bucket scheme for converting a continuous variable, flight delays, into four categories. Note how we used a Jupyter notebook along with PySpark and PyPlot to iteratively visualize the flights that fell into each bucketing scheme. This notebook is now a shareable asset. This would serve as a great jumping-off point for a discussion involving the data scientist who created the notebook, the product manager for the product, and the engineers working on the project.
Now that we’ve got our buckets, let’s apply them and get on with our prediction!
Bucketizing with a DataFrame UDF
We can bucketize our data in one of two ways: using a DataFrame UDF, or with pyspark.ml.feature.Bucketizer.
Let’s begin by using a UDF to categorize our data in accordance with the scheme in the preceding section. We’ll create a function, bucketize_arr_delay, to achieve the “bucketizing,” and then wrap it in a UDF, declaring the DataType of the values it returns. Next, we’ll apply the UDF to create a new column via DataFrame.withColumn. Finally, we’ll select ArrDelay and ArrDelayBucket and see how they compare:
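The boundary conditions of a scheme like this are easy to get wrong, so it is worth checking the edges outside Spark first. This sketch mirrors the UDF body in plain Python:

```python
def bucketize(arr_delay):
    """Map a delay in minutes to buckets 0.0-4.0 (edges at -15, 0, 15, 30)."""
    if arr_delay <= -15.0:
        return 0.0
    elif arr_delay <= 0.0:
        return 1.0
    elif arr_delay <= 15.0:
        return 2.0
    elif arr_delay <= 30.0:
        return 3.0
    return 4.0
```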
End of explanation
"""
#
# Use pyspark.ml.feature.Bucketizer to bucketize ArrDelay
#
from pyspark.ml.feature import Bucketizer
splits = [-float("inf"), -15.0, 0, 15.0, 30.0, float("inf")]
bucketizer = Bucketizer(
splits=splits,
inputCol="ArrDelay",
outputCol="ArrDelayBucket"
)
ml_bucketized_features = bucketizer.transform(features_with_route)
# Check the buckets out
ml_bucketized_features.select("ArrDelay", "ArrDelayBucket").limit(10).toPandas()
ml_bucketized_features.limit(3).toPandas()
"""
Explanation: You can see that ArrDelay is mapped to ArrDelayBucket as we indicated.
Bucketizing with pyspark.ml.feature.Bucketizer
Creating buckets for classification is simpler using Bucketizer. We simply define our splits in a list, instantiate our Bucketizer, and then apply a transformation on our features DataFrame. We’ll do this transformation for the ArrDelay field:
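Bucketizer assigns each value to the half-open interval [splits[i], splits[i+1]). The same lookup can be sketched with the standard library's bisect; note that exact boundary values fall into the upper bucket:

```python
from bisect import bisect_right

splits = [float("-inf"), -15.0, 0.0, 15.0, 30.0, float("inf")]

def bucket_index(value):
    # Intervals are left-closed: [splits[i], splits[i+1])
    return bisect_right(splits, value) - 1
```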
End of explanation
"""
from pyspark.ml.feature import StringIndexer, VectorAssembler
"""
Explanation: You can see the result is the same as with our UDF buckets. Now that we’ve created the ArrDelayBucket fields, we’re ready to vectorize our features using tools from pyspark.ml.feature.
Feature Vectorization with pyspark.ml.feature
Spark MLlib has an extremely rich library of functions for various machine learning tasks, so it is helpful when using MLlib to have the API documentation open in a browser tab, along with the DataFrame API documentation. While an RDD-based API does exist, we’ll be using the DataFrame-based MLlib routines.
Vectorizing Categorical Columns with Spark ML
To follow along with this section, open the pyspark.ml.feature documentation. First we need to import our tools from pyspark.ml.feature:
End of explanation
"""
# Turn category fields into categoric feature vectors, then drop
# intermediate fields
for column in ["Carrier", "DayOfMonth", "DayOfWeek", "DayOfYear",
"Origin", "Dest", "Route"]:
string_indexer = StringIndexer(
inputCol=column,
outputCol=column + "_index"
)
ml_bucketized_features = (
string_indexer.fit(ml_bucketized_features)
.transform(ml_bucketized_features)
)
# Check out the indexes
ml_bucketized_features.limit(5).toPandas()
"""
Explanation: Then we need to index our nominal or categorical string columns into sets of vectors made up of binary variables for every unique value found in a given column. To achieve this, for each categorical column (be it a string or number), we need to:
Configure and create a StringIndexer to index the column into one number per unique value.
Execute fit on the StringIndexer to get a StringIndexerModel.
Run the training data through StringIndexerModel.transform to index the strings into a new column.
The code to implement these steps for each categorical variable column looks like this:
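Conceptually, a StringIndexer builds a label-to-number map from the training data, with the most frequent label receiving index 0.0. A plain-Python sketch (the alphabetical tie-breaking here is an assumption for determinism, not necessarily Spark's exact rule):

```python
from collections import Counter

def fit_string_index(values):
    # Most frequent label -> 0.0, next -> 1.0, and so on
    freq = Counter(values)
    ordered = sorted(freq, key=lambda label: (-freq[label], label))
    return {label: float(i) for i, label in enumerate(ordered)}

carriers = ["WN", "AA", "WN", "DL", "WN", "AA"]
index = fit_string_index(carriers)          # the "fit" step
indexed = [index[c] for c in carriers]      # the "transform" step
```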
End of explanation
"""
# Handle continuous numeric fields by combining them into one feature vector
numeric_columns = ["DepDelay", "Distance"]
index_columns = [
"Carrier_index",
"DayOfMonth_index",
"DayOfWeek_index",
"DayOfYear_index",
    "Origin_index",
"Dest_index",
"Route_index"
]
vector_assembler = VectorAssembler(
inputCols=numeric_columns + index_columns,
outputCol="Features_vec"
)
final_vectorized_features = vector_assembler.transform(ml_bucketized_features)
# Drop the index columns
for column in index_columns:
final_vectorized_features = final_vectorized_features.drop(column)
# Check out the features
final_vectorized_features.limit(5).toPandas()
final_vectorized_features.select("Features_vec").show(10, False)
final_vectorized_features = final_vectorized_features.filter(final_vectorized_features.FlightDate < '2015-02-01')
"""
Explanation: Having indexed our categorical features, now we combine them with our numeric features into a single feature vector for our classifier.
Vectorizing Continuous Variables and Indexes with Spark ML
As they are already numeric, there isn’t much work required to vectorize our continuous numeric features. And now that we have indexes, we have a numeric representation of each string column. Now we simply employ VectorAssembler to combine the numeric and index columns into a single feature Vector. Then we drop the index columns, as they aren’t needed anymore:
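Conceptually, VectorAssembler concatenates the named columns of each row into one flat vector of doubles. A plain-Python sketch with a made-up row:

```python
def assemble(row, input_cols):
    # One output vector per row, in the order the columns were listed
    return [float(row[c]) for c in input_cols]

row = {"DepDelay": 14.0, "Distance": 368.0,
       "Carrier_index": 1.0, "Origin_index": 3.0}
features_vec = assemble(row, ["DepDelay", "Distance",
                              "Carrier_index", "Origin_index"])
```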
End of explanation
"""
# Test/train split
training_data, test_data = final_vectorized_features.randomSplit([0.8, 0.2], seed=31337)
training_data.count(), test_data.count()
"""
Explanation: Now we’re ready to train our classifier!
Classification with Spark ML
Our features are prepared in a single field, Features_vec, and we’re ready to compose the experiment we’ll run as part of creating our classifier. To drive our experiment, we require a training dataset and a test dataset. As we discussed earlier, a training dataset is used to train the model and a test set is used to gauge its accuracy. Cross-validation ensures that the models we create in the lab perform well in the real world, and not just on paper.
Test/Train Split with DataFrames
As before with scikit-learn, we need to cross-validate. This means splitting our data between a training set and a test set.
The DataFrame API makes this easy with DataFrame.randomSplit. This takes an array featuring the ratios of the splits, which should add up to 1:
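Under the hood, a weighted random split just assigns each row to a partition with probability proportional to its weight. A plain-Python sketch (illustrative, not Spark's implementation):

```python
import random

def random_split(rows, weights, seed):
    rng = random.Random(seed)
    total = float(sum(weights))
    # Cumulative edges, e.g. [0.8, 1.0] for weights [0.8, 0.2]
    edges, running = [], 0.0
    for w in weights:
        running += w
        edges.append(running)
    parts = [[] for _ in weights]
    for row in rows:
        r = rng.random() * total
        for i, edge in enumerate(edges):
            if r <= edge:
                parts[i].append(row)
                break
    return parts

train, test = random_split(list(range(10000)), [0.8, 0.2], seed=31337)
```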
End of explanation
"""
# Instantiate and fit random forest classifier
from pyspark.ml.classification import RandomForestClassifier
rfc = RandomForestClassifier(
featuresCol="Features_vec",
labelCol="ArrDelayBucket",
maxBins=4657,
maxMemoryInMB=1024,
seed=31337
)
model = rfc.fit(training_data)
model
"""
Explanation: Creating and Fitting a Model
It takes three lines to import, instantiate, and fit a random forest classifier using our training dataset. Note that we’re using a random forest classifier because this is the most accurate decision tree model available in Spark MLlib that can classify into multiple categories. These classifiers also offer feature importances, which we will use in Chapter 9 to improve the model.
Also note that we run the model once, and it throws an exception because we have more than 32 unique values for one feature, the default value for maxBins. We set maxBins to the value suggested by the exception, 4657, and the model fits successfully. Note that this can take a while, so grab some coffee:
End of explanation
"""
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
# Evaluate model using test data
predictions = model.transform(test_data)
(
predictions.select(
"ArrDelayBucket",
"Features_vec",
"rawPrediction",
"probability",
"prediction"
)
.sample(0.001)
.limit(5)
.toPandas()
)
evaluator = MulticlassClassificationEvaluator(
labelCol="ArrDelayBucket", metricName="accuracy"
)
accuracy = evaluator.evaluate(predictions)
print("Accuracy = {}".format(accuracy))
evaluator = MulticlassClassificationEvaluator(
labelCol="ArrDelayBucket", metricName="f1"
)
accuracy = evaluator.evaluate(predictions)
print("F1 = {}".format(accuracy))
"""
Explanation: Next, we need to evaluate the classifier we’ve created.
EVALUATING A MODEL
We can evaluate the performance of our classifier using the MulticlassClassificationEvaluator, which simply wraps the predictions we get from running pyspark.ml.classification.RandomForestClassificationModel.transform on the test dataset. Several metrics are available, but we’ll start with the raw accuracy:
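The accuracy metric itself is just the fraction of test rows whose prediction matches the label. A plain-Python sketch over (label, prediction) pairs:

```python
def accuracy(pairs):
    # pairs: iterable of (actual_label, predicted_label)
    pairs = list(pairs)
    correct = sum(1 for label, pred in pairs if label == pred)
    return correct / len(pairs)

pairs = [(1.0, 1.0), (2.0, 1.0), (4.0, 4.0), (1.0, 1.0), (3.0, 2.0)]
acc = accuracy(pairs)  # 3 of 5 correct
```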
End of explanation
"""
# Sanity-check a sample
predictions.sample(False, 0.001, 18).orderBy("CRSDepTime").limit(5).toPandas()
"""
Explanation: Not great, but good enough for now. Don’t worry, we’ll work on making the model more accurate in Chapter 9.
Let’s lay eyes on some of the predictions, to see that they’re sane. At one point we had a bug where all predictions were 0.0. Seeing a sample with different prediction values takes a bit of cleverness because of the way the transformation sorts the data, so we order the sample by the reservation system departure time before displaying it:
End of explanation
"""
predictions.groupBy("prediction").count().orderBy('prediction').toPandas()
predictions.sample(0.001).sort("CRSDepTime").select("ArrDelayBucket","prediction").limit(20).toPandas()
"""
Explanation: Now let’s see the distribution of the Prediction field, to verify we don’t have that same bug:
End of explanation
"""
# Handle continuous numeric fields by combining them into one feature vector
numeric_columns = ["Distance"]
index_columns = [
"Carrier_index",
"DayOfMonth_index",
"DayOfWeek_index",
"DayOfYear_index",
    "Origin_index",
"Dest_index",
"Route_index"
]
vector_assembler = VectorAssembler(
inputCols=numeric_columns + index_columns,
outputCol="Features_vec"
)
final_vectorized_features = vector_assembler.transform(ml_bucketized_features)
# Drop the index columns
for column in index_columns:
final_vectorized_features = final_vectorized_features.drop(column)
# Check out the features
final_vectorized_features.limit(5).toPandas()
from pyspark.ml import Pipeline
from pyspark.ml.regression import GBTRegressor
from pyspark.ml.feature import VectorIndexer
from pyspark.ml.evaluation import RegressionEvaluator
# Train a GBT model.
gbt = GBTRegressor(
featuresCol="Features_vec",
labelCol="DepDelay",
maxIter=10,
maxBins=4657
)
gbt_model = gbt.fit(final_vectorized_features)
gbt_model
predictions = gbt_model.transform(final_vectorized_features)
error = predictions.select(
"Origin",
"Dest",
"Carrier",
"DepDelay",
"prediction",
(F.col("DepDelay") - F.col("prediction")).alias("Error")
)
(
error
.sample(False, 0.001, 10)
.orderBy("CRSDepTime")
.limit(10)
.toPandas()
)
import pyspark.sql.functions as F
(
error.select(
F.percentile_approx("Error", 0.5, 10000).alias("Median Error"),
F.stddev("Error").alias("STD Error"),
F.mean("Error").alias("Average Error")
)
.toPandas()
)
"""
Explanation: These “sanity checks” seem okay!
Evaluation Conclusion
With Spark, we can create, train, and evaluate a classifier or regression in a few lines of code. Surprisingly, it is even more powerful than scikit-learn. But to be useful, we’ve got to deploy our prediction. We’ll do that in the next chapter.
Now we have a problem—how do we deploy Spark ML models? Unlike scikit-learn models, we can’t simply place them inside our web application as an API, because they require the Spark platform to run. This is something we will address in the next chapter.
Exercises
Using the code above as a guide, create a model that predicts departure delay, DepDelay. Be sure not to use arrival delay, ArrDelay, in your model :)
End of explanation
"""
# Trim our columns and shorten our DataFrame/column names for brevity
p = predictions.select(
predictions.ArrDelayBucket.alias("actual"),
predictions.prediction
)
# Get a list of all labels in the training data
buckets_df = p.groupBy("actual").count()
buckets = buckets_df.rdd.map(lambda x: x.actual).collect()
buckets
# Now compute the confusion matrix, where: "Each element i,j of the matrix would be
# the number of items with true class i that were classified as being in class j."
rows = []
for actual in buckets:
column = []
for prediction in buckets:
value = p.filter(p.actual == actual).filter(p.prediction == prediction).count()
column.append(value)
rows.append(column)
rows
%matplotlib inline
conf_arr = np.array(rows)
norm_conf = []
for i in conf_arr:
    row_total = sum(i, 0)
    tmp_arr = [float(j) / float(row_total) for j in i]
    norm_conf.append(tmp_arr)
fig = plt.figure(figsize=(8, 8))
plt.clf()
ax = fig.add_subplot(111)
ax.set_aspect(1)
res = ax.imshow(np.array(norm_conf), cmap='summer',
interpolation='nearest')
width, height = conf_arr.shape
for x in range(width):
for y in range(height):
ax.annotate(str(conf_arr[x][y]), xy=(y, x),
horizontalalignment='center',
verticalalignment='center')
cb = fig.colorbar(res)
plt.xticks(range(width), ['0', '1', '2', '3', '4'])
plt.yticks(range(height), ['0', '1', '2', '3', '4'])
plt.show()
"""
Explanation: Conclusion
In this chapter we’ve taken what we know about the past to predict the future.
In the next chapter, we’ll drill down into this prediction to drive a new action that can take advantage of it.
Confusion Matrix
A confusion matrix is a way of understanding the way your model behaves when it is confused. It shows how often your model is accurate for each label in the actual data versus the predicted data.
We can easily calculate a confusion matrix by looping through all buckets twice, once for the true values and once for the predicted values. "Each element i,j of the matrix would be the number of items with true class i that were classified as being in class j." For more information, see here.
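The double loop above issues one Spark count per cell; the same matrix can also be computed in a single pass over (actual, predicted) pairs, sketched here in plain Python:

```python
def confusion_matrix(pairs, labels):
    # Element [i][j]: count of items with true class labels[i]
    # that were predicted as labels[j]
    index = {label: k for k, label in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for actual, predicted in pairs:
        matrix[index[actual]][index[predicted]] += 1
    return matrix

pairs = [(0, 0), (0, 1), (1, 1), (1, 1), (2, 0)]
cm = confusion_matrix(pairs, labels=[0, 1, 2])
```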
End of explanation
"""
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
import matplotlib.pylab as plt
import numpy as np
from distutils.version import StrictVersion
import sklearn
print(sklearn.__version__)
assert StrictVersion(sklearn.__version__) >= StrictVersion('0.18.1')
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
assert StrictVersion(tf.__version__) >= StrictVersion('1.1.0')
import keras
print(keras.__version__)
assert StrictVersion(keras.__version__) >= StrictVersion('2.0.0')
"""
Explanation: Retrain a CNN, part 3, fine tuning bottleneck layer
https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
based on https://gist.github.com/fchollet/7eb39b44eb9e16e59632d25fb3119975 including comments to get things to work (gist does NOT just work out of the box)
End of explanation
"""
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Model, Sequential
from keras.layers import Dropout, Flatten, Dense, Input
# dimensions of our images.
img_width, img_height = 150, 150
train_data_dir = 'data/train'
validation_data_dir = 'data/validation'
nb_train_samples = 2000
nb_validation_samples = 800
input_tensor = Input(shape=(img_width, img_height, 3))
base_model = applications.VGG16(weights='imagenet', include_top=False, input_tensor=input_tensor)
base_model.summary()
# would be (None, None, 512), but this is not specific enough for Flatten layer further down...
bottleneck_output_shape = base_model.output_shape[1:]
# so, we manually set this to the dimension we know it really has from previous step
bottleneck_output_shape = (4, 4, 512)
# build a classifier model to put on top of the convolutional model
top_model = Sequential()
top_model.add(Flatten(input_shape=bottleneck_output_shape))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))
top_model.summary()
# note that it is necessary to start with a fully-trained
# classifier, including the top classifier,
# in order to successfully do fine-tuning
top_model_weights_path = 'bottleneck_fc_model.h5'
top_model.load_weights(top_model_weights_path)
model = Model(inputs=base_model.input, outputs=top_model(base_model.output))
model.layers
len(model.layers)
first_conv_layer = model.layers[1]
first_conv_layer.trainable
first_max_pool_layer = model.layers[3]
first_max_pool_layer.trainable
# set the first 15 layers (up to the last conv block)
# to non-trainable (weights will not be updated)
# so, the general features are kept and we (hopefully) do not have overfitting
non_trainable_layers = model.layers[:15]
non_trainable_layers
for layer in non_trainable_layers:
layer.trainable = False
first_max_pool_layer.trainable
first_conv_layer.trainable
# compile the model with a SGD/momentum optimizer
# and a very slow learning rate
# make updates very small and non-adaptive so we do not ruin the previously learned weights
model.compile(loss='binary_crossentropy',
optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
metrics=['accuracy'])
model.summary()
# this might actually take a while even on GPU
# ~ 92% validation accuracy seems to be realistic
epochs = 50
batch_size = 16
# ... and viz progress in tensorboard to see what is going on
!rm -rf tf_log/
tb_callback = keras.callbacks.TensorBoard(log_dir='./tf_log')
# prepare data augmentation configuration
train_datagen = ImageDataGenerator(
rescale=1. / 255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_data_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='binary')
"""
Explanation: This script goes along the blog post
"Building powerful image classification models using very little data"
from blog.keras.io.
It uses data that can be downloaded at:
https://www.kaggle.com/c/dogs-vs-cats/data
In our setup, we:
- created a data/ folder
- created train/ and validation/ subfolders inside data/
- created cats/ and dogs/ subfolders inside train/ and validation/
- put the cat pictures index 0-999 in data/train/cats
- put the cat pictures index 1000-1400 in data/validation/cats
- put the dogs pictures index 12500-13499 in data/train/dogs
- put the dog pictures index 13500-13900 in data/validation/dogs
So that we have 1000 training examples for each class, and 400 validation examples for each class.
In summary, this is our directory structure:
data/
train/
dogs/
dog001.jpg
dog002.jpg
...
cats/
cat001.jpg
cat002.jpg
...
validation/
dogs/
dog001.jpg
dog002.jpg
...
cats/
cat001.jpg
cat002.jpg
...
End of explanation
"""
# due to very small learning rate
# takes ~ 30s per epoch on AWS K80, with 50 epochs: ~ 30 minutes
# on CPU it might take up to 20 times longer
# fine-tune the model
model.fit_generator(
train_generator,
steps_per_epoch=nb_train_samples // batch_size,
epochs=epochs,
validation_data=validation_generator,
validation_steps=nb_validation_samples // batch_size,
callbacks=[tb_callback])
model.save('models/cat-dog-vgg-retrain.hdf5')
"""
Explanation:
End of explanation
"""
|
jpilgram/phys202-2015-work | assignments/assignment12/FittingModelsEx01.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
"""
Explanation: Fitting Models Exercise 1
Imports
End of explanation
"""
a_true = 0.5
b_true = 2.0
c_true = -4.0
"""
Explanation: Fitting a quadratic curve
For this problem we are going to work with the following model:
$$ y_{model}(x) = a x^2 + b x + c $$
The true values of the model parameters are as follows:
End of explanation
"""
# YOUR CODE HERE
#raise NotImplementedError()
x = np.linspace(-5,5,30)
dy = 2.0
np.random.seed(0)
y = a_true*x**2 + b_true*x + c_true + np.random.normal(0.0, dy, size = 30)
plt.scatter(x,y)
plt.title('Random Data')
plt.box(False);
assert True # leave this cell for grading the raw data generation and plot
"""
Explanation: First, generate a dataset using this model using these parameters and the following characteristics:
For your $x$ data use 30 uniformly spaced points between $[-5,5]$.
Add a noise term to the $y$ value at each point that is drawn from a normal distribution with zero mean and standard deviation 2.0. Make sure you add a different random number to each point (see the size argument of np.random.normal).
After you generate the data, make a plot of the raw data (use points).
End of explanation
"""
# YOUR CODE HERE
#raise NotImplementedError()
def chi2(theta, x, y, dy):
#theta = [a,b,c]
return np.sum(((y - theta[0]*(x**2) - theta[1]*x - theta[2])/dy)**2)
guess = [0.4, 2.5, -3.8]
sol = opt.minimize(chi2, guess, args=(x,y,dy))
bestfit = sol.x
def uncert(theta, x, y, dy):
return (y - theta[0]*(x**2) - theta[1]*x - theta[2])/dy
deviation = opt.leastsq(uncert, guess, args=(x, y, dy), full_output=True)
best = deviation[0]
best_div = deviation[1]
print('a = {0:.3f} +/- {1:.3f}'.format(best[0], np.sqrt(best_div[0,0])))
print('b = {0:.3f} +/- {1:.3f}'.format(best[1], np.sqrt(best_div[1,1])))
print('c = {0:.3f} +/- {1:.3f}'.format(best[2], np.sqrt(best_div[2,2])))
yfit = bestfit[0]*(x**2) + bestfit[1]*x + bestfit[2]
plt.plot(x,yfit)
plt.scatter(x,y)
plt.title('Random Data with Best Fit Curve')
plt.box(False);
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
"""
Explanation: Now fit the model to the dataset to recover estimates for the model's parameters:
Print out the estimates and uncertainties of each parameter.
Plot the raw data and best fit of the model.
End of explanation
"""
|
tensorflow/examples | courses/udacity_intro_to_tensorflow_for_deep_learning/l10c02_nlp_multiple_models_for_predicting_sentiment.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
import tensorflow_datasets as tfds
import numpy as np
"""
Explanation: Using LSTMs, CNNs, GRUs with a larger dataset
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l10c02_nlp_multiple_models_for_predicting_sentiment.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l10c02_nlp_multiple_models_for_predicting_sentiment.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
In this colab, you use different kinds of layers to see how they affect the model.
You will use the glue/sst2 dataset, which is available through tensorflow_datasets.
The General Language Understanding Evaluation (GLUE) benchmark (https://gluebenchmark.com/) is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
These resources include the Stanford Sentiment Treebank (SST) dataset that consists of sentences from movie reviews and human annotations of their sentiment. This colab uses version 2 of the SST dataset.
The splits are:
train 67,349
validation 872
and the column headings are:
sentence
label
For more information about the dataset, see https://www.tensorflow.org/datasets/catalog/glue#gluesst2
End of explanation
"""
# Get the dataset.
# It has 70000 items, so might take a while to download
dataset, info = tfds.load('glue/sst2', with_info=True)
print(info.features)
print(info.features["label"].num_classes)
print(info.features["label"].names)
# Get the training and validation datasets
dataset_train, dataset_validation = dataset['train'], dataset['validation']
dataset_train
# Print some of the entries
for example in dataset_train.take(2):
review, label = example["sentence"], example["label"]
print("Review:", review)
print("Label: %d \n" % label.numpy())
# Get the sentences and the labels
# for both the training and the validation sets
training_reviews = []
training_labels = []
validation_reviews = []
validation_labels = []
# The dataset has 67,000 training entries, but that's a lot to process here!
# If you want to take the entire dataset: WARNING: takes longer!!
# for item in dataset_train.take(-1):
# Take 10,000 reviews
for item in dataset_train.take(10000):
review, label = item["sentence"], item["label"]
training_reviews.append(str(review.numpy()))
training_labels.append(label.numpy())
print ("\nNumber of training reviews is: ", len(training_reviews))
# print some of the reviews and labels
for i in range(0, 2):
print (training_reviews[i])
print (training_labels[i])
# Get the validation data
# there's only about 800 items, so take them all
for item in dataset_validation.take(-1):
review, label = item["sentence"], item["label"]
validation_reviews.append(str(review.numpy()))
validation_labels.append(label.numpy())
print ("\nNumber of validation reviews is: ", len(validation_reviews))
# Print some of the validation reviews and labels
for i in range(0, 2):
print (validation_reviews[i])
print (validation_labels[i])
"""
Explanation: Get the dataset
End of explanation
"""
# There's a total of 21224 words in the reviews
# but many of them are irrelevant like with, it, of, on.
# If we take a subset of the training data, then the vocab
# will be smaller.
# A reasonable review might have about 50 words or so,
# so we can set max_length to 50 (but feel free to change it as you like)
vocab_size = 4000
embedding_dim = 16
max_length = 50
trunc_type='post'
pad_type='post'
oov_tok = "<OOV>"
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
tokenizer = Tokenizer(num_words = vocab_size, oov_token=oov_tok)
tokenizer.fit_on_texts(training_reviews)
word_index = tokenizer.word_index
"""
Explanation: Tokenize the words and sequence the sentences
End of explanation
"""
# Pad the sequences so that they are all the same length
training_sequences = tokenizer.texts_to_sequences(training_reviews)
training_padded = pad_sequences(training_sequences,maxlen=max_length,
truncating=trunc_type, padding=pad_type)
validation_sequences = tokenizer.texts_to_sequences(validation_reviews)
validation_padded = pad_sequences(validation_sequences, maxlen=max_length,
                                  truncating=trunc_type, padding=pad_type)  # keep padding consistent with training
training_labels_final = np.array(training_labels)
validation_labels_final = np.array(validation_labels)
"""
Explanation: Pad the sequences
End of explanation
"""
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
"""
Explanation: Create the model using an Embedding
End of explanation
"""
num_epochs = 20
history = model.fit(training_padded, training_labels_final, epochs=num_epochs,
validation_data=(validation_padded, validation_labels_final))
"""
Explanation: Train the model
End of explanation
"""
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
plot_graphs(history, "accuracy")
plot_graphs(history, "loss")
"""
Explanation: Plot the accuracy and loss
End of explanation
"""
# Write some new reviews
review1 = """I loved this movie"""
review2 = """that was the worst movie I've ever seen"""
review3 = """too much violence even for a Bond film"""
review4 = """a captivating recounting of a cherished myth"""
new_reviews = [review1, review2, review3, review4]
# Define a function to prepare the new reviews for use with a model
# and then use the model to predict the sentiment of the new reviews
def predict_review(model, reviews):
# Create the sequences
padding_type='post'
sample_sequences = tokenizer.texts_to_sequences(reviews)
reviews_padded = pad_sequences(sample_sequences, padding=padding_type,
maxlen=max_length)
classes = model.predict(reviews_padded)
for x in range(len(reviews_padded)):
print(reviews[x])
print(classes[x])
print('\n')
predict_review(model, new_reviews)
"""
Explanation: Write a function to predict the sentiment of reviews
End of explanation
"""
def fit_model_and_show_results (model, reviews):
model.summary()
history = model.fit(training_padded, training_labels_final, epochs=num_epochs,
validation_data=(validation_padded, validation_labels_final))
plot_graphs(history, "accuracy")
plot_graphs(history, "loss")
predict_review(model, reviews)
"""
Explanation: Define a function to train and show the results of models with different layers
End of explanation
"""
num_epochs = 30
model_cnn = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.Conv1D(16, 5, activation='relu'),
tf.keras.layers.GlobalMaxPooling1D(),
tf.keras.layers.Dense(1, activation='sigmoid')
])
# Default learning rate for the Adam optimizer is 0.001
# Let's slow the learning rate down by a factor of 10.
learning_rate = 0.0001
model_cnn.compile(loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(learning_rate),
metrics=['accuracy'])
fit_model_and_show_results(model_cnn, new_reviews)
"""
Explanation: Use a CNN
End of explanation
"""
num_epochs = 30
model_gru = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32)),
tf.keras.layers.Dense(1, activation='sigmoid')
])
learning_rate = 0.00003 # slower than the default learning rate
model_gru.compile(loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(learning_rate),
metrics=['accuracy'])
fit_model_and_show_results(model_gru, new_reviews)
"""
Explanation: Use a GRU
End of explanation
"""
num_epochs = 30
model_bidi_lstm = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(embedding_dim)),
tf.keras.layers.Dense(1, activation='sigmoid')
])
learning_rate = 0.00003
model_bidi_lstm.compile(loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(learning_rate),
metrics=['accuracy'])
fit_model_and_show_results(model_bidi_lstm, new_reviews)
"""
Explanation: Add a bidirectional LSTM
End of explanation
"""
num_epochs = 30
model_multiple_bidi_lstm = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(embedding_dim,
return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(embedding_dim)),
tf.keras.layers.Dense(1, activation='sigmoid')
])
learning_rate = 0.0003
model_multiple_bidi_lstm.compile(loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(learning_rate),
metrics=['accuracy'])
fit_model_and_show_results(model_multiple_bidi_lstm, new_reviews)
"""
Explanation: Use multiple bidirectional LSTMs
End of explanation
"""
# Write some new reviews
review1 = """I loved this movie"""
review2 = """that was the worst movie I've ever seen"""
review3 = """too much violence even for a Bond film"""
review4 = """a captivating recounting of a cherished myth"""
review5 = """I saw this movie yesterday and I was feeling low to start with,
but it was such a wonderful movie that it lifted my spirits and brightened
my day, you can\'t go wrong with a movie with Whoopi Goldberg in it."""
review6 = """I don\'t understand why it received an oscar recommendation
for best movie, it was long and boring"""
review7 = """the scenery was magnificent, the CGI of the dogs was so realistic I
thought they were played by real dogs even though they talked!"""
review8 = """The ending was so sad and yet so uplifting at the same time.
I'm looking for an excuse to see it again"""
review9 = """I had expected so much more from a movie made by the director
who made my most favorite movie ever, I was very disappointed in the tedious
story"""
review10 = "I wish I could watch this movie every day for the rest of my life"
more_reviews = [review1, review2, review3, review4, review5, review6, review7,
review8, review9, review10]
print("============================\n","Embeddings only:\n", "============================")
predict_review(model, more_reviews)
print("============================\n","With CNN\n", "============================")
predict_review(model_cnn, more_reviews)
print("===========================\n","With bidirectional GRU\n", "============================")
predict_review(model_gru, more_reviews)
print("===========================\n", "With a single bidirectional LSTM:\n", "===========================")
predict_review(model_bidi_lstm, more_reviews)
print("===========================\n", "With multiple bidirectional LSTM:\n", "==========================")
predict_review(model_multiple_bidi_lstm, more_reviews)
"""
Explanation: Try some more reviews
End of explanation
"""
|
vbalderdash/LMAsimulation | LMAsimulation_full.ipynb | mit | %pylab inline
import pyproj as proj4
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import datetime
import time
import simulation_functions as sf
# import read_logs
from mpl_toolkits.basemap import Basemap
from coordinateSystems import TangentPlaneCartesianSystem, GeographicSystem, MapProjection
from scipy.stats import norm
c0 = 3.0e8 # m/s
dt_rms = 23.e-9 # seconds
lma_digitizer_window = 40.0e-9 # seconds per sample
"""
Explanation: Monte Carlo simulation
Please cite V. C. Chmielewski and E. C. Bruning (2016), Lightning Mapping Array flash detection performance with variable receiver thresholds, J. Geophys. Res. Atmos., 121, 8600-8614, doi:10.1002/2016JD025159,
if any results from this model are presented.
Contact:
vanna.chmielewski@noaa.gov
End of explanation
"""
# import os
# # start_time = datetime.datetime(2014,5,26,2) #25 set
# # end_time = datetime.datetime(2014,5,26,3,50)
# useddir = '/Users/Vanna/Documents/logs/'
# exclude = np.array(['W','A',])
# days = np.array([start_time+datetime.timedelta(days=i) for i in range((end_time-start_time).days+1)])
# days_string = np.array([i.strftime("%y%m%d") for i in days])
# logs = pd.DataFrame()
# dir = os.listdir(useddir)
# for file in dir:
# if np.any(file[2:] == days_string) & np.all(exclude!=file[1]):
# print file
# logs = logs.combine_first(read_logs.parsing(useddir+file,T_set='True'))
# aves = logs[start_time:end_time].mean()
# aves = np.array(aves).reshape(4,len(aves)/4).T
"""
Explanation: Station coordinates and thresholds from a set of log files
Specify:
start time
end time
the directory holding the log files
any stations you wish to exclude from the analysis
End of explanation
"""
Network = 'grid_LMA' # name of network in the csv file
stations = pd.read_csv('network.csv') # network csv file with one or multiple networks
stations.set_index('network').loc[Network]
aves = np.array(stations.set_index('network').loc[Network])[:,:-1].astype('float')
"""
Explanation: Station coordinates from csv file
Input network title and csv file here
End of explanation
"""
center = (np.mean(aves[:,1]), np.mean(aves[:,2]), np.mean(aves[:,0]))
geo = GeographicSystem()
tanp = TangentPlaneCartesianSystem(center[0], center[1], center[2])
mapp = MapProjection
projl = MapProjection(projection='laea', lat_0=center[0], lon_0=center[1])
alt, lat, lon = aves[:,:3].T
stations_ecef = np.array(geo.toECEF(lon, lat, alt)).T
stations_local = tanp.toLocal(stations_ecef.T).T
center_ecef = np.array(geo.toECEF(center[1],center[0],center[2]))
ordered_threshs = aves[:,-1]
plt.scatter(stations_local[:,0]/1000., stations_local[:,1]/1000., c=aves[:,3])
plt.colorbar()
circle=plt.Circle((0,0),30,color='k',fill=False)
# plt.xlim(-80,80)
# plt.ylim(-80,80)
# fig = plt.gcf()
# fig.gca().add_artist(circle)
plt.show()
"""
Explanation: Setting up and checking station locations
End of explanation
"""
xmin, xmax, xint = -200001, 199999, 5000
ymin, ymax, yint = -200001, 199999, 5000
# alts = np.arange(500,20500,500.)
alts = np.array([7000])
initial_points = np.array(np.meshgrid(np.arange(xmin,xmax+xint,xint),
np.arange(ymin,ymax+yint,yint), alts))
x,y,z=initial_points.reshape((3,int(np.size(initial_points)/3)))
points2 = tanp.toLocal(np.array(projl.toECEF(x,y,z))).T
means = np.empty(np.shape(points2))
stds = np.empty(np.shape(points2))
misses= np.empty(np.shape(points2))
tanp_all = []
for i in range(len(aves[:,0])):
tanp_all = tanp_all + [TangentPlaneCartesianSystem(aves[i,1],aves[i,2],aves[i,0])]
"""
Explanation: Setting up grid
Input desired grid boundaries and interval here in meters from the center of the network (no point located over the center!)
End of explanation
"""
iterations=100
# # for r,theta,z errors and standard deviations and overall detection efficiency
for i in range(len(x)):
means[i], stds[i], misses[i] = sf.black_box(points2[i,0], points2[i,1], points2[i,2],
iterations,
stations_local,
ordered_threshs,
stations_ecef,center_ecef,tanp_all, c0,dt_rms,tanp,projl,
chi2_filter=5.,
min_stations=6,
just_rms=False
)
iterations=100
rms = np.empty(np.shape(points2))
# Just rmse values:
for i in range(len(x)):
rms[i] = sf.black_box(x[i], y[i], z[i], iterations,
stations_local,ordered_threshs,stations_ecef,center_ecef,
tanp_all,c0,dt_rms,tanp,projl,
chi2_filter=5.,min_stations=6,just_rms=True
)
means = (means.T.reshape(np.shape(initial_points)))
stds = (stds.T.reshape(np.shape(initial_points)))
misses = (misses.T.reshape(np.shape(initial_points)))
rms = (rms.T.reshape(np.shape(initial_points)))
means = np.ma.masked_where(np.isnan(means) , means)
stds = np.ma.masked_where(np.isnan(stds) , stds)
misses = np.ma.masked_where(np.isnan(misses), misses)
rms = np.ma.masked_where(np.isnan(rms), rms)
"""
Explanation: General calculations at grid points
Set number of iterations and solution requirements here
Note that if any source is not retrieved at enough stations for a solution, a RuntimeWarning will be raised by the following function, but this should not impact the final solutions.
End of explanation
"""
domain = 197.5*1000
sf.mapped_plot(rms[0,:,:,0]/1000.,
from_this=0,to_this=1.5,with_this='jet',
dont_forget=stations_local,
xmin=xmin,xmax=xmax,xint=xint,
ymin=ymin,ymax=ymax,yint=yint,location=center)
CS = plt.contour(np.arange(xmin,xmax+xint,xint)+xmax,
np.arange(ymin,ymax+yint,yint)+ymax,
rms[0,:,:,0]/1000., colors='k',levels=(0.05,0.1,0.5,1,5))
plt.clabel(CS, inline=1, fontsize=10)
plt.title('X RMS')
plt.show()
sf.mapped_plot(rms[1,:,:,0]/1000.,0,1.5,'jet',stations_local,xmin,xmax,xint,ymin,ymax,yint,center)
CS = plt.contour(np.arange(xmin,xmax+xint,xint)+xmax,
np.arange(ymin,ymax+yint,yint)+ymax,
rms[1,:,:,0]/1000., colors='k',levels=(0.05,0.1,0.5,1,5))
plt.clabel(CS, inline=1, fontsize=10)
plt.title('Y RMS')
plt.show()
sf.mapped_plot(rms[2,:,:,0]/1000.,0,1.5,'jet',stations_local,xmin,xmax,xint,ymin,ymax,yint,center)
CS = plt.contour(np.arange(xmin,xmax+xint,xint)+xmax,
np.arange(ymin,ymax+yint,yint)+ymax,
rms[2,:,:,0]/1000., colors='k',levels=(0.05,0.1,0.5,1,5))
plt.clabel(CS, inline=1, fontsize=10)
plt.title('Z RMS')
plt.show()
"""
Explanation: Average error plots
RMS Error plots of errors in (x, y, z) coordinates
End of explanation
"""
sf.mapped_plot(np.mean(means[0,:,:,:],axis=2)/1000.,-0.5,0.5,'PuOr',stations_local,xmin,xmax,xint,ymin,ymax,yint,center)
plt.title('Average Error')
plt.show()
sf.mapped_plot(np.mean(stds[0,:,:,:],axis=2)/1000.,0,3.5,'rainbow',stations_local,xmin,xmax,xint,ymin,ymax,yint,center)
plt.title('Standard Deviation')
plt.show()
"""
Explanation: Standard Errors in (r, theta, z) coordinates
Mean Range Error
End of explanation
"""
sf.mapped_plot(np.mean(means[2,:,:,:],axis=2)/1000.,-1,1,'PuOr',stations_local,xmin,xmax,xint,ymin,ymax,yint,center)
plt.title('Average Error')
plt.show()
sf.mapped_plot(np.mean(stds[2,:,:,:],axis=2)/1000.,0,7,'rainbow',stations_local,xmin,xmax,xint,ymin,ymax,yint,center)
plt.title('Standard Deviation')
plt.show()
"""
Explanation: Mean Altitude Error
End of explanation
"""
sf.mapped_plot(np.mean(means[1,:,:,:],axis=2),-0.005,0.005,'PuOr',stations_local,xmin,xmax,xint,ymin,ymax,yint,center)
plt.title('Average Error')
plt.show()
sf.mapped_plot(np.mean(stds[1,:,:,:],axis=2),0,0.05,'rainbow',stations_local,xmin,xmax,xint,ymin,ymax,yint,center)
plt.title('Standard Deviation')
plt.show()
"""
Explanation: Mean Azimuth Error
End of explanation
"""
# iterations=100 # Set if reading in from a file
xs = 1000./np.arange(10,1000,1.) # Theoretical source detection efficiency that corresponds with fde
fde = 100-np.load('fde.csv',fix_imports=True, encoding='latin1') # Corresponding flash DE
sde = 100-np.mean(misses[0,:,:,:], axis=2)*100./iterations # Calculated source detection efficiency
fde_a = np.empty_like(sde)
selects = sde == 100. # Put into the next lowest or equivalent flash DE from given source DE
fde_a[selects] = 100.
for i in range(len(xs)-1):
selects = (sde >= xs[1+i]) & (sde < xs[i])
fde_a[selects] = fde[i]
# Find center of 95% SOURCE detection efficiency
goods = (100-np.mean(misses[0,:,:,:], axis=2)*100./iterations)>95.
de_centery = np.mean(initial_points[1,:,:,0][goods])
de_centerx = np.mean(initial_points[0,:,:,0][goods])
print ("DE center location in km: ", de_centerx/1000., ", ", de_centery/1000.)
domain = 197.5*1000 # Relates back to domain size (xmax - xint/2) to shift map
maps = Basemap(projection='laea', lat_0=center[0], lon_0=center[1], width=domain*2, height=domain*2)
s = plt.pcolormesh(np.arange(xmin-xint/2.,xmax+3*xint/2.,xint)+domain,
np.arange(ymin-yint/2.,ymax+3*yint/2.,yint)+domain,
100-np.mean(misses[0,:,:,:], axis=2)*100./iterations,
cmap = 'jet_r')
s.set_clim(vmin=0,vmax=100)
plt.colorbar(label='Source Detection Efficiency')
CS = plt.contour(np.arange(xmin,xmax+xint,xint)+domain,
np.arange(ymin,ymax+yint,yint)+domain,
fde_a, colors='k',levels=(20,40,60,70,80,85,90,95,99))
plt.clabel(CS, inline=1, fontsize=10,fmt='%3.0f')
plt.scatter(stations_local[:,0]+domain, stations_local[:,1]+domain, color='k')
# plt.scatter(np.array([de_centerx+domain]), np.array([de_centery+domain]), color='r')
maps.drawstates()
plt.tight_layout()
plt.show()
"""
Explanation: Detection efficiency
End of explanation
"""
# # Full Levels
# sigmar = np.mean(stds[0,:,:,9:30],axis=2)
# sigmaa = np.mean(stds[1,:,:,9:30],axis=2)
# sde = 100-np.mean(misses[0,:,:,9:30], axis=2)*100./iterations # Calculated source detection efficiency
# One Level
sigmar = (stds[0,:,:,0])
sigmaa = (stds[1,:,:,0])
sde = (1-misses[0,:,:,0]/iterations)*100
xf = np.arange(xmin,xmax+xint,xint)/1000.
yf = np.arange(ymin,ymax+yint,yint)/1000.
xy = np.meshgrid(xf,yf)
ranges = (xy[0]**2+xy[1]**2)**0.5*1000.
fl_areas,fl_numbers = np.load('typical_flashes.csv',fix_imports=True, encoding='latin1')
areas = np.empty_like(sde)
numbers = np.empty_like(sde)
selects = sde == 100. # Put into the next lowest or equivalent flash DE from given source DE
areas[selects] = fl_areas[0]
numbers[selects] = fl_numbers[0]
for i in range(len(xs)-1):
selects = (sde >= xs[1+i]) & (sde < xs[i])
areas[selects] = fl_areas[i]
numbers[selects] = fl_numbers[i]
domain = (xmax-xint/2.)
maps = Basemap(projection='laea',lat_0=center[0],lon_0=center[1],width=domain*2,height=domain*2)
s = plt.pcolormesh(np.arange(xmin-xint/2.,xmax+3*xint/2.,xint)+domain,
np.arange(ymin-yint/2.,ymax+3*yint/2.,yint)+domain,
(areas), cmap = 'viridis_r')
s.set_clim(vmin=18,vmax=50)
plt.colorbar()
plt.scatter(stations_local[:,0]+domain, stations_local[:,1]+domain, color='k', s=0.5)
maps.drawstates()
circle=plt.Circle((domain,domain),100000,color='k',fill=False)
fig = plt.gcf()
fig.gca().add_artist(circle)
circle=plt.Circle((domain,domain),200000,color='k',fill=False)
fig.gca().add_artist(circle)
plt.title('Typical Flash area')
plt.tight_layout()
plt.show()
major_axis_est = norm.ppf(0.975**(1./(numbers*sde/100.)))*sigmar*2+2000*(areas/np.pi)**0.5
domain = (xmax-xint/2.)
maps = Basemap(projection='laea',lat_0=center[0],lon_0=center[1],width=domain*2,height=domain*2)
s = plt.pcolormesh(np.arange(xmin-xint/2.,xmax+3*xint/2.,xint)+domain,
np.arange(ymin-yint/2.,ymax+3*yint/2.,yint)+domain,
(major_axis_est)/(2000*(areas/np.pi)**0.5), cmap = 'viridis_r')
plt.colorbar(label='Ratio of Range Extent of 95% CI :\nExpected Range Extent')
plt.clim(1,3)
CS = plt.contour(np.arange(xmin,xmax+xint,xint)+domain,
np.arange(ymin,ymax+yint,yint)+domain,
major_axis_est/1000., colors='k',levels=(5,7.5,10,15,20))
plt.clabel(CS, inline=1, fontsize=10,fmt='%3.1f')
plt.scatter(stations_local[:,0]+domain, stations_local[:,1]+domain, color='k', s=0.5)
maps.drawstates()
circle=plt.Circle((domain,domain),100000,color='0.5',fill=False)
fig = plt.gcf()
fig.gca().add_artist(circle)
circle=plt.Circle((domain,domain),200000,color='0.5',fill=False)
fig.gca().add_artist(circle)
plt.title('Expected Flash-Size Errors along Major Axis')
plt.tight_layout()
plt.show()
minor_axis_est = 2.*ranges*np.tan(np.deg2rad(norm.ppf(0.975**(1./(numbers*sde/100.)))*sigmaa))+2000*(areas/np.pi)**0.5
domain = (xmax-xint/2.)
maps = Basemap(projection='laea',lat_0=center[0],lon_0=center[1],width=domain*2,height=domain*2)
s = plt.pcolormesh(np.arange(xmin-xint/2.,xmax+3*xint/2.,xint)+domain,
np.arange(ymin-yint/2.,ymax+3*yint/2.,yint)+domain,
(minor_axis_est)/(2000*(areas/np.pi)**0.5), cmap = 'viridis_r')
plt.colorbar()
CS = plt.contour(np.arange(xmin,xmax+xint,xint)+domain,
np.arange(ymin,ymax+yint,yint)+domain,
major_axis_est/1000., colors='k',levels=(5,10,15,20))
plt.clabel(CS, inline=1, fontsize=10,fmt='%3.0f')
plt.scatter(stations_local[:,0]+domain, stations_local[:,1]+domain, color='k', s=0.5)
maps.drawstates()
circle=plt.Circle((domain,domain),100000,color='k',fill=False)
fig = plt.gcf()
fig.gca().add_artist(circle)
circle=plt.Circle((domain,domain),200000,color='k',fill=False)
fig.gca().add_artist(circle)
plt.title('Expected Flash-Size Errors along Minor Axis')
plt.tight_layout()
plt.show()
"""
Explanation: Flash Distortion
End of explanation
"""
rs = np.arange(0,200000+xint,xint)
radial_ave = np.empty((len(rs)-1,3,len(alts)))
for i in range(len(rs)-1):
selects = ((initial_points[0,:,:,0]**2+initial_points[1,:,:,0]**2)**0.5>rs[i]
) & ((initial_points[0,:,:,0]**2+initial_points[1,:,:,0]**2)**0.5<=rs[i+1])
radial_ave[i] = np.mean(means[:,selects,:],axis=1)
rs_ecef = tanp.fromLocal(np.vstack((rs, np.zeros_like(rs), np.zeros_like(rs))))
s = plt.pcolormesh(rs/1000.,
np.hstack((alts-250.,np.max(alts)+250.))/1000.,
radial_ave[:,2,:].T/1000.)
plt.plot(rs/1000.,projl.fromECEF(rs_ecef[0], rs_ecef[1], rs_ecef[2])[2]/1000., color='0.8')
plt.clim(vmin=-0.25,vmax=2.5)
plt.ylim(0.25,20)
plt.xlabel('Range')
plt.ylabel('Altitude')
plt.colorbar(label='Average Altitude Error (km)')
plt.tight_layout()
plt.show()
"""
Explanation: Plotting radial average in the map projection
Must have multiple altitudes
End of explanation
"""
zs = np.arange(-6000,20000,1000)
radial_ave2 = np.empty((len(rs)-1,3,len(zs)))
for i in range(len(rs)-1):
for j in range(len(zs)-1):
selects = (((points2[:,1]**2+points2[:,0]**2)**0.5>rs[i]) &
((points2[:,1]**2+points2[:,0]**2)**0.5<=rs[i+1]) &
((points2[:,2])>zs[j]) & ((points2[:,2]<=zs[j+1])))
radial_ave2[i,:,j] = np.mean(means[:,selects.T.reshape(np.shape(initial_points[0]))],axis=1)
s = plt.pcolormesh(rs/1000.,
zs/1000.,
radial_ave2[:,2,:].T/1000.)
plt.clim(vmin=-0.25,vmax=2.5)
plt.xlabel('Radial Distance (km)')
plt.ylabel('Height (km)')
plt.colorbar(label='Average Altitude Error (km)')
plt.show()
"""
Explanation: Plotting radial average in the local tangent plane
End of explanation
"""
case_name = 'WTLMA_ave_6_1000'
# misses.dump('cases/%s/error_misses.csv' %(case_name))
# stds.dump( 'cases/%s/error_stds.csv' %(case_name))
# means.dump( 'cases/%s/error_means.csv' %(case_name))
stds = np.load('cases/%s/error_stds.csv' %(case_name), allow_pickle=True, fix_imports=True, encoding='latin1')
means = np.load('cases/%s/error_means.csv' %(case_name), allow_pickle=True, fix_imports=True, encoding='latin1')
misses = np.load('cases/%s/error_misses.csv' %(case_name), allow_pickle=True, fix_imports=True, encoding='latin1')
"""
Explanation: Reading/writing the arrays from/to a file
End of explanation
"""
|
tensorflow/recommenders | docs/examples/deep_recommenders.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
import os
import tempfile
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
plt.style.use('seaborn-whitegrid')
"""
Explanation: Building deep retrieval models
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/recommenders/examples/deep_recommenders"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/recommenders/blob/main/docs/examples/deep_recommenders.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/recommenders/blob/main/docs/examples/deep_recommenders.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/recommenders/docs/examples/deep_recommenders.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In the featurization tutorial we incorporated multiple features into our models, but the models consist of only an embedding layer. We can add more dense layers to our models to increase their expressive power.
In general, deeper models are capable of learning more complex patterns than shallower models. For example, our user model incorporates user ids and timestamps to model user preferences at a point in time. A shallow model (say, a single embedding layer) may only be able to learn the simplest relationships between those features and movies: a given movie is most popular around the time of its release, and a given user generally prefers horror movies to comedies. To capture more complex relationships, such as user preferences evolving over time, we may need a deeper model with multiple stacked dense layers.
Of course, complex models also have their disadvantages. The first is computational cost, as larger models require both more memory and more computation to fit and serve. The second is the requirement for more data: in general, more training data is needed to take advantage of deeper models. With more parameters, deep models might overfit or even simply memorize the training examples instead of learning a function that can generalize. Finally, training deeper models may be harder, and more care needs to be taken in choosing settings like regularization and learning rate.
Finding a good architecture for a real-world recommender system is a complex art, requiring good intuition and careful hyperparameter tuning. For example, factors such as the depth and width of the model, activation function, learning rate, and optimizer can radically change the performance of the model. Modelling choices are further complicated by the fact that good offline evaluation metrics may not correspond to good online performance, and that the choice of what to optimize for is often more critical than the choice of model itself.
Nevertheless, effort put into building and fine-tuning larger models often pays off. In this tutorial, we will illustrate how to build deep retrieval models using TensorFlow Recommenders. We'll do this by building progressively more complex models to see how this affects model performance.
Preliminaries
We first import the necessary packages.
End of explanation
"""
ratings = tfds.load("movielens/100k-ratings", split="train")
movies = tfds.load("movielens/100k-movies", split="train")
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
"timestamp": x["timestamp"],
})
movies = movies.map(lambda x: x["movie_title"])
"""
Explanation: In this tutorial we will use the models from the featurization tutorial to generate embeddings. Hence we will only be using the user id, timestamp, and movie title features.
End of explanation
"""
timestamps = np.concatenate(list(ratings.map(lambda x: x["timestamp"]).batch(100)))
max_timestamp = timestamps.max()
min_timestamp = timestamps.min()
timestamp_buckets = np.linspace(
min_timestamp, max_timestamp, num=1000,
)
unique_movie_titles = np.unique(np.concatenate(list(movies.batch(1000))))
unique_user_ids = np.unique(np.concatenate(list(ratings.batch(1_000).map(
lambda x: x["user_id"]))))
"""
Explanation: We also do some housekeeping to prepare feature vocabularies.
End of explanation
"""
class UserModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.user_embedding = tf.keras.Sequential([
tf.keras.layers.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
tf.keras.layers.Embedding(len(unique_user_ids) + 1, 32),
])
self.timestamp_embedding = tf.keras.Sequential([
tf.keras.layers.Discretization(timestamp_buckets.tolist()),
tf.keras.layers.Embedding(len(timestamp_buckets) + 1, 32),
])
self.normalized_timestamp = tf.keras.layers.Normalization(
axis=None
)
self.normalized_timestamp.adapt(timestamps)
def call(self, inputs):
# Take the input dictionary, pass it through each input layer,
# and concatenate the result.
return tf.concat([
self.user_embedding(inputs["user_id"]),
self.timestamp_embedding(inputs["timestamp"]),
tf.reshape(self.normalized_timestamp(inputs["timestamp"]), (-1, 1)),
], axis=1)
"""
Explanation: Model definition
Query model
We start with the user model defined in the featurization tutorial as the first layer of our model, tasked with converting raw input examples into feature embeddings.
End of explanation
"""
class QueryModel(tf.keras.Model):
"""Model for encoding user queries."""
def __init__(self, layer_sizes):
"""Model for encoding user queries.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
"""
super().__init__()
# We first use the user model for generating embeddings.
self.embedding_model = UserModel()
# Then construct the layers.
self.dense_layers = tf.keras.Sequential()
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
"""
Explanation: Defining deeper models will require us to stack more layers on top of this first input. A progressively narrower stack of layers, separated by an activation function, is a common pattern:
+----------------------+
| 128 x 64 |
+----------------------+
| relu
+--------------------------+
| 256 x 128 |
+--------------------------+
| relu
+------------------------------+
| ... x 256 |
+------------------------------+
Since the expressive power of deep linear models is no greater than that of shallow linear models, we use ReLU activations for all but the last hidden layer. The final hidden layer does not use any activation function: using an activation function would limit the output space of the final embeddings and might negatively impact the performance of the model. For instance, if ReLUs are used in the projection layer, all components in the output embedding would be non-negative.
We're going to try something similar here. To make experimentation with different depths easy, let's define a model whose depth (and width) is defined by a set of constructor parameters.
End of explanation
"""
class MovieModel(tf.keras.Model):
def __init__(self):
super().__init__()
max_tokens = 10_000
self.title_embedding = tf.keras.Sequential([
tf.keras.layers.StringLookup(
vocabulary=unique_movie_titles, mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, 32)
])
self.title_vectorizer = tf.keras.layers.TextVectorization(
max_tokens=max_tokens)
self.title_text_embedding = tf.keras.Sequential([
self.title_vectorizer,
tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True),
tf.keras.layers.GlobalAveragePooling1D(),
])
self.title_vectorizer.adapt(movies)
def call(self, titles):
return tf.concat([
self.title_embedding(titles),
self.title_text_embedding(titles),
], axis=1)
"""
Explanation: The layer_sizes parameter gives us the depth and width of the model. We can vary it to experiment with shallower or deeper models.
Candidate model
We can adopt the same approach for the movie model. Again, we start with the MovieModel from the featurization tutorial:
End of explanation
"""
class CandidateModel(tf.keras.Model):
"""Model for encoding movies."""
def __init__(self, layer_sizes):
"""Model for encoding movies.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
"""
super().__init__()
self.embedding_model = MovieModel()
# Then construct the layers.
self.dense_layers = tf.keras.Sequential()
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
"""
Explanation: And expand it with hidden layers:
End of explanation
"""
class MovielensModel(tfrs.models.Model):
def __init__(self, layer_sizes):
super().__init__()
self.query_model = QueryModel(layer_sizes)
self.candidate_model = CandidateModel(layer_sizes)
self.task = tfrs.tasks.Retrieval(
metrics=tfrs.metrics.FactorizedTopK(
candidates=movies.batch(128).map(self.candidate_model),
),
)
def compute_loss(self, features, training=False):
# We only pass the user id and timestamp features into the query model. This
# is to ensure that the training inputs would have the same keys as the
# query inputs. Otherwise the discrepancy in input structure would cause an
# error when loading the query model after saving it.
query_embeddings = self.query_model({
"user_id": features["user_id"],
"timestamp": features["timestamp"],
})
movie_embeddings = self.candidate_model(features["movie_title"])
return self.task(
query_embeddings, movie_embeddings, compute_metrics=not training)
"""
Explanation: Combined model
With both QueryModel and CandidateModel defined, we can put together a combined model and implement our loss and metrics logic. To make things simple, we'll enforce that the model structure is the same across the query and candidate models.
End of explanation
"""
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
cached_train = train.shuffle(100_000).batch(2048)
cached_test = test.batch(4096).cache()
"""
Explanation: Training the model
Prepare the data
We first split the data into a training set and a testing set.
End of explanation
"""
num_epochs = 300
model = MovielensModel([32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
one_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
"""
Explanation: Shallow model
We're ready to try out our first, shallow, model!
End of explanation
"""
model = MovielensModel([64, 32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
two_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
"""
Explanation: This gives us a top-100 accuracy of around 0.27. We can use this as a reference point for evaluating deeper models.
Deeper model
What about a deeper model with two layers?
End of explanation
"""
num_validation_runs = len(one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"])
epochs = [(x + 1)* 5 for x in range(num_validation_runs)]
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
"""
Explanation: The accuracy here is 0.29, quite a bit better than the shallow model.
We can plot the validation accuracy curves to illustrate this:
End of explanation
"""
model = MovielensModel([128, 64, 32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
three_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = three_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
"""
Explanation: Even early on in the training, the larger model has a clear and stable lead over the shallow model, suggesting that adding depth helps the model capture more nuanced relationships in the data.
However, even deeper models are not necessarily better. The following model extends the depth to three layers:
End of explanation
"""
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.plot(epochs, three_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="3 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
"""
Explanation: In fact, we don't see improvement over the shallow model:
End of explanation
"""
|
kitu2007/dl_class | autoencoder/Simple_Autoencoder.ipynb | mit | %matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
"""
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
"""
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
"""
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
"""
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
inputs_ = tf.placeholder(tf.float32,shape=(None,784))
targets_ = tf.placeholder(tf.float32,shape=(None,784))
# Output of hidden layer
encoded = tf.contrib.layers.fully_connected(inputs_,encoding_dim)
logits = tf.contrib.layers.fully_connected(encoded,784, activation_fn=None)
# Output layer logits
# Sigmoid output from logits
decoded = tf.sigmoid(logits)
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits,labels=targets_)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
"""
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss; there is a convenient TensorFlow function for this, tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
"""
# Create the session
sess = tf.Session()
"""
Explanation: Training
End of explanation
"""
epochs = 2
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
"""
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
"""
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
comp1 = compressed/np.max(compressed)
fig1, axes1 = plt.subplots(nrows=1, ncols=10, sharex=True, sharey=True, figsize=(100,4))
for i,ax in enumerate(axes1):
ax.imshow(comp1[i].reshape(1,32), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
"""
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation
"""
|
211217613/python_meetup | Word frequency python session.ipynb | unlicense | # Let's see how many lines are in the text file
# We can use the '!' special character to run Linux commands inside of our notebook
!wc -l test.txt
# Now let's see how many words
!wc -w test.txt
import nltk
from nltk import tokenize
# Let's open the file so we can access the ASCII contents
# fd stands for file descriptor but we can use whatever name we want
# the open command returns a file descriptor object, which itself isn't very useful
# so we need to read the entire contents so we have a text string we can parse
# advanced: use a context manager with open() as x:
fd = open('test.txt', 'r')
text = fd.read()
text
"""
Explanation: Meetup 1
We're going to parse texts for their most frequently used words.
End of explanation
"""
# import the regular expression module
import re
"""
Explanation: We want to "tokenize" the text and discard "stopwords" like 'a', 'the', 'in'. These words aren't relevant for our analysis.
To tokenize our text we're going to use regular expressions. Regular expressions are cool and you should try to use them whenever you can. To use regular expressions we need to import the regular expression module re. Let's do this in the next cell!
End of explanation
"""
match_words = '\w+'
tokens = re.findall(match_words, text)
tokens[0:9]
# We can also use nltk to accomplish the same thing
# from nltk.tokenize import RegexpTokenizer
# tokenizer = RegexpTokenizer('\w+')
# tokenizer.tokenize(text)
"""
Explanation: We want to tokenize words. We will use the \w+ regular expression to match all the words.
Let's break this down:
- \w will match any alphanumeric or underscore character
- The + quantifier matches one or more occurrences of the preceding pattern, so \w+ matches whole runs of word characters
End of explanation
"""
words = []
for word in tokens:
words.append(word.lower())
words[0:8]
"""
Explanation: That was the easy part..... We want all the data (text) to be "normalized". The word 'Linear' is different than the word 'linear', but for our case it shouldn't be counted twice.
Let's create a Python list container/data structure to store all of our words. For a more in depth look at Python lists and how to use them efficiently take a look at .....
End of explanation
"""
#Here we want a list of common stopwords but we need to download them first.
nltk.download('stopwords')
stop_words = nltk.corpus.stopwords.words('english')
stop_words
"""
Explanation: Now we must clean the data yet more. It's like when you think you've cleaned your room but your mom tells you it ain't that clean yet.
End of explanation
"""
words_nsw = []
for w in words:
if w not in stop_words:
words_nsw.append(w)
words_nsw[0:11]
"""
Explanation: Now we have a Python list of stop words and a Python list of words in our text. We want to cross-reference the tokens with the stop words and save those in a new list. Let's do that.
End of explanation
"""
# Let's import a graphing and data visualization library
import matplotlib.pyplot as plt
# Let's tell jupyter notebook to display images inside our notebook
%matplotlib inline
freq_dist = nltk.FreqDist(words_nsw)
freq_dist.plot(30)
"""
Explanation: Now comes the real fun stuff. Lets plot the word frequency histogram with two lines of actual code.
End of explanation
"""
|
VirusTotal/vt-py | examples/jupyter/ransomware_report_usecases1.ipynb | apache-2.0 | #@markdown Please, insert your VT API Key*:
API_KEY = '' #@param {type: "string"}
#@markdown **The API key should have Premium permissions, otherwise some of the use cases might not provide the expected results.*
#@markdown
"""
Explanation: Jupyter Notebook - Ransomware report use cases 1
Copyright © 2021 Google
Welcome to our Ransomware in a Global Context Jupyter notebook!
You can find this and other very interesting posts in our blog.
End of explanation
"""
!pip install vt-py nest_asyncio
"""
Explanation: ---
End of explanation
"""
import base64
import hashlib
import json
import requests
QUERIES = [
"engines:gandcrab fs:2020-02-01+ fs:2020-05-01- (type:peexe or type:pedll) tag:exploit (have:compressed_parents OR have:execution_parents OR have:pcap_parents OR have:email_parents)",
"engines:gandcrab fs:2020-02-01+ fs:2020-05-01- (type:peexe or type:pedll) have:in_the_wild"
]
RELATIONSHIPS = ["itw_ips", "itw_urls", "itw_domains", "compressed_parents", "execution_parents", "pcap_parents", "email_parents"]
separator = ","
RELATIONSHIPS_URL = separator.join(RELATIONSHIPS)
detected_domains = {}
detected_ips = {}
detected_urls = {}
detected_files = {}
analyzed_objects = []
def get_search_results(query, vt_client):
"""Execute the search and return the results."""
url = "/intelligence/search"
results = vt_client.iterator(url, params={"query": query, "relationships": RELATIONSHIPS_URL})
return results
def analyse_observable(observable, observable_type, vt_client):
if observable not in analyzed_objects:
observable_report = get_observable_report(observable,observable_type, vt_client)
if observable_report:
extract_iocs(observable, observable_type, vt_client, observable_report)
else:
extract_iocs(observable, observable_type, vt_client)
def get_observable_report(observable, observable_type, vt_client):
"""Get the observable's intelligence report from VirusTotal."""
endpoint = {"url": "urls", "domain": "domains", "ip_address": "ip_addresses", "file": "files"}
if observable_type == "url":
observable = base64.urlsafe_b64encode(observable.encode()).decode().strip("=")
endpoint = endpoint[observable_type]
if observable_type == "file":
results = vt_client.get_object(f"/{endpoint}/{observable}?relationships={RELATIONSHIPS_URL}")
else:
results = vt_client.get_object(f"/{endpoint}/{observable}?relationships=votes")
return results
def add_observable(iocs_dict, observable, positives=False):
"""Check if the observable was already seen before adding it into the final report."""
if observable not in iocs_dict:
iocs_dict[observable] = {}
iocs_dict[observable]["positives"] = positives
iocs_dict[observable]["relatives"] = 1
else:
iocs_dict[observable]["relatives"] += 1
if observable not in analyzed_objects:
analyzed_objects.append(observable)
return iocs_dict
def extract_iocs(observable, observable_type, vt_client, observable_report=False):
"""Add the malicious relationships into the list of detected ovservables."""
global detected_ips
global detected_urls
global detected_domains
global detected_files
if observable_report:
positives = observable_report.last_analysis_stats["malicious"]
else:
try:
positives = detected_files[observable]["positives"]
except:
positives = False
if observable_type == "ip_address":
detected_ips = add_observable(detected_ips, observable, positives)
elif observable_type == "url":
detected_urls = add_observable(detected_urls, observable, positives)
elif observable_type == "domain":
detected_domains = add_observable(detected_domains, observable, positives)
else:
detected_files = add_observable(detected_files, observable, positives)
if detected_files[observable]["relatives"] == 1:
search_and_hunt(vt_client, observable_report)
return
def search_and_hunt(vt_client, observable_report=False):
"""Iterate over the queries and get matches.
Then, analyze the relationships of those matches.
"""
if observable_report:
results = observable_report
for relationship in RELATIONSHIPS:
if results.relationships[relationship]["data"]:
for hit in results.relationships[relationship]["data"]:
observable_type = hit["type"]
observable = hit["id"]
if observable_type == "url":
observable = hit["context_attributes"]["url"]
analyse_observable(observable, observable_type, vt_client)
else:
for query in QUERIES:
results = get_search_results(query, vt_client)
for result in results:
match = result.id
for relationship in RELATIONSHIPS:
if result.relationships[relationship]["data"]:
for hit in result.relationships[relationship]["data"]:
observable_type = hit["type"]
observable = hit["id"]
if observable_type == "url":
observable = hit["context_attributes"]["url"]
analyse_observable(observable, observable_type, vt_client)
def print_report():
"""Iterate over the detected observables and print the results."""
def print_header():
row = ["Positives","Relatives","VT Link","Observable"]
print("{: ^15} {: <15} {: ^20} {: >100}".format(*row))
print("_"*200)
def print_dict(d,type):
d2 = {}
for item in d:
d2[item] = d[item]["relatives"]
top_view = [ (v,k) for k,v in d2.items() ]
top_view.sort(reverse=True)
for items in top_view:
observable = items[1]
if type == "url":
encoded_url = hashlib.sha256(items[1].encode()).hexdigest()
row = [d[observable]["positives"],d[observable]["relatives"],"https://www.virustotal.com/gui/" + type + "/"+encoded_url,observable]
else:
row = [d[observable]["positives"],d[observable]["relatives"],"https://www.virustotal.com/gui/" + type + "/"+observable,observable]
print("{: ^10} {:^17} {: <110} {: <70}".format(*row))
print("\n\tDISTRIBUTION VECTORS REPORT\n")
print("\n#1: FILES\n")
print_header()
print_dict(detected_files, "file")
print("\n##2: DOMAINS\n")
print_header()
print_dict(detected_domains, "domain")
print("\n#3: URLS\n")
print_header()
print_dict(detected_urls, "url")
print("\n#4: IP ADDRESSES\n")
print_header()
print_dict(detected_ips, "ip-address")
print("\n")
def main():
vt_client = vt.Client(API_KEY)
search_and_hunt(vt_client)
print_report()
main()
"""
Explanation: Use Case: Distribution vectors extraction
Given a couple of searches like these ones:
engines:gandcrab fs:2020-02-01+ fs:2020-05-01- (type:peexe or type:pedll) tag:exploit (have:compressed_parents OR have:execution_parents OR have:pcap_parents OR have:email_parents)
engines:gandcrab fs:2020-02-01+ fs:2020-05-01- (type:peexe or type:pedll) have:in_the_wild
You can easily extract the distribution vectors of those matches thanks to the different relationships related to those observables. More concretely, we are going to focus on these relationships:
itw_ips
itw_urls
itw_domains
compressed_parents
execution_parents
pcap_parents
email_parents
Please note that as the search is looking for old files, the retrospection's limitation that you might have can affect to the number of results that this report provides.
Workflow:
Script:
End of explanation
"""
import nest_asyncio
nest_asyncio.apply()
import base64
import requests
import vt
QUERIES = [
"engines:gandcrab fs:2020-02-01+ fs:2020-05-01- (type:peexe or type:pedll) tag:exploit have:behaviour_network"
]
RELATIONSHIPS = ["behaviours"]
separator = ","
RELATIONSHIPS_URL = separator.join(RELATIONSHIPS)
extractions = {}
verdicts = {}
analyzed_objects = []
def get_search_results(query, vt_client):
"""Execute the search and return the results."""
url = "/intelligence/search"
results = vt_client.iterator(url, params={"query": query, "relationships": RELATIONSHIPS_URL})
return results
def analyse_observable(observable, observable_type, vt_client):
if observable not in analyzed_objects:
analyzed_objects.append(observable)
observable_report = get_observable_report(observable, observable_type, vt_client)
if observable_report:
extract_iocs(observable, observable_report, vt_client)
def get_observable_report(observable, observable_type, vt_client):
"""Get the observable's intelligence report from VirusTotal."""
endpoint = {"file_behaviour": "file_behaviours", "ip_address": "ip_addresses"}
endpoint = endpoint[observable_type]
results = vt_client.get_object(f"/{endpoint}/{observable}")
return results
def extract_iocs(observable, observable_report, vt_client):
"""Add the malicious relationships into the list of detected ovservables."""
global extractions
global verdicts
if hasattr(observable_report, "ip_traffic"):
comms = observable_report.ip_traffic
for comm in comms:
try:
protocol = comm["transport_layer_protocol"]
            except KeyError:
protocol = "Unknown"
ip = comm["destination_ip"]
port = comm["destination_port"]
hash = observable[:64]
if hash not in extractions:
extractions[hash] = {"ports":[], "ips":[], "protocols":[]}
if port not in extractions[hash]["ports"]:
extractions[hash]["ports"].append(port)
verdicts[port] = "N/A"
if ip not in extractions[hash]["ips"]:
extractions[hash]["ips"].append(ip)
if "DNS" in ip:
positives = "N/A"
else:
observable_report = get_observable_report(ip, "ip_address", vt_client)
positives = observable_report.last_analysis_stats["malicious"]
verdicts[ip] = positives
if protocol not in extractions[hash]["protocols"]:
extractions[hash]["protocols"].append(protocol)
verdicts[protocol] = "N/A"
return
def search_and_hunt(vt_client):
for query in QUERIES:
results = get_search_results(query, vt_client)
for result in results:
match = result.id
for relationship in RELATIONSHIPS:
if result.relationships[relationship]["data"]:
for hit in result.relationships[relationship]["data"]:
observable_type = hit["type"]
observable = hit["id"]
analyse_observable(observable, observable_type, vt_client)
def print_report():
elements = ["ports", "ips", "protocols"]
for e in elements:
get_top_list(e)
def get_top_list(element):
e_list = []
for hash in extractions:
e_list = list(set(e_list) | set(extractions[hash][element]))
e_top = {}
for hash in extractions:
for e in e_list:
if e not in e_top:
e_top[e] = 0
if e in extractions[hash][element]:
e_top[e] += 1
print("\nTOP " + element + "\n" + "_"*100 + "\n")
top_view = [ (v,k) for k,v in e_top.items() ]
top_view.sort(reverse=True)
for matches,observable in top_view:
row = [str(verdicts[observable]), observable, "Number of matches: ", matches]
print("\tPositives: {: >8} {: ^20} {: >20} {: >3}".format(*row))
def main():
vt_client = vt.Client(API_KEY)
search_and_hunt(vt_client)
print_report()
main()
"""
Explanation: Use Case: Ports and IP extraction
In this use case we are going to focus on preventing lateral movement.
To do this, we will make use of the behavioural network reports.
Script:
End of explanation
"""
#@markdown
VTI_SEARCH = 'engines:gandcrab fs:2020-02-01+ fs:2020-05-01- (type:peexe or type:pedll) tag:exploit' #@param {type: "string"}
#@markdown
"""
Explanation: Use Case: Geographical Distribution
In this use case we will iterate over all the matches of our initial search:
End of explanation
"""
import nest_asyncio
nest_asyncio.apply()
import json
import requests
import vt
QUERIES = [
VTI_SEARCH
]
RELATIONSHIPS = ["submissions"]
separator = ","
RELATIONSHIPS_URL = separator.join(RELATIONSHIPS)
countries = {}
cities = {}
def get_submission(id, vt_client):
results = vt_client.get_object(f"/submissions/{id}")
interface = results.interface
if interface == "email":
return False, False, interface
country = results.country
if country == "ZZ":
city = country
else:
city = results.city
return country, city, interface
def get_search_results(query, vt_client):
"""Execute the search and return the results."""
url = "/intelligence/search"
results = vt_client.iterator(url, params={"query": query, "relationships": RELATIONSHIPS_URL})
return results
def print_report():
print("\nTOP targeted countries\n" + "_"*100 + "\n")
countries_view = [ (v,k) for k,v in countries.items() ]
countries_view.sort(reverse=True)
for submission,country in countries_view:
row = [country,"Number of submissions: ",submission]
print("{: >15} {: >25} {: >2}".format(*row))
print("\nTOP targeted cities\n" + "_"*100 + "\n")
cities_view = [ (v,k) for k,v in cities.items() ]
cities_view.sort(reverse=True)
for submission,city in cities_view:
row = [city,"Number of submissions: ",submission]
print("{: >35} {: >25} {: >2}".format(*row))
def search_and_hunt(vt_client):
global countries
global cities
for query in QUERIES:
results = get_search_results(query, vt_client)
for result in results:
positives = result.last_analysis_stats["malicious"]
for hit in result.relationships["submissions"]["data"]:
submission_id = hit["id"]
country,city,interface = get_submission(submission_id, vt_client)
if interface != "email":
if country not in countries:
countries[country] = 0
countries[country] += 1
if city not in cities:
cities[city] = 0
cities[city] += 1
def main():
vt_client = vt.Client(API_KEY)
search_and_hunt(vt_client)
print_report()
main()
"""
Explanation: The difference now is that we will look at the Submissions tab to identify how this malware has spread around the world.
Script:
End of explanation
"""
#@markdown
VTI_SEARCH = 'engines:gandcrab fs:2020-02-01+ fs:2020-05-01- (type:peexe or type:pedll) tag:exploit' #@param {type: "string"}
#@markdown
"""
Explanation: Use Case: TOP vulnerabilities
In this use case we are going to extract the top exploited vulnerabilities given an intelligence search like the one below:
End of explanation
"""
import nest_asyncio
nest_asyncio.apply()
import json
import requests
import vt
QUERIES = [
VTI_SEARCH
]
cve_tags = {}
def get_search_results(query, vt_client):
"""Execute the search and return the results."""
url = "/intelligence/search"
results = vt_client.iterator(url, params={"query": query})
return results
def search_and_hunt(vt_client):
global cve_tags
for query in QUERIES:
results = get_search_results(query, vt_client)
for result in results:
for tag in getattr(result, "tags", []):
if "cve" in tag:
if tag not in cve_tags:
cve_tags[tag] = 0
cve_tags[tag] += 1
def print_report():
print("\nTOP Vulnerabilities" + "\n" + "_"*100 + "\n")
top_view = [ (v,k) for k,v in cve_tags.items() ]
top_view.sort(reverse=True)
for v,k in top_view:
row = [k,"Number of matches: ",v]
print("{: >15} {: >25} {: >5}".format(*row))
def main():
vt_client = vt.Client(API_KEY)
search_and_hunt(vt_client)
print_report()
main()
"""
Explanation: Script:
End of explanation
"""
|
nproctor/phys202-2015-work | assignments/assignment04/TheoryAndPracticeEx01.ipynb | mit | from IPython.display import Image
"""
Explanation: Theory and Practice of Visualization Exercise 1
Imports
End of explanation
"""
# Add your filename and uncomment the following line:
Image(filename='graphie.JPG')
"""
Explanation: Graphical excellence and integrity
Find a data-focused visualization on one of the following websites that is a positive example of the principles that Tufte describes in The Visual Display of Quantitative Information.
Vox
Upshot
538
BuzzFeed
Upload the image for the visualization to this directory and display the image inline in this notebook.
End of explanation
"""
|
ymero/pyDataScienceToolkits_Base | Visualization/(3)special_curves_plot.ipynb | mit | %matplotlib inline
import numpy as np
from matplotlib.pyplot import plot
from matplotlib.pyplot import show
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
"""
Explanation: Contents
Lissajous curves --- drawn with the standard trigonometric functions
Plotting a square wave --- represented as an infinite Fourier series
Plotting sawtooth and triangle waves
End of explanation
"""
# For simplicity, let A and B be 1
t = np.linspace(-np.pi, np.pi, 201)
a = 9
b = 8
x = np.sin(a*t + np.pi/2)
y = np.sin(b*t)
plot(x, y)
show()
def lissajous(a, b):
t = np.linspace(-np.pi, np.pi, 201)
x = np.sin(a*t + np.pi/2)
y = np.sin(b*t)
return x, y
# matplotlib.gridspec.GridSpecBase
# specifies the positions of subplots within the figure
gs = gridspec.GridSpec(3,3)
fig = plt.figure()
ax = []
for a in range(3):
    for b in range(3):
        ax.append(fig.add_subplot(gs[a,b]))
        a1 = a + 6
        b1 = b + 6
        x, y = lissajous(a1, b1)
        ax[-1].set_title('a=%d,b=%d' % (a1,b1))
        ax[-1].plot(x, y)
# adjust spacing so the subplots fit within the figure
fig.tight_layout()
show()
"""
Explanation: 1. Lissajous curves
In NumPy, all the standard trigonometric functions such as sin, cos, and tan have corresponding universal functions (ufuncs). The Lissajous curve is an interesting application of these trigonometric functions.
A Lissajous curve is defined by the following parametric equations:
- x = A sin(at + π/2)
- y = B sin(bt)
End of explanation
"""
Latex(r"$\sum_{k=1}^\infty\frac{4\sin((2k-1)t)}{(2k-1)\pi}$")
t = np.linspace(-np.pi, np.pi, 201)
k = np.arange(1,99)
k = 2*k - 1
f = np.zeros_like(t)
for i in range(len(t)):
f[i] = np.sum(np.sin(k * t[i])/k)
f = (4/np.pi) * f
plot(t, f)
show()
"""
Explanation: 2. Plotting a square wave
A square wave can be approximated as a superposition of multiple sine waves. In fact, any square-wave signal can be represented by an infinite Fourier series.
End of explanation
"""
# infinite-series expression for the sawtooth wave
from IPython.display import Latex
Latex(r"$\sum_{k=1}^\infty\frac{-2\sin(2\pi kt)}{k\pi}$")
t = np.linspace(-np.pi, np.pi, 201)
k = np.arange(1,99)
f = np.zeros_like(t)
for i in range(len(t)):
f[i] = np.sum(np.sin(2*np.pi*k * t[i])/k)
f = (-2/np.pi) * f
plot(t, f)
show()
plot(t, np.abs(f),c='g',lw=2.0)
show()
"""
Explanation: Sawtooth and triangle waves
Sawtooth and triangle waves are also common waveforms. Like the square wave, they can be represented as infinite Fourier series. Taking the absolute value of a sawtooth wave yields a triangle wave.
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/notebook_eleves/2017-2018/dimensions_reduction.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
import matplotlib.pyplot as plt # plotting graphs
import numpy as np # numerical array handling
import pandas as pd
from sklearn import datasets # classic datasets
from sklearn import preprocessing # data normalization
from sklearn import decomposition # PCA and NMF
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis # LDA
np.random.seed(2017) # for reproducible results
"""
Explanation: Dimensionality reduction
We may want to reduce the number of dimensions of a dataset:
to compress it: this reduces the volume of information to store and, by the same token, the running time of a learning algorithm (since the space to explore is smaller)
to keep only the discriminative features and thereby avoid overfitting (i.e. learning the noise in the data)
End of explanation
"""
iris = datasets.load_iris()
X_iris = iris.data
y_iris = iris.target
print("Dimensions de l'espace de départ : {}".format(X_iris.shape[1]))
print("Représentation des données dans ces dimensions :")
plt.figure(figsize=(12,4))
plt.subplot(1, 2, 1)
plt.title("Dimensions 0 et 1:")
plt.scatter(X_iris[:, 0], X_iris[:, 1], c=y_iris)
plt.subplot(1, 2, 2)
plt.title("Dimensions 2 et 3:")
plt.scatter(X_iris[:, 2], X_iris[:, 3], c=y_iris)
plt.plot([1,7],[0.05,2.5])
plt.show()
"""
Explanation: 1. PCA - Principal Component Analysis
1.1 Algorithm
Principal component analysis, for numerical data in n dimensions, is an unsupervised algorithm that identifies the directions of decreasing variance and performs a change of basis to keep only the k dimensions of greatest variance.
It consists of the following steps:
Optional: normalize the data (important if, for example, the variables were not measured on the same scales)
Build the covariance matrix of the variables:
$\Sigma = \frac{1}{n-1}\sum_{i=1}^{n}{(x_i - \bar{x})(x_i - \bar{x})'}$
Find the eigenvalues $\lambda_i$ and eigenvectors $v_i$:
$\Sigma v_i = \lambda_iv_i$; these eigenvectors form an orthogonal basis of the data space (being eigenvectors of a symmetric matrix, which we assume has rank n)
Sort the eigenvalues (and their associated vectors) in decreasing order: ${\lambda_{(n)}, \lambda_{(n-1)}...\lambda_{(1)} }$ where $\lambda_{(i)}$ is the i-th variance in increasing order
Keep only the first k ($k \leqslant n$) vectors: ${v_{(n)}, v_{(n-1)}...v_{(n-k+1)} }$
Build the projection matrix onto the space spanned by these vectors (a change of basis if n=k)
Project the original data into this k-dimensional space
All of this can of course be implemented from scratch with NumPy, but here we will use scikit-learn to keep the implementation short and focus on visualizing the results.
Cf. https://fr.wikipedia.org/wiki/Analyse_en_composantes_principales and http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html
1.2 Implementation with scikit-learn
1.2.1 Data
We will start, classically, from the Iris dataset (classification of 3 flower species based on some of their measurements):
End of explanation
"""
# Keep all the components for now
# We can always drop some later, since they will be sorted by significance
pca = decomposition.PCA(n_components=4)
X_iris_PCA = pca.fit(X_iris).transform(X_iris)
def graph_acp2(X_PC2, y):
plt.figure(figsize=(15,4))
plt.subplot(1, 3, 1)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.scatter(X_PC2[:, 0], X_PC2[:, 1], c=y)
plt.subplot(1, 3, 2)
plt.title("Dimension de la plus grande variance :")
plt.scatter(X_PC2[:, 0], np.ones(X_PC2.shape[0]), c=y)
plt.subplot(1, 3, 3)
plt.title("Dimension de 2nde variance :")
plt.scatter(X_PC2[:, 1], np.ones(X_PC2.shape[0]), c=y)
plt.show()
graph_acp2(X_iris_PCA, y_iris)
"""
Explanation: 1.2.2 A first PCA example
Graphically, the last 2 dimensions appear highly correlated and therefore redundant. For strictly classification purposes, we could almost make do with the dimension indicated by the blue line to correctly discriminate the 3 flower types (we will see later that this is a special case that does not generalize).
Let us run a PCA with scikit-learn, performing a change of basis that keeps all 4 dimensions, to illustrate their differences:
End of explanation
"""
X11 = np.random.rand(30)*10
X21 = X11 + 1
X12 = np.random.rand(20)*10
X22 = X12 + 2
X = np.array([np.concatenate((X11,X12)),
np.concatenate((X21,X22))]).T
y = np.concatenate((np.zeros(30), np.ones(20)))
X = preprocessing.scale(X, with_mean=True, with_std=True)
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.show()
pca = decomposition.PCA(n_components=2)
X_PC2 = pca.fit(X).transform(X)
graph_acp2(X_PC2, y)
"""
Explanation: 1.3 Quelques réserves
Comme évoqué dans la présentation, il est à noter qu'il s'agit d'un algorithme non supervisé, qui ne tient donc pas compte des étiquettes des données.
Dans le cas ci-dessus, nous avons eu la chance que les données soient linéairement séparables sur la dimension de plus grande variance. Dans le cas contraire, l'ACP aurait pu ne pas nous aider et nous aurions même pu perdre les dimensions selon lesquelles discriminer les données correctement.
A noter également que dans le cas de données de variance assez homogène selon toutes les dimensions, une ACP ne nous apporte rien.
L'ACP peut donc être inutile voire contreproductive dans un objectif de classification.
Ci-après 2 contre-exemples :
1.3.1 ACP et discrimination selon dimension de moindre variance
End of explanation
"""
X11 = np.random.normal(0, 10, 500)
X21 = abs(np.random.normal(0, 10, 500))
X12 = np.random.normal(0, 10, 500)
X22 = -abs(np.random.normal(0, 10, 500))
X = np.array([np.concatenate((X11,X12)),
np.concatenate((X21,X22))]).T
y = np.concatenate((np.zeros(500), np.ones(500)))
y = y.astype(int)
plt.figure(figsize=(6,6))
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.show()
pca = decomposition.PCA(n_components=2)
pca.fit(X)
X_PC2 = pca.transform(X)
graph_acp2(X_PC2, y)
"""
Explanation: Here, a PCA keeping only the dimension of greatest variance would therefore have cost us any possibility of discrimination.
1.3.2 PCA on data with homogeneous variance
End of explanation
"""
lda = LinearDiscriminantAnalysis(n_components=2)
X_iris_LDA = lda.fit(X_iris, y_iris).transform(X_iris)
print("Rappel des composantes identifiées par le PCA :")
graph_acp2(X_iris_PCA, y_iris)
print("Composantes identifiées par le LDA (on remarque une meilleure séparation des classes sur la 1ère composante) :")
graph_acp2(X_iris_LDA, y_iris)
"""
Explanation: Here a PCA is useless because the variance of the data is homogeneous across the original dimensions (note the diagonal orientation between the 2 classes).
2. Other methods
Other methods, which we will detail less for now, can be more relevant than PCA in certain contexts:
2.1 Linear or quadratic discriminant analysis (LDA/QDA)
Rather than maximizing the variance along dimensions of the data, here we seek to maximize the between-class variance relative to the within-class variance. This method therefore transforms the original space into one better suited than PCA for classification purposes.
End of explanation
"""
df = pd.DataFrame.from_dict({'loves_everything': [9,9,9,9,9,9,0],
'big_guns': [1,2,1,8,9,8,9],
'testosterone guy': [0,0,1,9,9,9,7],
'girlygirl': [9,0,8,1,0,0,7],
'romance_addict': [9,8,0,0,0,1,0],
'machoman': [0,1,0,8,7,9,8],
'loves_flowers': [7,8,0,0,0,0,8],
'easily_pleased': [0,8,8,0,7,9,7],
'chuck_norris_fan': [0,2,0,9,0,9,8],
'mylittleponey98': [7,0,7,0,1,0,8],
'allmoviesrock': [7,8,0,0,7,8,7],
'more_guns_please': [0,2,0,9,8,0,7],
'yeah_guns666': [1,0,3,0,9,9,0]},
).transpose()
df.index.name = "Users"
df.columns = ['Charming prince', 'First date', 'Lovely love', 'Guns are cool', 'Ultra badass 4', 'My fist in your face', 'Guns & roses']
df.columns.name = "Movies"
df
nmf = decomposition.NMF(n_components=2,
random_state=1,
alpha=.1,
l1_ratio=.5).fit(df)
profiles = pd.DataFrame(nmf.transform(df),
index=df.index,
columns=['action lover', 'romcom lover'])
profiles.columns.name = 'Categories'
profiles
profiles = profiles > 1
profiles
movie_cat = pd.DataFrame(nmf.components_,
columns=df.columns,
index=['action lover', 'romcom lover'])
movie_cat.index.name = 'Categories'
movie_cat = movie_cat > 1
movie_cat
"""
Explanation: 2.2 Matrix factorization
Here we try to approximate a matrix V of dimensions m*n, which is large, often sparse, and non-negative (e.g. the ratings of all customers on all products of an e-commerce site), by the product of a matrix W of dimensions m*k (e.g. the profiles of all customers) with a matrix H of dimensions k*n (e.g. the average ratings for those profiles). We thus aim for (m*k + k*n) << m*n, shrinking the representation space of our data while preserving as much information as possible.
$$
\begin{bmatrix}
v_{11} & v_{12} & v_{13} & \dots & v_{1n} \\
v_{21} & v_{22} & v_{23} & \dots & v_{2n} \\
v_{31} & v_{32} & v_{33} & \dots & v_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
v_{m1} & v_{m2} & v_{m3} & \dots & v_{mn}
\end{bmatrix}
=
\begin{bmatrix}
w_{11} & \dots & w_{1k} \\
w_{21} & \dots & w_{2k} \\
w_{31} & \dots & w_{3k} \\
\vdots & \ddots & \vdots \\
w_{m1} & \dots & w_{mk}
\end{bmatrix}
*
\begin{bmatrix}
h_{11} & h_{12} & h_{13} & \dots & h_{1n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
h_{k1} & h_{k2} & h_{k3} & \dots & h_{kn}
\end{bmatrix}
$$
Without going into details, these algorithms find W and H that minimize $||V - W*H||_F^2$. Note that the solution minimizing this norm may not be unique. Moreover, these algorithms are numerous, often of high computational complexity, and require regularization.
As an example, let us build a matrix of movie ratings by users, which we will try to factor into a matrix of profiles (each user's tastes) and another of movie categories (the profiles these movies are likely to appeal to). Ratings run from 1 to 9, with 0 denoting the absence of a rating. This example is deliberately simplistic; it does not scale and cannot be generalized trivially.
End of explanation
"""
|
igabr/Metis_Projects_Chicago_2017 | 05-project-kojack/.ipynb_checkpoints/Final_Notebook-checkpoint.ipynb | mit | df = unpickle_object("FINAL_DATAFRAME_PROJ_5.pkl")
df.head()
def linear_extrapolation(df, window):
pred_lst = []
true_lst = []
cnt = 0
all_rows = df.shape[0]
while cnt < window:
start = df.iloc[cnt:all_rows-window+cnt, :].index[0].date()
end = df.iloc[cnt:all_rows-window+cnt, :].index[-1].date()
predicting = df.iloc[all_rows-window+cnt, :].name.date()
print("---- Running model from {} to {} and predicting on {} ----".format(start,end,predicting))
training_df = df.iloc[cnt:all_rows-window+cnt, :]
testing_df = df.iloc[all_rows-window+cnt, :]
true_val = testing_df[-1]
first_row_value = training_df.iloc[0, :]['mkt_price']
first_row_date = training_df.iloc[0, :].name
last_row_value = training_df.iloc[-1, :]['mkt_price']
last_row_date = training_df.iloc[-1, :].name
alpha = (last_row_value-first_row_value)/90
prediction = last_row_value + alpha
pred_lst.append(prediction)
true_lst.append(true_val)
cnt += 1
return pred_lst, true_lst
pred_lst, true_lst = linear_extrapolation(df, 30)
r2_score(true_lst, pred_lst)
"""
Explanation: Notebook Overview
In this notebook, I will construct:
- A naive model of bitcoin price prediction
A nested time series model.
What do I mean by a nested time series model?
I will illustrate with a simple example.
Let's say that I wish to predict the mkt_price on 2016-10-30. I could fit a Linear Regression on all the features from 2016-10-26 to 2016-10-29. However, in order to predict the mkt_price on 2016-10-30 I need to have values for the features on 2016-10-30. This presents a problem, as all my features are time series! That is, I cannot simply plug in a value for all the features, because I don't know what their values would be on this future date!
One possible remedy for this is to simply use the values of all the features on 2016-10-29. In fact, it is well known that the best predictor of a variable tomorrow is its current state today. However, I wish to be more rigorous.
Instead of simply plugging in t-1 values for the features at time t, I construct a time series model for each feature in order to predict its value at time t based on the entire history of data that I have for the features!
These predicted values are then passed as inputs to our linear regression models!
Thus, if I have N features, I am creating N-Time Series models in order to do a single prediction with Linear Regression for the mkt_price variable.
Naive Baseline Model
I will construct a naive baseline model that will most likely outperform any other model I build below.
The model will work as follows:
When predicting the price on Day 91, I will take the average price change between Day 90 and Day 0. Let's call this average price change alpha.
I will then take the price of Day 90 and add alpha to it. This will serve as the 'predicted' price for day 91.
End of explanation
"""
df = unpickle_object("FINAL_DATAFRAME_PROJ_5.pkl")
df.head()
df.corr()
plot_corr_matrix(df)
beta_values, pred, true = master(df, 30)
r2_score(true, pred)#blows our Prophet TS only model away!
"""
Explanation: Naïve Model Caveats
We can see above that we can use this extremely basic model to obtain an $R^2$ of 0.86. In fact, this should be the baseline model score that we need to beat!
Let me mention some caveats to this result:
I only have 4 months of Bitcoin data. It should be obvious to the reader that such a naive model is NOT the appropriate way to forecast bitcoin price in general. For if it were this simple, we would all be millionaires.
Since I have 120 days' worth of data, I am choosing to subset my data into 90-day periods; as such, I will produce 30 predictions. The variability of bitcoin prices over these 30 days will significantly impact the $R^2$ score. Again, more data is needed.
While bitcoin data itself is not hard to come by, twitter data is! It is the twitter data that is limiting a deeper analysis. I hope that this notebook serves as a starting point for further investigation in the relationship between tweets and bitcoin price fluctuations.
Lastly, I made this notebook in Sept. 2017. The data for this project spans Oct 2016 - Feb 2017. Since that timeframe, bitcoin grew to unprecedented highs of \$4k/coin. Furthermore, media sound bites from CEOs such as Jamie Dimon of JPMorgan have sent bitcoin prices tumbling by as much as \$1k/coin. For me, this is what truly lies at the crux of the difficulty of cryptocurrency forecasting. I searched at great length for a free, searchable news API; however, I could not find one. I think a great next step for this project would be to incorporate sentiment of news headlines concerning bitcoin!
Furthermore, within the aforementioned timeframe the overall bitcoin trend was upward. That is, there was not much volatility in the price; as such, it is expected that the Naïve Model would outperform the nested time series model. The next step would, again, be to collect more data and re-run all the models.
Nested Time Series Model
End of explanation
"""
plt.plot(pred)
plt.plot(true)
plt.legend(["Prediction", 'Actual'], loc='upper left')
plt.xlabel("Prediction #")
plt.ylabel("Price")
plt.title("Nested TS - Price Prediction");
fig, ax = plt.subplots()
ax.scatter(true, pred, edgecolors=(0, 0, 0))
ax.plot([min(true), max(true)], [min(true), max(true)], 'k--', lw=3)
ax.set_xlabel('Actual')
ax.set_ylabel('Predicted')
plotting_dict_1 = {"eth_price": [], "pos_sent": [], "neg_sent": [], "unique_addr": [], "gold_price": [], "tot_num_trans": [], "mempool_trans":[], "hash_rate": [], "avg_trans_per_block":[]}
for index, sub_list in enumerate(beta_values):
for tup in sub_list:
plotting_dict_1[tup[0]].append(tup[1])
plot_key(plotting_dict_1, "pos_sent")# here we say the effect of positive sentiment through time!
plt.title("Positive Sentiment Effect on BTC Price")
plt.ylabel("Beta Value")
plt.xlabel("Model #")
plt.tight_layout()
plot_key(plotting_dict_1, "gold_price")
plt.title("Gold Price Effect on BTC Price")
plt.ylabel("Beta Value")
plt.xlabel("Model #")
plt.tight_layout()
plot_key(plotting_dict_1, "avg_trans_per_block")
plt.title("Avg. Trans per Block Effect on BTC Price")
plt.ylabel("Beta Value")
plt.xlabel("Model #")
plt.tight_layout()
"""
Explanation: Nested TS VS. FB Prophet TS
We see from the above that our model has an $R^2$ of 0.75! This greatly outperforms our baseline model of just using Facebook Prophet to forecast the price of bitcoin! The RMSE is 1.40.
This is quite impressive given that we only have 3 months of training data and are testing on one month!
The output above also shows regression output from statsmodels!
The following features were significant in all 30 models:
Gold Price
Ethereum Price
Positive Sentiment (Yay!)
Average Transactions Per Block
It is important, yet again, to note that this data does NOT take into account the wild fluctuations in price that bitcoin later experienced. We would need more data to affirm the significance of the above variables.
End of explanation
"""
df_pct = df.copy(deep=True)
df_pct = df_pct.pct_change()
df_pct.rename(columns={"mkt_price": "percent_change"}, inplace=True)
df_pct = df_pct.iloc[1:, :] #first row is all NaN's
df_pct.head()
beta_values_p, pred_p, true_p = master(df_pct, 30)
r2_score(true_p, pred_p) # this is expected due to the range of values on the y-axis!
#very good!
plt.plot(pred_p)
plt.plot(true_p)
plt.legend(["Prediction", 'Actual'], loc='upper left')
plt.xlabel("Prediction #")
plt.ylabel("Price")
plt.title("Nested TS - % Change Prediction");
"""
Explanation: Percent change model!
I will now run the same nested TS model as above; however, I will now make my 'target' variable the percent change in bitcoin price. In order to make this a log-log model, I will use the percentage change of all features as inputs into the TS models and thus the linear regression!
Since percent change will 'shift' our dataframe by one row, I omit the first row (which is all NaN's).
Thus, if we were to predict a percent change of $0.008010$ on 2016-10-28, the predicted price would be the price on 2016-10-27 multiplied by $(1 + 0.008010)$.
End of explanation
"""
fig, ax = plt.subplots()
ax.scatter(true_p, pred_p, edgecolors=(0, 0, 0))
ax.plot([min(true), max(true)], [min(true), max(true)], 'k--', lw=3)
ax.set_xlabel('Actual')
ax.set_ylabel('Predicted');
df.set_index('date', inplace=True)
prices_to_be_multiplied = df.loc[pd.date_range(start="2017-01-23", end="2017-02-21"), "mkt_price"]
forecast_price_lst = []
for index, price in enumerate(prices_to_be_multiplied):
predicted_percent_change = 1+float(pred_p[index])
forecasted_price = (predicted_percent_change)*price
forecast_price_lst.append(forecasted_price)
ground_truth_prices = df.loc[pd.date_range(start="2017-01-24", end="2017-02-22"), "mkt_price"]
ground_truth_prices = list(ground_truth_prices)
r2_score(ground_truth_prices, forecast_price_lst)
"""
Explanation: From the above, it seems that our model is not tuned well enough to anticipate the large dip shown above. This is due to a lack of training data. However, while our model might not be the best in predicting percent change how does it fair when we turn the percent change into prices.
End of explanation
"""
plt.plot(forecast_price_lst)
plt.plot(ground_truth_prices)
plt.legend(["Prediction", 'Actual'], loc='upper left')
plt.xlabel("Prediction #")
plt.ylabel("Price")
plt.title("Nested TS - % Change Prediction");
"""
Explanation: We have an $R^2$ of 0.87!
This surpasses the baseline model and the nested TS model!
The caveats of the baseline model also apply here; however, it seems that the additional variables have helped us slightly improve the $R^2$.
End of explanation
"""
|
blehman/Data-Science-45min-Intros | networks-201/network_analysis.ipynb | unlicense | #relatively fast networks package (pip install python-igraph) that I used for these homeworks
import igraph
# slow-and-steady networks package. fewer bugs, easier drawing
import networkx as nx
# plots!
import matplotlib.pyplot as plt
from matplotlib import style
%matplotlib inline
# other packages
from __future__ import division
from random import random, shuffle
from numpy import percentile
from operator import itemgetter
from tabulate import tabulate
from collections import Counter
"""
Explanation: Network Analysis--Using Null Models
Adapted from Professor Clauset's lectures and homeworks for Network Analysis and Modeling // Course page: http://tuvalu.santafe.edu/~aaronc/courses/5352/
End of explanation
"""
real_graph = nx.karate_club_graph()
positions = nx.spring_layout(real_graph)
nx.draw(real_graph, node_color = 'blue', pos = positions)
"""
Explanation: Graphs!
A good example of a real-world graph (because it happens to be one). For now it's just important to know that this is a graph of social interactions between 34 individuals involved in the same karate club. Drawing it less because it's informative, and more because plotting is fun.
End of explanation
"""
# Use the same number of nodes for each example
num_nodes = 500
# list of the sizes of the largest components
big_comp = []
# vector of edge probabilities
p_values = [(1 - x * .0001) for x in range(9850, 10000)]
# try it a few times to get a smoother curve
iterations = 10
for p in p_values:
    size_comps = []
    for h in range(iterations):
        edge_list = []
        for i in range(num_nodes):
            for j in range(i, num_nodes):
                if (random() < p):
                    edge_list.append((i,j))
        G = igraph.Graph(directed = False)
        G.add_vertices(num_nodes)
        G.add_edges(edge_list)
        comps = [len(x) for x in G.clusters()]
        size_comps.append(comps)
    big_comp.append((sum([max(x) for x in size_comps])/len(size_comps)/float(num_nodes)))
plt.plot([x*(num_nodes-1) for x in p_values], big_comp, '.')
plt.title("Phase transitions in connectedness")
plt.ylabel("Fraction of nodes in the largest component")
plt.xlabel("Average degree (k = p(n-1)), {} < p < {}".format(p_values[99],p_values[0]))
"""
Explanation: Now. What's the difference between that (^) drawing of nodes and edges and a completely random assembly of dots and lines? How can we quantify the difference between a social network, which we think probably has important structure, and a completely random network, whose structure contains very little useful information? Which aspects of a network can be explained by simple statistics like average degree, the number of nodes, or the degree distribution? Which characteristics of a network depend on a structure or generative process that could reveal an underlying truth about the way the network came about?
The question to ask is: how likely is a specific network characteristic to have been generated by a random process?
Random Graph Models
The Erdös-Rényi Random Graph
The simplest random graph you can think of. For a graph $G$ with $n$ nodes, each pair of nodes gets an (undirected) edge with probability $p$. There are ${n \choose 2}$ pairs of nodes, so ${n \choose 2}$ possible edges. Then the average degree of a node in this random graph is $(n-1)p$, where $(n-1)$ is the number of possible connections for a node $i$ and $p$ is the probability of that connection existing. Call the expected average degree $\bar k = (n-1)p$.
Giant Components
One property that we see all the time in social graphs (and many other graphs) is the emergence of a "giant" connected component. The Erdös-Rényi graph also develops a giant component for certain parameter ranges. In fact, when the average degree is more than 1 we see a giant component emerging, and when it is more than 3 that giant component contains all or almost all of the graph. That means that for a random graph with $p > \frac{1}{n-1}$ we will always start to see a giant component.
To demonstrate why this is true, let $u$ be the fraction of vertices not in the giant component; $u$ is then also the probability that a randomly chosen vertex $i$ does not belong to the giant component of the graph. For $i$ to not be part of the giant component, for every other vertex $j$ (there are $n-1$ of them), either $i$ is not connected to $j$ (with probability $1-p$), or $i$ is connected to $j$ but $j$ is itself not in the giant component (with probability $pu$). Then:
$$ u = ((1-p) + (pu))^{n-1} $$
We can use $ p = \frac{\bar k}{n-1} $ to rewrite the expression as:
$$ u = (1 - \frac{\bar k(1-u)}{n-1})^{n-1} $$
And then taking the limit for large $n$ and using the fact that $\lim_{n\rightarrow\infty}(1-\frac{x}{n})^n = e^{-x}$:
$$ u = e^{-\bar k(1-u)} $$
Now if $u$ is the fraction of vertices not in the giant component, call $S = 1-u$ the fraction of vertices in the giant component. Substituting gives:
$$ S = 1 - e^{-\bar k S} $$
There is no closed-form solution to this equation, but below we can show a simulation of random graphs and the size of the largest connected component in each one.
End of explanation
"""
# vector of edge probabilities
p_values_clustering = [x*.01 for x in xrange(0,100)]
# try it a few times to get a smoother curve
iterations = 1
# store the clustering coefficient
clustering = []
for p in p_values_clustering:
size_comps = []
for h in xrange(0, iterations):
edge_list = []
for i in xrange(0,num_nodes):
for j in xrange(i,num_nodes):
if (random() < p):
edge_list.append((i,j))
G = igraph.Graph(directed = False)
G.add_vertices(num_nodes)
G.add_edges(edge_list)
clustering.append((p, G.transitivity_undirected(mode="zero")))
plt.plot([x[0]*(num_nodes-1) for x in clustering], [x[1] for x in clustering], '.')
plt.title("Clustering coeff vs avg degree in a random graph")
plt.ylabel("Clustering coefficient")
plt.xlabel("Average degree (k = (n-1)p), 0 < p < 1")
"""
Explanation: Clustering coefficient
The clustering coefficient is a measure of how many triangles (completely connected triples) there are in a graph. You can think of it as the probability that, if Alice knows Bob and Charlie, Bob also knows Charlie. The clustering coefficient of a graph is equal to $$ C = \frac{\text{number of closed triples}}{\text{number of connected triples}} $$
Finding the expected value of $C$ for a random graph is simple. For any 3 vertices, the probability that they are all connected is $p^3$, and the probability that a particular pair of the possible edges among them exists is $p^2$. The expected numbers of closed triples (triangles) and connected triples are then proportional to ${n \choose 3}p^3$ and ${n \choose 3}p^2$ respectively, so the expected value of $C$ is $\frac{p^3}{p^2} = p$. Notice that the values of $p$ at which the graph becomes connected are very small, so even a fully connected random graph has a very low clustering coefficient. In a randomly generated sparse graph (one where only a small fraction of the ${n \choose 2}$ possible edges exist), the clustering coefficient $C$ is very low.
End of explanation
"""
# list of the average (over X iterations) diameters of the largest components
diam = []
# the degree distribution of the network for each average degree
degrees = {}
# vector of edge probabilities
p_values = [(1-x*.0001) for x in xrange(9850,10000)]
# try it a few times to get a smoother curve
iterations = 10
for p in p_values:
size_comps = []
diameters = []
for h in xrange(0, iterations):
edge_list = []
for i in xrange(0,num_nodes):
for j in xrange(i,num_nodes):
if (random() < p):
edge_list.append((i,j))
G = igraph.Graph(directed = False)
G.add_vertices(num_nodes)
G.add_edges(edge_list)
diameters.append(G.diameter())
degrees[p*(num_nodes-1)] = G.degree()
diam.append(sum(diameters)/len(diameters))
fig, ax1 = plt.subplots(figsize = (8,6))
plt.title("Graph metrics vs avg degree in a random graph", size = 16)
ax1.plot([x*(num_nodes-1) for x in p_values], big_comp, 'o', color = "red", markersize=4)
ax1.set_xlabel('Average degree (k = (n-1)p), 0 < p < 1', size = 16)
ax1.set_ylim(0,1.01)
ax1.set_xlim(0,6)
# Make the y-axis label and tick labels match the line color.
ax1.set_ylabel('Fraction of nodes in giant component', color='red', size = 16)
ax1.grid(True)
for tl in ax1.get_yticklabels():
tl.set_color('red')
tl.set_size(16)
ax2 = ax1.twinx()
ax2.set_xlim(0,6)
ax2.plot([x*(num_nodes-1) for x in p_values], diam, 's', color = "blue", markersize=4)
ax2.set_ylabel('Diameter of the giant component', color='blue', size = 16)
for tl in ax2.get_yticklabels():
tl.set_color('blue')
tl.set_size(16)
avg_degree_near_5 = min(degrees.keys(), key = lambda x: abs(x-5))
xy = Counter(degrees[avg_degree_near_5]).items()
plt.bar([x[0] for x in xy], [x[1] for x in xy], edgecolor = "none", color = "blue")
plt.ylabel("# of nodes with degree X", size = 16)
plt.xlabel("Degree", size = 16)
plt.title("Degree distribution of the random graph", size = 16)
"""
Explanation: Small diameter graphs
So we know that the giant component is very likely, even for sparse graphs, and also that the clustering coefficient is very low, even for relatively dense graphs. This means that the graph is almost completely connected, and that it is, at least locally, pretty similar to a tree graph (acyclic).
Consider a graph with mean degree $\bar k$, and consider the number of vertices reachable within $l$ steps of some vertex $i$. Because the clustering coefficient is very low (the graph is locally tree-like), it is likely that any neighbor of $i$ has a completely new set of neighbors ($\bar k$ neighbors, less the one leading back toward $i$, so $\bar k-1$ new neighbors). Each step therefore reaches roughly $\bar k-1$ new vertices, and the number of vertices reachable in $l$ steps from $i$ is about $(\bar k-1)^l$.
The diameter of a graph is the maximum number of steps $l$ one would have to take to reach any vertex from any other vertex, so we can estimate it by setting the reachable count equal to $n$:
$$ (\bar k-1)^l = n $$
$$ l = \frac{1}{\log(\bar k-1)}\log(n) \approx O(\log(n))$$
Thus the diameter of the graph grows as $O(\log(n))$, showing "small world" characteristics.
End of explanation
"""
print("The number of nodes in the graph (all are connected): {}".format(len(real_graph.nodes())))
print("The number of edges in the graph: {}".format(len(real_graph.edges())))
print("The average degree: {}".format(sum(nx.degree(real_graph).values())/len(real_graph.nodes())))
print("The clustering coefficient: {}".format(nx.average_clustering(real_graph)))
print("The clustering coefficient that a random graph with the same degree would predict (k/(n-1)): {}"
.format(sum(nx.degree(real_graph).values())/len(real_graph.nodes())/(len(real_graph.nodes())-1)))
print("The diameter of the graph: {}".format(nx.diameter(real_graph)))
"""
Explanation: A comparison with a real social graph:
End of explanation
"""
A = []
for v in real_graph.nodes():
for x in range(0, real_graph.degree(v)):
A.append(v)
shuffle(A)
# make the edge list
_E = [(A[2*x], A[2*x+1]) for x in range(0,int(len(A)/2))]
E = set([x for x in _E if x[0]!=x[1]])
# add the edges to a new graph with the same node list
C = real_graph.copy()
C.remove_edges_from(real_graph.edges())
C.add_edges_from(E)
nx.draw(C, node_color = 'blue', pos = positions)
print("The number of nodes in the graph (all are connected): {}".format(len(C.nodes())))
print("The number of edges in the graph: {}".format(len(C.edges())))
print("The average degree: {}".format(sum(nx.degree(C).values())/len(C.nodes())))
print("The clustering coefficient: {}".format(nx.average_clustering(C)))
print("The clustering coefficient that a random graph with the same degree would predict (k/(n-1)): {}"
.format(sum(nx.degree(real_graph).values())/len(C.nodes())/(len(C.nodes())-1)))
print("The diameter of the graph: {}".format(nx.diameter(C)))
"""
Explanation: The Configuration Model
Another random graph model: the configuration model. Instead of generating our own degree sequence, we use a specified degree sequence (say, use the degree sequence of a social graph that we have) and change how the edges are connected. This allows us to ask the question: "how much of this characteristic is completely explained by degree?"
This is an example of using the configuration model to create a null model of our "real graph." Note that the algorithm that I am using works well for creating configuration models for large graphs, but produces more error on this smaller graph.
End of explanation
"""
# get the graph
florentine_families = igraph.Nexus.get("padgett")["PADGB"]
"""
Explanation: Asking questions using a null model
A famous example of centrality measurement on a social network is the Florentine Families graph. Padgett's research on this graph claims that the Medici family's rise to power can be explained by their high centrality in the graph of business interactions between families in Italy during that time. We will use a null model (configuration model) of the graph, rearranging how edges are placed without altering any node's degree, to discover how much of the Medici's power is determined by their degree (rather than by other structural components of the graph).
End of explanation
"""
# degree centrality
d = florentine_families.degree()
d_rank = [(x, florentine_families.vs[x]['name'], d[x]) for x in range(0,len(florentine_families.vs()))]
d_rank.sort(key = itemgetter(2), reverse = True)
# harmonic centrality
distances = florentine_families.shortest_paths_dijkstra()
h = [sum([1/x for x in dist if x != 0])/(len(distances)-1) for dist in distances]
h_rank = [(x, florentine_families.vs[x]['name'], h[x]) for x in range(0,len(florentine_families.vs()))]
h_rank.sort(key = itemgetter(2), reverse = True)
# make the table
d_table = []
d_table.append(["Rank (by degree)", "degree", "Rank (h centrality)", "harmonic"])
for n in xrange(0,len(florentine_families.vs())):
table_row = []
table_row.extend([d_rank[n][1], str(d_rank[n][2])[0:5]])
table_row.extend([h_rank[n][1], str(h_rank[n][2])[0:5]])
#table_row.extend([e_rank[n][1], str(e_rank[n][2])[0:5]])
#table_row.extend([b_rank[n][1], str(b_rank[n][2])[0:5]])
d_table.append(table_row)
print tabulate(d_table)
"""
Explanation: First, let's show the relative rankings of the families with respect to vertex degree in the network and with respect to our chosen centrality measure, harmonic centrality. I won't go into various centrality measures here, beyond to say that harmonic centrality is formulated:
$$ c_i = \frac{1}{n-1}\sum_{j \neq i}\frac{1}{d_{ij}} $$
where $d_{ij}$ is the geodesic distance between vertices $i$ and $j$. Basically, harmonic centrality is a measure of how close a vertex is to every other vertex.
End of explanation
"""
config_model_centrality = [[] for x in florentine_families.vs()]
config_model_means = []
hc_differences = [[] for x in range(0,16)]
for i in xrange(0,1000):
# build a random graph based on the configuration model
C = florentine_families.copy()
# graph with the same edge list as G
C.delete_edges(None)
# print C.summary()
# Add random edges
# vertex list A
A = []
for v in florentine_families.vs().indices:
for x in range(0,florentine_families.degree(v)):
A.append(v)
shuffle(A)
# print A
# make the edge list
_E = [(A[2*x], A[2*x+1]) for x in range(0,int(len(A)/2))]
E = set([x for x in _E if x[0]!=x[1]])
# add the edges to C
# print E
C.add_edges(E)
# rank the vertices by harmonic centrality
C_distances = C.shortest_paths_dijkstra()
C_h = [sum([1/x for x in dist if x != 0])/(len(C_distances)-1) for dist in C_distances]
del C
for vertex in range(0,16):
hc_differences[vertex].append(h[vertex] - C_h[vertex])
plt.plot([percentile(diff, 50) for diff in hc_differences], '--')
plt.plot([percentile(diff, 25) for diff in hc_differences], 'r--')
plt.plot([percentile(diff, 75) for diff in hc_differences], 'g--')
plt.xticks(range(0,16))
plt.gca().set_xticklabels(florentine_families.vs()['name'])
plt.xticks(rotation = 90)
plt.gca().grid(True)
plt.ylabel("(centrality) - (centrality on the null model)")
plt.title("How much of harmonic centrality is explained by degree?")
"""
Explanation: Now the fun (?) part. Create a bunch of different random configuration models based on the Florentine families graph, then measure the harmonic centrality on those graphs. The harmonic centrality of a node in the null model will depend only on its degree (since the rest of the graph structure is now random).
End of explanation
"""
|
tolaoniyangi/dmc | notebooks/week-6/01-training a RNN model in Keras.ipynb | apache-2.0 | import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.callbacks import ModelCheckpoint
from keras.utils import np_utils
from time import gmtime, strftime
import os
import re
import pickle
import random
import sys
"""
Explanation: Lab 6.1 - Keras for RNN
In this lab we will use the Keras deep learning library to construct a simple recurrent neural network (RNN) that can learn linguistic structure from a piece of text, and use that knowledge to generate new text passages. To review general RNN architecture, specific types of RNN networks such as the LSTM networks we'll be using here, and other concepts behind this type of machine learning, you should consult the following resources:
http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
http://ml4a.github.io/guides/recurrent_neural_networks/
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
http://karpathy.github.io/2015/05/21/rnn-effectiveness/
This code is an adaptation of these two examples:
http://machinelearningmastery.com/text-generation-lstm-recurrent-neural-networks-python-keras/
https://github.com/fchollet/keras/blob/master/examples/lstm_text_generation.py
You can consult the original sites for more information and documentation.
Let's start by importing some of the libraries we'll be using in this lab:
End of explanation
"""
# load ascii text from file
filename = "data/obama.txt"
raw_text = open(filename).read()
# get rid of any characters other than letters, numbers,
# and a few special characters
raw_text = re.sub('[^\nA-Za-z0-9 ,.:;?!-]+', '', raw_text)
# convert all text to lowercase
raw_text = raw_text.lower()
n_chars = len(raw_text)
print "length of text:", n_chars
print "text preview:", raw_text[:500]
"""
Explanation: The first thing we need to do is generate our training data set. In this case we will use a recent article written by Barack Obama for The Economist newspaper. Make sure you have the obama.txt file in the /data folder within the /week-6 folder in your repository.
End of explanation
"""
# extract all unique characters in the text
chars = sorted(list(set(raw_text)))
n_vocab = len(chars)
print "number of unique characters found:", n_vocab
# create mapping of characters to integers and back
char_to_int = dict((c, i) for i, c in enumerate(chars))
int_to_char = dict((i, c) for i, c in enumerate(chars))
# test our mapping
print 'a', "- maps to ->", char_to_int["a"]
print 25, "- maps to ->", int_to_char[25]
"""
Explanation: Next, we use python's set() function to generate a list of all unique characters in the text. This will form our 'vocabulary' of characters, which is similar to the categories found in typical ML classification problems.
Since neural networks work with numerical data, we also need to create a mapping between each character and a unique integer value. To do this we create two dictionaries: one which has characters as keys and the associated integers as the value, and one which has integers as keys and the associated characters as the value. These dictionaries will allow us to do translation both ways.
End of explanation
"""
# prepare the dataset of input to output pairs encoded as integers
seq_length = 100
inputs = []
outputs = []
for i in range(0, n_chars - seq_length, 1):
inputs.append(raw_text[i:i + seq_length])
outputs.append(raw_text[i + seq_length])
n_sequences = len(inputs)
print "Total sequences: ", n_sequences
"""
Explanation: Now we need to define the training data for our network. With RNN's, the training data usually takes the shape of a three-dimensional matrix, with the size of each dimension representing:
[# of training sequences, # of training samples per sequence, # of features per sample]
The training sequences are the sets of data subjected to the RNN at each training step. As with all neural networks, these training sequences are presented to the network in small batches during training.
Each training sequence is composed of some number of training samples. The number of samples in each sequence dictates how far back in the data stream the algorithm will learn, and sets the depth of the RNN layer.
Each training sample within a sequence is composed of some number of features. This is the data that the RNN layer is learning from at each time step. In our example, the training samples and targets will use one-hot encoding, so will have a feature for each possible character, with the actual character represented by 1, and all others by 0.
To prepare the data, we first set the length of training sequences we want to use. In this case we will set the sequence length to 100, meaning the RNN layer will be able to predict future characters based on the 100 characters that came before.
We will then slide this 100 character 'window' over the entire text to create input and output arrays. Each entry in the input array contains 100 characters from the text, and each entry in the output array contains the single character that came after.
End of explanation
"""
indeces = range(len(inputs))
random.shuffle(indeces)
inputs = [inputs[x] for x in indeces]
outputs = [outputs[x] for x in indeces]
"""
Explanation: Now let's shuffle both the input and output data so that we can later have Keras split it automatically into training and test sets. To make sure the two lists are shuffled the same way (maintaining correspondence between inputs and outputs), we create a separate shuffled list of indices, and use these indices to reorder both lists.
End of explanation
"""
print inputs[0], "-->", outputs[0]
"""
Explanation: Let's visualize one of these sequences to make sure we are getting what we expect:
End of explanation
"""
# create two empty numpy array with the proper dimensions
X = np.zeros((n_sequences, seq_length, n_vocab), dtype=np.bool)
y = np.zeros((n_sequences, n_vocab), dtype=np.bool)
# iterate over the data and build up the X and y data sets
# by setting the appropriate indices to 1 in each one-hot vector
for i, example in enumerate(inputs):
for t, char in enumerate(example):
X[i, t, char_to_int[char]] = 1
y[i, char_to_int[outputs[i]]] = 1
print 'X dims -->', X.shape
print 'y dims -->', y.shape
"""
Explanation: Next we will prepare the actual numpy datasets which will be used to train our network. We first initialize two empty numpy arrays in the proper formatting:
X --> [# of training sequences, # of training samples, # of features]
y --> [# of training sequences, # of features]
We then iterate over the arrays we generated in the previous step and fill the numpy arrays with the proper data. Since all character data is formatted using one-hot encoding, we initialize both data sets with zeros. As we iterate over the data, we use the char_to_int dictionary to map each character to its related position integer, and use that position to change the related value in the data set to 1.
End of explanation
"""
# define the LSTM model
model = Sequential()
model.add(LSTM(128, return_sequences=False, input_shape=(X.shape[1], X.shape[2])))
model.add(Dropout(0.50))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
"""
Explanation: Next, we define our RNN model in Keras. This is very similar to how we defined the CNN model, except now we use the LSTM() function to create an LSTM layer with an internal memory of 128 neurons. LSTM is a special type of RNN layer which solves the unstable gradients issue seen in basic RNN. Along with LSTM layers, Keras also supports basic RNN layers and GRU layers, which are similar to LSTM. You can find full documentation for recurrent layers in Keras' documentation
As before, we need to explicitly define the input shape for the first layer. Also, we need to tell Keras whether the LSTM layer should pass its sequence of predictions or its internal memory as the output to the next layer. If you are connecting the LSTM layer to a fully connected layer as we do in this case, you should set the return_sequences parameter to False to have the layer pass the value of its hidden neurons. If you are connecting multiple LSTM layers, you should set the parameter to True in all but the last layer, so that subsequent layers can learn from the sequence of predictions of previous layers.
We will use dropout with a probability of 50% to regularize the network and prevent overfitting on our training data. The output of the network will be a fully connected layer with one neuron for each character in the vocabulary. The softmax function will convert this output to a probability distribution across all characters.
End of explanation
"""
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
"""
Explanation: Next, we define two helper functions: one to select a character based on a probability distribution, and one to generate a sequence of predicted characters based on an input (or 'seed') list of characters.
The sample() function will take in a probability distribution generated by the softmax() function, and select a character based on the 'temperature' input. The temperature (also often called the 'diversity') effects how strictly the probability distribution is sampled.
Lower values (closer to zero) output more confident predictions, but are also more conservative. In our case, if the model has overfit the training data, lower values are likely to give back exactly what is found in the text
Higher values (1 and above) introduce more diversity and randomness into the results. This can lead the model to generate novel information not found in the training data. However, you are also likely to see more errors such as grammatical or spelling mistakes.
End of explanation
"""
def generate(sentence, prediction_length=50, diversity=0.35):
print '----- diversity:', diversity
generated = sentence
sys.stdout.write(generated)
# iterate over number of characters requested
for i in range(prediction_length):
# build up sequence data from current sentence
x = np.zeros((1, X.shape[1], X.shape[2]))
for t, char in enumerate(sentence):
x[0, t, char_to_int[char]] = 1.
# use trained model to return probability distribution
# for next character based on input sequence
preds = model.predict(x, verbose=0)[0]
# use sample() function to sample next character
# based on probability distribution and desired diversity
next_index = sample(preds, diversity)
# convert integer to character
next_char = int_to_char[next_index]
# add new character to generated text
generated += next_char
# delete the first character from the beginning of the sentence,
# and add the new character to the end. This forms the
# input sequence for the next predicted character.
sentence = sentence[1:] + next_char
# print results to screen
sys.stdout.write(next_char)
sys.stdout.flush()
print
"""
Explanation: The generate() function will take in:
input sentance ('seed')
number of characters to generate
and target diversity or temperature
and print the resulting sequence of characters to the screen.
End of explanation
"""
filepath="-basic_LSTM.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=0, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
"""
Explanation: Next, we define a system for Keras to save our model's parameters to a local file after each epoch in which it achieves an improvement in the overall loss. This will allow us to reuse the trained model at a later time without having to retrain it from scratch. This is useful for recovering models in case your computer crashes, or if you want to stop the training early.
End of explanation
"""
epochs = 50
prediction_length = 100
for iteration in range(epochs):
print 'epoch:', iteration + 1, '/', epochs
model.fit(X, y, validation_split=0.2, batch_size=256, nb_epoch=1, callbacks=callbacks_list)
# get random starting point for seed
start_index = random.randint(0, len(raw_text) - seq_length - 1)
# extract seed sequence from raw text
seed = raw_text[start_index: start_index + seq_length]
print '----- generating with seed:', seed
for diversity in [0.5, 1.2]:
generate(seed, prediction_length, diversity)
"""
Explanation: Now we are finally ready to train the model. We want to train the model over 50 epochs, but we also want to output some generated text after each epoch to see how our model is doing.
To do this we create our own loop to iterate over each epoch. Within the loop we first train the model for one epoch. Since all parameters are stored within the model, training one epoch at a time has the same exact effect as training over a longer series of epochs. We also use the model's validation_split parameter to tell Keras to automatically split the data into 80% training data and 20% test data for validation. Remember to always shuffle your data if you will be using validation!
After each epoch is trained, we use the raw_text data to extract a new sequence of 100 characters as the 'seed' for our generated text. Finally, we use our generate() helper function to generate text using two different diversity settings.
Warning: because of their large depth (remember that an RNN trained on a 100-character sequence effectively has 100 layers!), these networks typically take much longer to train than traditional multi-layer ANNs and CNNs. You should expect these models to train overnight on the virtual machine, but you should be able to see enough progress after the first few epochs to know whether it is worth training a model to the end. For more complex RNN models with larger data sets in your own work, you should consider a native installation, along with a dedicated GPU if possible.
End of explanation
"""
pickle_file = '-basic_data.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'X': X,
'y': y,
'int_to_char': int_to_char,
'char_to_int': char_to_int,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print 'Unable to save data to', pickle_file, ':', e
raise
statinfo = os.stat(pickle_file)
print 'Saved data to', pickle_file
print 'Compressed pickle size:', statinfo.st_size
"""
Explanation: That looks pretty good! You can see that the RNN has learned a lot of the linguistic structure of the original writing, including typical word lengths, where to put spaces, and basic punctuation with commas and periods. Many words are still misspelled but look almost reasonable, and it is pretty amazing that it is able to learn this much in only 50 epochs of training.
You can see that the loss is still going down after 50 epochs, so the model can definitely benefit from longer training. If you're curious you can try to train for more epochs, but as the error decreases be careful to monitor the output to make sure that the model is not overfitting. As with other neural network models, you can monitor the difference between training and validation loss to see if overfitting might be occurring. In this case, since we're using the model to generate new information, we can also get a sense of overfitting from the material it generates.
A good indication of overfitting is if the model outputs exactly what is in the original text when given a seed from the text, but gibberish when given a seed that is not in the original text. Remember we don't want the model to learn to reproduce the original text exactly, but to learn its style so that it can generate new text. As with other models, regularization methods such as dropout and limiting model complexity can be used to avoid the problem of overfitting.
Finally, let's save our training data and character-to-integer mapping dictionaries to an external file so we can reuse them with the model at a later time.
End of explanation
"""
|
geilerloui/deep-learning | tv-script-generation/dlnd_tv_script_generation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
    word_counts = Counter(text)
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    # avoid shadowing the `text` argument inside the comprehensions
    int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
dictionary = {'.': '||Period||',
',': '||Comma||',
'\"': '||QuotationMark||',
';': '||Semicolon||',
'!': '||ExclamationMark||',
'?': '||QuestionMark||',
'(': '||LeftParentheses||',
')': '||RightParentheses||',
'--': '||Dash||',
'\n': '||Return||'
}
return dictionary
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, shape=[None, None], name="input")
targets = tf.placeholder(tf.int32, shape=[None, None], name="targets")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
return (inputs, targets, learning_rate)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
    :param batch_size: Number of sequences to feed to the network in one training pass.
                       Typically this should be set as high as you can go without running out of memory
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
    rnn_layers = 3
    # use a distinct LSTM cell per layer: reusing one cell object across layers
    # shares its weights and fails in newer TF 1.x releases
    cells = [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(rnn_layers)]
    Cell = tf.contrib.rnn.MultiRNNCell(cells)
    # Getting an initial state of all zeros; zero_state expects a float dtype
    InitialState = tf.identity(Cell.zero_state(batch_size, tf.float32), name="initial_state")
return (Cell, InitialState)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
    # note: the `inputs` argument of tf.nn.dynamic_rnn is what feeds the LSTM
    # block, i.e. the output of the embedding layer
    Outputs, FinalState = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final_state = tf.identity(FinalState, name='final_state')
    return (Outputs, final_state)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
inputs = get_embed(input_data, vocab_size, embed_dim)
outputs, FinalState = build_rnn(cell, inputs)
# default: ReLU
Logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
return Logits, FinalState
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
    # each batch consumes batch_size * seq_length words
    n_batches = len(int_text) // (batch_size * seq_length)
    # keep only enough words for full batches
    words = int_text[:n_batches * batch_size * seq_length]
    batches = []
    for i in range(n_batches):
        inputs, targets = [], []
        for j in range(batch_size):
            # element j of every batch walks through its own contiguous slice of the text
            idx = j * n_batches * seq_length + i * seq_length
            inputs.append(words[idx:idx + seq_length])
            target = list(words[idx + 1:idx + 1 + seq_length])
            if len(target) < seq_length:
                # the very last target wraps around to the first word
                target.append(words[0])
            targets.append(target)
        batches.append([inputs, targets])
    return np.array(batches)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
"""
# Number of Epochs
num_epochs = 50
# Batch Size
batch_size = 64
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 200
# Sequence Length
seq_length = int(np.ceil(np.average(word_count_sentence)))
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 20
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
InputTensor = loaded_graph.get_tensor_by_name("input:0")
InitialStateTensor = loaded_graph.get_tensor_by_name("initial_state:0")
FinalStateTensor = loaded_graph.get_tensor_by_name("final_state:0")
ProbsTensor = loaded_graph.get_tensor_by_name("probs:0")
return (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
pred_word = int_to_vocab[np.argmax(probabilities)]
return pred_word
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
mit-eicu/eicu-code | notebooks/demo/02-demographics-and-severity-of-illness.ipynb | mit | # Import libraries
import pandas as pd
import matplotlib.pyplot as plt
import sqlite3  # the demo database is an SQLite file, opened below with sqlite3.connect
import os
# Plot settings
%matplotlib inline
plt.style.use('ggplot')
fontsize = 20 # size for x and y ticks
plt.rcParams['legend.fontsize'] = fontsize
plt.rcParams.update({'font.size': fontsize})
# Connect to the database - which is assumed to be in the current directory
fn = 'eicu_demo.sqlite3'
con = sqlite3.connect(fn)
cur = con.cursor()
"""
Explanation: eICU Collaborative Research Database
Notebook 2: Demographics and severity of illness in a single patient
The aim of this notebook is to introduce high level admission details relating to a single patient stay, using the following tables:
patient
admissiondx
apacheapsvar
apachepredvar
apachepatientresult
Before starting, you will need to copy the eicu demo database file ('eicu_demo.sqlite3') to the data directory.
Documentation on the eICU Collaborative Research Database can be found at: http://eicu-crd.mit.edu/.
1. Getting set up
End of explanation
"""
query = \
"""
SELECT type, name
FROM sqlite_master
WHERE type='table'
ORDER BY name;
"""
list_of_tables = pd.read_sql_query(query,con)
list_of_tables
"""
Explanation: 2. Display a list of tables
End of explanation
"""
# select a single ICU stay
patientunitstayid = 141296
# query to load data from the patient table
query = \
"""
SELECT *
FROM patient
WHERE patientunitstayid = {}
""".format(patientunitstayid)
print(query)
# run the query and assign the output to a variable
patient = pd.read_sql_query(query,con)
patient.head()
# display a complete list of columns
patient.columns
# select a limited number of columns to view
columns = ['uniquepid','patientunitstayid','gender','age','unitdischargestatus']
patient[columns]
"""
Explanation: 3. Selecting a single patient stay
3.1. The patient table
The patient table includes general information about the patient admissions (for example, demographics, admission and discharge details). See: http://eicu-crd.mit.edu/eicutables/patient/
Questions
Use your knowledge from the previous notebook and the online documentation (http://eicu-crd.mit.edu/) to answer the following questions:
Which column in the patient table is distinct for each stay in the ICU (similar to icustay_id in MIMIC-III)?
Which column is unique for each patient (similar to subject_id in MIMIC-III)?
End of explanation
"""
# query to load data from the patient table
query = \
"""
SELECT *
FROM admissiondx
WHERE patientunitstayid = {}
""".format(patientunitstayid)
print(query)
# run the query and assign the output to a variable
admissiondx = pd.read_sql_query(query,con)
admissiondx.head()
admissiondx.columns
"""
Explanation: Questions
What year was the patient admitted to the ICU? Which year was he or she discharged?
What was the status of the patient upon discharge from the unit?
3.2. The admissiondx table
The admissiondx table contains the primary diagnosis for admission to the ICU according to the APACHE scoring criteria. For more detail, see: http://eicu-crd.mit.edu/eicutables/admissiondx/
End of explanation
"""
# query to load data from the patient table
query = \
"""
SELECT *
FROM apacheapsvar
WHERE patientunitstayid = {}
""".format(patientunitstayid)
print(query)
# run the query and assign the output to a variable
apacheapsvar = pd.read_sql_query(query,con)
apacheapsvar.head()
apacheapsvar.columns
"""
Explanation: Questions
What was the primary reason for admission?
How soon after admission to the ICU was the diagnosis recorded in eCareManager?
3.3. The apacheapsvar table
The apacheapsvar table contains the variables used to calculate the Acute Physiology Score (APS) III for patients. APS-III is an established method of summarizing patient severity of illness on admission to the ICU.
The score is part of the Acute Physiology Age Chronic Health Evaluation (APACHE) system of equations for predicting outcomes for ICU patients. See: http://eicu-crd.mit.edu/eicutables/apacheApsVar/
End of explanation
"""
# query to load data from the patient table
query = \
"""
SELECT *
FROM apachepredvar
WHERE patientunitstayid = {}
""".format(patientunitstayid)
print(query)
# run the query and assign the output to a variable
apachepredvar = pd.read_sql_query(query,con)
apachepredvar.head()
apachepredvar.columns
apachepredvar.ventday1
"""
Explanation: Questions
What was the 'worst' heart rate recorded for the patient during the scoring period?
Was the patient oriented and able to converse normally on the day of admission? (hint: the verbal element refers to the Glasgow Coma Scale).
3.4. The apachepredvar table
The apachepredvar table provides variables underlying the APACHE predictions. Acute Physiology Age Chronic Health Evaluation (APACHE) consists of a groups of equations used for predicting outcomes in critically ill patients. See: http://eicu-crd.mit.edu/eicutables/apachePredVar/
End of explanation
"""
# query to load data from the patient table
query = \
"""
SELECT *
FROM apachepatientresult
WHERE patientunitstayid = {}
""".format(patientunitstayid)
print(query)
# run the query and assign the output to a variable
apachepatientresult = pd.read_sql_query(query,con)
apachepatientresult.head()
apachepatientresult.columns
"""
Explanation: Questions
Was the patient ventilated during (APACHE) day 1 of their stay?
Did the patient have diabetes?
3.5. The apachepatientresult table
The apachepatientresult table provides predictions made by the APACHE score (versions IV and IVa), including probability of mortality, length of stay, and ventilation days. See: http://eicu-crd.mit.edu/eicutables/apachePatientResult/
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/td1a/td1a_cenonce_session4.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 1A.2 - Modules, files, regular expressions
The Python language is defined by a set of rules, a grammar. On its own, it is only good for computations. Modules are collections of features for interacting with sensors or screens, or for performing faster or more complex computations.
End of explanation
"""
mat = [[1.0, 0.0],[0.0,1.0] ]      # matrix as a list of lists
with open ("mat.txt", "w") as f :  # create a file in write mode
    for i in range (0,len (mat)) :
        for j in range (0, len (mat [i])) :
            s = str (mat [i][j])   # conversion to a character string
            f.write (s + "\t")
        f.write ("\n")
# check that the file exists:
import os
print([ _ for _ in os.listdir(".") if "mat" in _ ] )
# the previous line uses the symbol _ : it is a variable
# the character _ is a letter like any other
# one could also write:
# print([ fichier for fichier in os.listdir(".") if "mat" in fichier ] )
# this convention indicates that the variable is not meant to be kept
"""
Explanation: Files
Files serve two main purposes:
recovering data from one run of the program to the next (when the program stops, all variables are lost)
exchanging information with other programs (Excel, for example).
The most commonly used format is the flat text file: txt, csv, tsv. It contains information structured as a matrix, in rows and columns, since that is how numerical data is most often presented. A file is a long sequence of characters, so a convention is needed to indicate that two groups of characters do not belong to the same column or the same row. The most widespread convention is:
\t : column separator
\n : row separator
The character \ tells the Python language that the following character is part of a code. You will find the list of codes here: String and Bytes literals.
Aside: today, reading and writing files is so common that tools exist to do it in a wide variety of formats. You will discover them in session 10. It is nevertheless useful to do it yourself at least once, to understand the logic behind those tools and to avoid being stuck in unforeseen cases.
Reading and writing files is much slower than working with variables. Writing means saving the data to the hard drive: it moves from the program to the disk (and becomes permanent). It makes the reverse trip when reading.
Writing
It is important to remember that a text file can only receive character strings.
End of explanation
"""
mat = [[1.0, 0.0],[0.0,1.0] ]      # matrix as a list of lists
with open ("mat.txt", "w") as f :  # create a file
    s = '\n'.join ( '\t'.join( str(x) for x in row ) for row in mat )
    f.write ( s )
# check that the file exists:
print([ _ for _ in os.listdir(".") if "mat" in _ ] )
"""
Explanation: The same program, written in a more condensed style:
End of explanation
"""
import pyensae
%load_ext pyensae
%head mat.txt
"""
Explanation: Let's look at the first lines of the file mat.txt:
End of explanation
"""
with open ("mat.txt", "r") as f : # ouverture d'un fichier
mat = [ row.strip(' \n').split('\t') for row in f.readlines() ]
print(mat)
"""
Explanation: Reading
End of explanation
"""
with open ("mat.txt", "r") as f : # ouverture d'un fichier
mat = [ [ float(x) for x in row.strip(' \n').split('\t') ] for row in f.readlines() ]
print(mat)
"""
Explanation: We get the same information back, except that we must not forget to convert the initial values to float.
End of explanation
"""
import os
for f in os.listdir('.'):
print (f)
"""
Explanation: That's better. The os.path module offers various functions for manipulating file names. The os module offers various functions for manipulating files:
End of explanation
"""
with open("exemple_fichier.txt", "w") as f:
f.write("something")
f = open("exemple_fichier.txt", "w")
f.write("something")
f.close()
"""
Explanation: with
In practical terms, the with statement saves one instruction: close. The two following pieces of code are equivalent:
End of explanation
"""
import math
print (math.cos(1))
from math import cos
print (cos(1))
from math import * # this syntax is discouraged because a function might
print (cos(1))     # have the same name as one of yours
"""
Explanation: The close instruction closes the file. While open, the file is reserved by the Python program: no other application can write to it. After the close instruction, another application can delete or modify it. With the with keyword, the close method is called implicitly.
What is this for?
One rarely writes a text file by hand. This format is the only one recognized by all applications. Every piece of software, every language offers features to export data in a text format. In some circumstances the standard tools do not work - data volumes too large, encoding problems, unexpected characters -. You then have to manage on your own.
Exercise 1: Excel $\rightarrow$ Python $\rightarrow$ Excel
Download the file seance4_excel.xlsx, which contains a table with three columns. You must:
save the file in text format,
read it with python
build a 3x3 square matrix where each value is in its cell (X,Y),
save the result in text format,
load it back into Excel.
Other file formats
Text files are the simplest to manipulate, but other classic formats exist:
html : web pages
xml : structured data
[zip](http://fr.wikipedia.org/wiki/ZIP_(format_de_fichier)), gz : compressed data
wav, mp3, ogg : music
mp4, Vorbis : video
...
Modules
Modules are extensions of the language. Python can do almost nothing by itself, but it benefits from numerous extensions. A distinction is often made between extensions shipped with the language (the math module) and external extensions you must install yourself (numpy). Two links:
official modules
external modules
The first reflex is always to check whether a module could be useful before starting to program. To use a function from a module, use one of the following syntaxes:
End of explanation
"""
# file monmodule.py
import math
def fonction_cos_sequence(seq) :
    return [ math.cos(x) for x in seq ]
if __name__ == "__main__" :
    print ("this message only appears if this program is the entry point")
"""
Explanation: Exercise 2: find a module (1)
Go to the official modules page (or use a search engine) to find a module for generating random numbers. Create a list of random numbers drawn from a uniform distribution, then apply a random permutation to that sequence.
Exercise 3: find a module (2)
Find a module that lets you compute the difference between two dates, then determine the day of the week on which you were born.
Modules you create yourself
It is possible to split a program across several files. For example, a first file monmodule.py that contains a function:
End of explanation
"""
code = """
# -*- coding: utf-8 -*-
import math
def fonction_cos_sequence(seq) :
return [ math.cos(x) for x in seq ]
if __name__ == "__main__" :
print ("ce message n'apparaît que si ce programme est le point d'entrée")
"""
with open("monmodule.py", "w", encoding="utf8") as f :
f.write(code)
"""
Explanation: The previous cell saves its own content into a file named monmodule.py, so that it can be imported.
End of explanation
"""
import monmodule
print ( monmodule.fonction_cos_sequence ( [ 1, 2, 3 ] ) )
"""
Explanation: The second file:
End of explanation
"""
import sys
list(sorted(sys.modules))[:10]
"""
Explanation: Note: if the file monmodule.py is modified, Python does not automatically reload the module if it has already been loaded. The list of modules currently in memory can be seen in the variable sys.modules:
End of explanation
"""
import pyensae.datasource
discours = pyensae.datasource.download_data('voeux.zip', website = 'xd')
"""
Explanation: To remove the module from memory, delete it from sys.modules with the statement del sys.modules['monmodule']. Python will then treat the module monmodule.py as new and import it again.
Exercise 4: your own module
What happens if you replace if __name__ == "__main__": with if True:, which amounts to removing the line if __name__ == "__main__":?
Regular expressions
For the rest of the session, the following instructions are used as a preamble:
End of explanation
"""
import re  # regular expressions are available through the re module
expression = re.compile("[0-9]{2}/[0-9]{2}/[0-9]{4}")
texte = """I was born on 28/12/1903 and died on 08/02/1957. My second wife died on 10/11/63."""
cherche = expression.findall(texte)
print(cherche)
"""
Explanation: The documentation for regular expressions is here: regular expressions. They let you search for patterns in a text:
2 digits / 2 digits / 4 digits matches the date pattern; as a regular expression it is written: [0-9]{2}/[0-9]{2}/[0-9]{4} (note that the two-digit year in 10/11/63 will therefore not be matched)
the letter a repeated between 2 and 10 times is another pattern; it is written: a{2,10}.
End of explanation
"""
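The second pattern mentioned above, a{2,10}, can be checked the same way:

```python
import re

# The letter 'a' repeated between 2 and 10 times
expression = re.compile("a{2,10}")
print(expression.findall("baaac aaaa a"))  # ['aaa', 'aaaa']; a single 'a' does not match
```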
|
DS-100/sp17-materials | sp17/labs/lab06/lab06_solution.ipynb | gpl-3.0 | !pip install ipython-sql
%load_ext sql
%sql sqlite:///./lab06.sqlite
import sqlalchemy
engine = sqlalchemy.create_engine("sqlite:///lab06.sqlite")
connection = engine.connect()
!pip install -U okpy
from client.api.notebook import Notebook
ok = Notebook('lab06.ok')
"""
Explanation: Lab 6: SQL
End of explanation
"""
%%sql
DROP TABLE IF EXISTS users;
DROP TABLE IF EXISTS follows;
CREATE TABLE users (
USERID INT NOT NULL,
NAME VARCHAR (256) NOT NULL,
YEAR FLOAT NOT NULL,
PRIMARY KEY (USERID)
);
CREATE TABLE follows (
USERID INT NOT NULL,
FOLLOWID INT NOT NULL,
PRIMARY KEY (USERID, FOLLOWID)
);
%%capture
count = 0
users = ["Ian", "Daniel", "Sarah", "Kelly", "Sam", "Alison", "Henry", "Joey", "Mark", "Joyce", "Natalie", "John"]
years = [1, 3, 4, 3, 4, 2, 5, 2, 1, 3, 4, 2]
for username, year in zip(users, years):
count += 1
%sql INSERT INTO users VALUES ($count, '$username', $year);
%%capture
follows = [0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1,
0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1,
0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1,
1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1,
0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0,
0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1,
1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0,
1, 1, 0, 1]
for i in range(12):
for j in range(12):
if i != j and follows[i + j*12]:
%sql INSERT INTO follows VALUES ($i+1, $j+1);
"""
Explanation: Rapidgram
The date: March, 2017. All of the students at Berkeley are obsessed with the hot new social networking app, Rapidgram, where users can share text and image posts. You've been hired as Rapidgram's very first Data Scientist, in charge of analyzing their petabyte-scale user data, in order to sell it to credit card companies (I mean, they had to monetize somehow). But before you get into that, you need to learn more about their database schema.
First, run the next few cells to generate a snapshot of their data. It will be saved locally as the file lab06.sqlite.
End of explanation
"""
q1 = """
...
"""
%sql $q1
#SOLUTION
q1 = """
SELECT COUNT(*) FROM follows, users
WHERE users.name="Joey"
AND (users.userid=follows.followid)
"""
%sql $q1
q1_answer = connection.execute(q1).fetchall()
_ = ok.grade('q1')
_ = ok.backup()
"""
Explanation: Question 1: Joey's Followers
How many people follow Joey?
End of explanation
"""
q2 = """
...
"""
%sql $q2
#SOLUTION
q2 = """
SELECT COUNT(*) FROM follows, users
WHERE users.name="Joey"
AND (users.userid=follows.userid)
"""
%sql $q2
q2_answer = connection.execute(q2).fetchall()
_ = ok.grade('q2')
_ = ok.backup()
"""
Explanation: Question 2: I Ain't no Followback Girl
How many people does Joey follow?
End of explanation
"""
q3 = """
...
"""
%sql $q3
#SOLUTION
q3 = """
SELECT u1.name
FROM follows, users as u1, users as u2
WHERE follows.userid=u1.userid
AND follows.followid=u2.userid
AND u2.name="Joey"
"""
%sql $q3
q3_answer = connection.execute(q3).fetchall()
_ = ok.grade('q3')
_ = ok.backup()
"""
Explanation: Question 3: Know your Audience
What are the names of Joey's followers?
End of explanation
"""
q4 = """
...
"""
%sql $q4
#SOLUTION
q4 = """
SELECT name, COUNT(*) as friends
FROM follows, users
WHERE follows.followid=users.userid
GROUP BY name
ORDER BY friends DESC
LIMIT 5
"""
%sql $q4
q4_answer = connection.execute(q4).fetchall()
_ = ok.grade('q4')
_ = ok.backup()
"""
Explanation: Question 4: Popularity Contest
How many followers does each user have? You'll need to use GROUP BY to solve this. List only the top 5 users by number of followers.
End of explanation
"""
q5a = """
SELECT u1.name as follower, u2.name as followee
FROM follows, users as u1, users as u2
WHERE follows.userid=u1.userid
AND follows.followid=u2.userid
AND RANDOM() < 0.33
"""
"""
Explanation: Question 5: Randomness
Rapidgram wants to get a random sample of their userbase. Specifically, they want to look at exactly one-third of the follow-relations in their data. A Rapidgram engineer suggests the following SQL query:
End of explanation
"""
q5b = """
...
"""
%sql $q5b
#SOLUTION
q5b = """
SELECT u1.name as follower, u2.name as followee
FROM follows, users as u1, users as u2
WHERE follows.userid=u1.userid
AND follows.followid=u2.userid
ORDER BY RANDOM() LIMIT 72*1/3
"""
%sql $q5b
q5_answers = [connection.execute(q5b).fetchall() for _ in range(100)]
_ = ok.grade('q5')
_ = ok.backup()
"""
Explanation: Do you think this query will work as intended? Why or why not? Try designing a better query below:
End of explanation
"""
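A hint for the question above: in SQLite, RANDOM() returns a signed 64-bit integer rather than a float in [0, 1), so RANDOM() < 0.33 keeps roughly half of the rows, not a third. A standalone sketch demonstrating this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INT)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10000)])

# RANDOM() is uniform over signed 64-bit integers, so it is negative about half the time
kept = conn.execute("SELECT COUNT(*) FROM t WHERE RANDOM() < 0.33").fetchone()[0]
print(kept / 10000)  # close to 0.5, not 0.33
```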
q6a = """
WITH RECURSIVE generate_series(value) AS (
SELECT 0
UNION ALL
SELECT value+1 FROM generate_series
WHERE value+1<=10
)
SELECT value
FROM generate_series
"""
%sql $q6a
"""
Explanation: Question 6: More Randomness
Rapidgram leadership wants to give more priority to more experienced users, so they decide to weight a survey of users towards students who have spent a greater number of years at Berkeley. They want to take a sample of 10 students, weighted such that a student's chance of being in the sample is proportional to their number of years spent at Berkeley - for instance, a student with 6 years has three times the chance of a student with 2 years, who has twice the chance of a student with only one year.
To take this sample, they've provided you with a helpful temporary view. You can run the cell below to see its functionality.
End of explanation
"""
q6b = """
WITH RECURSIVE generate_series(value) AS (
SELECT 0
UNION ALL
SELECT value+1 FROM generate_series
WHERE value+1<=12
)
SELECT name
FROM ...
WHERE ...
ORDER BY ...
LIMIT 10
"""
%sql $q6b
#SOLUTION
q6b = """
WITH RECURSIVE generate_series(value) AS (
SELECT 0
UNION ALL
SELECT value+1 FROM generate_series
WHERE value+1<=12
)
SELECT name
FROM generate_series, users
WHERE value < year
ORDER BY RANDOM()
LIMIT 10
"""
%sql $q6b
q6_answers = [connection.execute(q6b).fetchall() for _ in range(100)]
_ = ok.grade('q6')
_ = ok.backup()
"""
Explanation: Using the generate_series view, get a sample of ten students, weighted in this manner.
End of explanation
"""
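The generate_series join above effectively duplicates each user once per year before sampling uniformly. The same trick in plain Python, with a few made-up users:

```python
import random

# Hypothetical (name, years-at-Berkeley) pairs
users = [("Ian", 1), ("Daniel", 3), ("Sarah", 4), ("Kelly", 3)]

# Duplicate each name once per year, then sample uniformly from the pool:
# a 4-year student is now four times as likely to be drawn as a 1-year student.
pool = [name for name, year in users for _ in range(year)]
print(random.sample(pool, 3))
```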
q7 = """
SELECT name FROM (
SELECT ...
)
WHERE year > avg_follower_years
"""
%sql $q7
#SOLUTION
q7 = """
SELECT name FROM
(SELECT u1.name, u1.year, AVG(u2.year) as avg_follower_years
FROM follows, users as u1, users as u2
WHERE follows.userid=u1.userid
AND follows.followid=u2.userid
GROUP BY u1.name)
WHERE year > avg_follower_years
"""
%sql $q7
q7_answer = connection.execute(q7).fetchall()
_ = ok.grade('q7')
_ = ok.backup()
_ = ok.grade_all()
_ = ok.submit()
"""
Explanation: Question 7: Older and Wiser (challenge)
List every person who has been at Berkeley longer - that is, their year is greater - than their average follower.
End of explanation
"""
|
dshean/iceflow | Iceflow visualization.ipynb | mit | %matplotlib inline
import os
import matplotlib.pyplot as plt
# The statement below sets up a plotting default
# style that's better than the default from matplotlib
#import seaborn as sns
plt.style.use('bmh')
from shapely.geometry import Point
#import pandas as pd
import geopandas as gpd
from geopandas import GeoSeries, GeoDataFrame
"""
Explanation: Visualizing output from the Mass Balance workflow
This notebook is designed to work with output from the Mass Balance workflow [iceflow] developed during Geohackweek2016 at the University of Washington (https://github.com/dshean/iceflow).
1. Viewing the specific mass balance of glacier polygons
Set up the environment
This notebook requires the following packages:
matplotlib
shapely
geopandas
End of explanation
"""
file_pth = 'rgi_centralasia/13_rgi32_CentralAsia.shp'
rgi_glac = gpd.read_file(file_pth)
timeframe='[time between DEMs]'
rgi_glac.head()
"""
Explanation: Set file names and directories
End of explanation
"""
# test data set-up
gdf = rgi_glac
gdf.plot()
# test data set-up
import random
my_randoms = random.sample(range(-50, 50), 15)  # range, not the Python 2-only xrange
gdf["spec"]= my_randoms
gdf.to_file("rgi_test.shp")
f, ax = plt.subplots(1, figsize=(6, 4))
rgi_glac.plot(column='spec', scheme='fisher_jenks', k=7,
alpha=0.9, cmap=plt.cm.Blues, legend=True, ax=ax)
plt.axis('equal')
ax.set_title('Specific Mass Balance'+timeframe)
"""
Explanation: Plot the glacier outlines based on their specific mass balance
End of explanation
"""
|
snegirigens/DLND | embeddings/Skip-Gram_word2vec.ipynb | mit | import time
import numpy as np
import tensorflow as tf
import utils
"""
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
"""
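The lookup shortcut described above is easy to verify with a tiny example: multiplying a one-hot vector by the weight matrix returns exactly the matching row, so indexing the matrix directly gives the same hidden-layer values.

```python
import numpy as np

n_words, n_hidden = 5, 3
weights = np.random.rand(n_words, n_hidden)

# One-hot encode word index 2 and push it through the "hidden layer"
one_hot = np.zeros(n_words)
one_hot[2] = 1
matmul_result = one_hot @ weights

# The embedding lookup is just row indexing: same values, no multiplication
print(np.allclose(matmul_result, weights[2]))  # True
```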
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
"""
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
"""
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
"""
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
"""
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
"""
Explanation: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
"""
## Your code here
from collections import Counter
import random
word_counts = Counter(int_words)
t = 1e-5
total_words = len(int_words)
frequency = { word : float(count) / total_words for word, count in word_counts.items() }
p_drop = {word : 1 - np.sqrt(float(t)/frequency[word]) for word in word_counts }
train_words = [w for w in int_words if p_drop[w] < random.random()] # The final subsampled word list
#print (len(train_words))
#print(train_words[:30])
"""
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
"""
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# Your code here
R = random.randint(1, window_size)  # randint is inclusive of both endpoints, so R is in [1, window_size]
start = idx - R if idx >= R else 0
end = idx + R + 1
return list (set(words[start:idx] + words[idx+1:end]))
"""
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
"""
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
"""
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function, by the way, which helps save memory.
End of explanation
"""
train_graph = tf.Graph()
with train_graph.as_default():
# with tf.name_scope('input'):
inputs = tf.placeholder (tf.int32, shape=[None], name='inputs')
# with tf.name_scope('targets'):
labels = tf.placeholder (tf.int32, shape=[None,None], name='labels')
"""
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
"""
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
# with tf.name_scope('embeddings'):
embedding = tf.Variable (tf.random_uniform ([n_vocab, n_embedding], -1.0, 1.0, dtype=tf.float32), name='embedding') # create embedding weight matrix here
embed = tf.nn.embedding_lookup (embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output
tf.summary.histogram ('embedding', embedding)
"""
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
"""
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable (tf.truncated_normal ([n_vocab, n_embedding], stddev=0.1, dtype=tf.float32), name='softmax_w') # create softmax weight matrix here
softmax_b = tf.Variable (tf.zeros (n_vocab, dtype=tf.float32), name='softmax_b') # create softmax biases here
tf.summary.histogram ('softmax_w', softmax_w)
tf.summary.histogram ('softmax_b', softmax_b)
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss (softmax_w, softmax_b, labels, embed, n_sampled, n_vocab, name='loss')
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
tf.summary.scalar ('cost', cost)
"""
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
"""
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir -p checkpoints
"""
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
"""
epochs = 1
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
merged = tf.summary.merge_all()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
train_writer = tf.summary.FileWriter ("./logs/2/train", sess.graph)
test_writer = tf.summary.FileWriter ("./logs/2/test")
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
summary, train_loss, _ = sess.run([merged, cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
train_writer.add_summary (summary, iteration)
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
"""
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
"""
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
"""
Explanation: Restore the trained network if you need to:
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
"""
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation
"""
|
hadim/public_notebooks | Analysis/MSD_Bayes/notebook.ipynb | mit | %matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
from scipy import io
from scipy import optimize
import pymc3 as pm
import theano
import theano.tensor as t
import matplotlib.pyplot as plt
"""
Explanation: Classify particle motion from MSD analysis and Bayesian inference (under construction)
This analysis is largely inspired by the following paper: Monnier, N. (2012). Bayesian Approach to MSD-Based Analysis of Particle Motion in Live Cells. Biophysical Journal.
The idea is to classify particle motion into different biophysical models: diffusion, confined motion, directed motion, and so forth.
The input of the analysis is the MSD curves of several particles (under the same condition) and the output is a set of probabilities for the different models.
For more details, the paper is available here: http://www.cell.com/biophysj/abstract/S0006-3495(12)00718-7
For a complete introduction to Bayesian statistics, I strongly encourage you to read this excellent book: Bayesian Methods for Hackers.
Monnier, N. (2012). Bayesian Approach to MSD-Based Analysis of Particle Motion in Live Cells. Biophysical Journal
End of explanation
"""
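As a placeholder for the theory section above, the core idea is standard Bayesian model comparison: each candidate model $M_k$ (with parameters $\theta_k$) is assigned a probability given the observed MSD data $D$. This is the generic form of Bayes' rule, not the paper's full derivation:

```latex
P(M_k \mid D) = \frac{P(D \mid M_k)\, P(M_k)}{\sum_j P(D \mid M_j)\, P(M_j)},
\qquad
P(D \mid M_k) = \int P(D \mid \theta_k, M_k)\, P(\theta_k \mid M_k)\, \mathrm{d}\theta_k
```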
# Chromosomes traj
mat = io.loadmat('chromosomes.mat')
msds = mat['MSD_curves_chromosomes']
msds = pd.DataFrame(msds)
msds["delay"] = mat['timelags'].T[0]
msds.set_index("delay", drop=True, inplace=True)
msds.head()
"""
Explanation: Load chromosomes MSD curves
Corresponds to Fig. 4 A-C in the paper.
Monnier, N. (2012). Bayesian Approach to MSD-Based Analysis of Particle Motion in Live Cells. Biophysical Journal
End of explanation
"""
fig, ax = plt.subplots(figsize=(10, 8))
msds.plot(ax=ax, legend=False)
ax.set_xlabel('Delay (s)')
ax.set_ylabel('MSD ($\mu m^2.s^{-1}$)')
"""
Explanation: Display all the MSD curves
End of explanation
"""
msd_mean = msds.mean(axis=1)
msd_std = msds.std(axis=1)
msd_sem = msds.sem(axis=1)
fig, ax = plt.subplots(figsize=(10, 8))
msd_mean.plot(ax=ax, lw=2)
# std
ax.fill_between(msd_mean.index, msd_mean, msd_mean + msd_std, alpha=0.1)
ax.fill_between(msd_mean.index, msd_mean, msd_mean - msd_std, alpha=0.1)
# sem
ax.fill_between(msd_mean.index, msd_mean, msd_mean + msd_sem, alpha=0.2)
ax.fill_between(msd_mean.index, msd_mean, msd_mean - msd_sem, alpha=0.2)
ax.set_xlabel('Delay (s)')
ax.set_ylabel('MSD ($\mu m^2.s^{-1}$)')
"""
Explanation: Display the average MSD (with std and sem)
End of explanation
"""
# Get the average MSD
msd_mean = msds.mean(axis=1)
# Get difference between each individual curve and the mean curve
errors = msds.copy()
for i, col in msds.iteritems():
errors.loc[:, i] = col - msd_mean
# Calculate raw covariance matrix
error_cov_raw = np.cov(errors)
# Regularize covariance matrix (TODO)
error_cov = error_cov_raw.copy()
# Covariance of the mean curve: divide by the number of curves averaged (the columns)
error_cov_raw /= errors.shape[1]
error_cov /= errors.shape[1]
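The regularization step above is still a TODO. One common option (an assumption here, not necessarily what the original MATLAB code does) is to shrink the raw covariance toward its diagonal, which keeps the matrix well-conditioned and invertible:

```python
import numpy as np

def regularize_cov(cov, alpha=0.1):
    """Shrink a covariance matrix toward its diagonal.

    alpha=0 returns the raw matrix; alpha=1 keeps only the variances.
    """
    diag = np.diag(np.diag(cov))
    return (1 - alpha) * cov + alpha * diag

raw = np.array([[2.0, 1.9], [1.9, 2.0]])  # nearly singular
reg = regularize_cov(raw, alpha=0.1)
print(np.linalg.cond(raw), np.linalg.cond(reg))  # conditioning improves
```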
"""
Explanation: Naive implementation from Matlab code
Matlab code is available here : http://msd-bayes.org/
Covariance matrix
In msd_curves_bayes.m.
End of explanation
"""
plt.figure(figsize=(8, 8))
plt.imshow(error_cov)
"""
Explanation: Display the covariance matrix.
End of explanation
"""
# Purely diffusive model
def msd_model(tau, D_coeff):
return 6 * D_coeff * tau
msd_observed = msd_mean.copy()
tau = msd_mean.index
popt, pcov = optimize.curve_fit(msd_model, tau, msd_observed)
errors = np.sqrt(np.diag(pcov))
print("Estimate for D coeff is {:.2f} with variance = {:.5f}".format(popt[0], errors[0]))
# Constant model
def msd_model(tau, sigma_e):
return 6 * sigma_e ** 2
msd_observed = msd_mean.copy()
tau = msd_mean.index
popt, pcov = optimize.curve_fit(msd_model, tau, msd_observed)
errors = np.sqrt(np.diag(pcov))
print("Estimate for sigma_e is {:.2f} with variance = {:.5f}".format(popt[0], errors[0]))
# Diffusive + error model
def msd_model(tau, D_coeff, sigma_e):
return 6 * D_coeff * tau + 6 * sigma_e ** 2
msd_observed = msd_mean.copy()
tau = msd_mean.index
popt, pcov = optimize.curve_fit(msd_model, tau, msd_observed)
errors = np.sqrt(np.diag(pcov))
print("Estimate for D coeff is {:.2f} with variance = {:.5f}".format(popt[0], errors[0]))
print("Estimate for sigma_e is {:.2f} with variance = {:.5f}".format(popt[1], errors[1]))
"""
Explanation: Fitting
In msd_fitting.m.
Brownian diffusion
Fit the following equation: $MSD(\tau) = 6D\tau$
End of explanation
"""
# Purely diffusive model
msd_observed = msd_mean.copy()
with pm.Model() as model:
D_coeff = pm.Uniform("D_coeff", lower=0, upper=1000)
    tau = np.asarray(msd_mean.index)  # plain array: a pandas Index does not mix well with theano
msd_model = 6 * D_coeff * tau
observation = pm.Normal("observation", mu=msd_model, observed=msd_observed)
step = pm.NUTS()
    trace = pm.sample(1000, step)
print("\nEstimate for D coeff is {:.2f} with variance unknown".format(trace["D_coeff"][-1]))
pm.traceplot(trace)
# Constant model
msd_observed = msd_mean.copy()
with pm.Model() as model:
sigma_e = pm.Uniform("sigma_e", lower=0, upper=10)
    tau = np.asarray(msd_mean.index)  # plain array: a pandas Index does not mix well with theano
msd_model = 6 * sigma_e ** 2
observation = pm.Normal("observation", mu=msd_model, observed=msd_observed)
step = pm.NUTS()
trace = pm.sample(1000, step)
print("\nEstimate for sigma_e is {:.2f} with variance unknown".format(trace["sigma_e"][-1]))
pm.traceplot(trace)
# Diffusive + error model
msd_observed = msd_mean.copy()
with pm.Model() as model:
D_coeff = pm.Uniform("D_coeff", lower=0, upper=1)
sigma_e = pm.Uniform("sigma_e", lower=0, upper=10)
    tau = np.asarray(msd_mean.index)  # plain array: a pandas Index does not mix well with theano
msd_model = 6 * sigma_e ** 2 + 6 * D_coeff * tau
observation = pm.Normal("observation", mu=msd_model, observed=msd_observed)
step = pm.NUTS()
trace = pm.sample(1000, step)
print("\nEstimate for D coeff is {:.2f} with variance unknown".format(trace["D_coeff"][-1]))
print("Estimate for sigma_e is {:.2f} with variance unknown".format(trace["sigma_e"][-1]))
pm.traceplot(trace)
"""
Explanation: Implementation with PyMC3
See https://pymc-devs.github.io/pymc3/getting_started/#a-motivating-example-linear-regression for an introduction to PyMC3.
Brownian diffusion
Fit the following equation: $MSD(\tau) = 6D\tau$
End of explanation
"""
%matplotlib qt
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
from scipy import io
from scipy import optimize
import pymc3 as pm
import theano
import theano.tensor as t
import matplotlib.pyplot as plt
count_data = np.loadtxt("txtdata.csv")
plt.bar(np.arange(len(count_data)), count_data, color="#348ABD", edgecolor="none")
count_data = np.loadtxt("txtdata.csv")
alpha = 1.0 / count_data.mean() # Recall count_data is the variable that holds our txt counts
with pm.Model() as model:
lambda_1 = pm.Exponential("lambda_1", alpha)
lambda_2 = pm.Exponential("lambda_2", alpha)
    tau = pm.DiscreteUniform("tau", lower=0, upper=len(count_data) - 1)  # switchpoint must lie inside the data
days = np.arange(len(count_data))
lambda_ = pm.switch(tau >= days, lambda_1, lambda_2)
observation = pm.Poisson("observation", mu=lambda_, observed=count_data)
step = pm.Metropolis()
trace = pm.sample(1000, step)
print()
print("tau", trace['tau'][-1])
print("lambda_1", trace['lambda_1'][-1])
print("lambda_2", trace['lambda_2'][-1])
pm.traceplot(trace)
tau = trace['tau'][-1]
lambda_1 = trace['lambda_1'][-1]
lambda_2 = trace['lambda_2'][-1]
mcount = np.zeros(count_data.shape)
mcount[:tau] = lambda_1
mcount[tau:] = lambda_2
plt.figure(figsize=(10, 6))
plt.bar(np.arange(len(count_data)), count_data, color="#348ABD", edgecolor="none")
plt.plot(mcount, lw=4, color="#E24A33")
count_data = np.loadtxt("txtdata.csv")
@theano.compile.ops.as_op(itypes=[t.lscalar, t.lscalar, t.dscalar, t.dscalar, t.dscalar], otypes=[t.dvector])
def lambda_(tau1, tau2, lambda_1, lambda_2, lambda_3):
out = np.zeros(len(count_data))
    out[:tau1] = lambda_1      # rate before tau1
    out[tau1:tau2] = lambda_2  # rate between tau1 and tau2
    out[tau2:] = lambda_3      # rate from tau2 (inclusive) onwards
return out
alpha = 1.0 / count_data.mean() # Recall count_data is the variable that holds our txt counts
with pm.Model() as model:
lambda_1 = pm.Exponential("lambda_1", alpha)
lambda_2 = pm.Exponential("lambda_2", alpha)
lambda_3 = pm.Exponential("lambda_3", alpha)
tau1 = pm.DiscreteUniform("tau1", lower=0, upper=len(count_data))
tau2 = pm.DiscreteUniform("tau2", lower=0, upper=len(count_data))
observation = pm.Poisson("observation", mu=lambda_(tau1, tau2, lambda_1, lambda_2, lambda_3),
observed=count_data)
step = pm.Metropolis()
trace = pm.sample(500, step)
print()
print("tau1", trace['tau1'].mean())
print("tau2", trace['tau2'].mean())
print("lambda_1", trace['lambda_1'].mean())
print("lambda_2", trace['lambda_2'].mean())
print("lambda_3", trace['lambda_3'].mean())
pm.traceplot(trace)
tau1 = int(round(trace['tau1'].mean()))  # cast to int: floats cannot be used as slice bounds
tau2 = int(round(trace['tau2'].mean()))
lambda_1 = trace['lambda_1'].mean()
lambda_2 = trace['lambda_2'].mean()
lambda_3 = trace['lambda_3'].mean()
mcount = np.zeros(count_data.shape)
mcount[:tau1] = lambda_1
mcount[tau1:tau2] = lambda_2
mcount[tau2:] = lambda_3
plt.figure(figsize=(10, 6))
plt.bar(np.arange(len(count_data)), count_data, color="#348ABD", edgecolor="none")
plt.plot(mcount, lw=4, color="#E24A33")
"""
Explanation: WIP
IN CONSTRUCTION
based on http://nbviewer.ipython.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter1_Introduction/Chapter1.ipynb#Introducing-our-first-hammer:-PyMC
End of explanation
"""
|
cdawei/digbeta | dchen/music/aotm2011_subset.ipynb | gpl-3.0 | %matplotlib inline
%load_ext autoreload
%autoreload 2
import os, sys
import gzip
import pickle as pkl
import numpy as np
import pandas as pd
from scipy.optimize import check_grad
from scipy.sparse import lil_matrix, issparse
from collections import Counter
import matplotlib.pyplot as plt
import seaborn as sns
sys.path.append('src')
from models import BinaryRelevance
from models import PCMLC, objective
from tools import calc_precisionK, calc_rank, f1_score_nowarn, evalPred, calc_RPrecision_HitRate
data_dir = 'data/aotm2011'
fplaylist = os.path.join(data_dir, 'aotm2011-playlist.pkl.gz')
ffeature = 'data/msd/song2feature.pkl.gz'
fgenre = 'data/msd/song2genre.pkl.gz'
TOPs = [5, 10, 20, 30, 50, 100]#, 200, 300, 500#, 1000]
"""
Explanation: A representative subset of AotM-2011 Playlists with MSD Audio Features
End of explanation
"""
all_playlists = pkl.load(gzip.open(fplaylist, 'rb'))
all_users = sorted(set({user for _, user in all_playlists}))
print('#user :', len(all_users))
print('#playlist:', len(all_playlists))
pl_lengths = [len(pl) for pl, _ in all_playlists]
#plt.hist(pl_lengths, bins=100)
print('Average playlist length: %.1f' % np.mean(pl_lengths))
"""
Explanation: Load playlists
Load playlists.
End of explanation
"""
song2feature = pkl.load(gzip.open(ffeature, 'rb'))
#mean_age = (np.sum(song_ages) - np.sum(song_ages[missing_ix])) / (len(song_ages) - len(missing_ix))
#mean_age
"""
Explanation: Load song features
Load song_id --> feature array mapping: map a song to the audio features of one of its corresponding tracks in MSD.
End of explanation
"""
user_playlists = dict()
for pl, u in all_playlists:
try:
user_playlists[u].append(pl)
except KeyError:
user_playlists[u] = [pl]
u_npl = sorted([(u, len(user_playlists[u])) for u in all_users], key=lambda x: x[1])
#u_npl
step = 1000 # sample 0.1%
subset = [u_npl[ix] for ix in np.arange(0, len(u_npl), step)]
subset
len(subset)
uid_subset = [t[0] for t in subset]
#udf[uid_subset] # tuple are used as multiindex in pandas
#udf[[uid_subset]]
playlists_subset = [(pl, u) for u in uid_subset for pl in user_playlists[u]]
len(playlists_subset)
song_set = sorted([(sid, song2feature[sid][-1]) for sid in {sid for pl, _ in playlists_subset for sid in pl}],
key=lambda x: (x[1], x[0]))
print(len(song_set))
#song_set
"""
Explanation: Subset of data
Select users who own a suitable number of playlists, e.g. 50.
End of explanation
"""
dev_nsongs = int(len(song_set) * 0.2)
dev_song_set = song_set[:dev_nsongs]
train_song_set = song_set[dev_nsongs:]
print('#songs in training set:', len(train_song_set))
print('#songs in test set :', len(dev_song_set))
#dev_song_set
#train_song_set
song2index = {sid: ix for ix, (sid, _) in enumerate(song_set)}
song_pl_mat = lil_matrix((len(song_set), len(playlists_subset)), dtype=np.int8)
for j in range(len(playlists_subset)):
pl = playlists_subset[j][0]
ind = [song2index[sid] for sid in pl]
song_pl_mat[ind, j] = 1
song_pop = np.sum(song_pl_mat, axis=1)
song2pop = {sid: song_pop[song2index[sid], 0] for (sid, _) in song_set}
"""
Explanation: Split songs for setting I
Split songs (80/20 split): the most recently released songs (by year) go to the dev set ~~such that the distributions of song popularity (the number of occurrences in playlists) in the training and dev sets are similar~~.
End of explanation
"""
train_song_pop = [song2pop[sid] for (sid, _) in train_song_set]
ax = plt.subplot(111)
ax.hist(train_song_pop, bins=30)
ax.set_yscale('log')
ax.set_xlim(0, song_pop.max()+1)
print(len(train_song_set))
"""
Explanation: Histogram of song popularity in training set.
End of explanation
"""
dev_song_pop = [song2pop[sid] for (sid, _) in dev_song_set]
ax = plt.subplot(111)
ax.hist(dev_song_pop, bins=30)
ax.set_yscale('log')
ax.set_xlim(0, song_pop.max()+1)
print(len(dev_song_set))
"""
Explanation: Histogram of song popularity in dev set.
End of explanation
"""
train_playlists = []
dev_playlists = []
dev_ratio = 0.2
np.random.seed(987654321)
for u in uid_subset:
playlists_u = [(pl, u) for pl in user_playlists[u]]
if len(user_playlists[u]) < 5:
train_playlists += playlists_u
else:
npl_dev = int(dev_ratio * len(user_playlists[u]))
pl_indices = np.random.permutation(len(user_playlists[u]))
        # use the random permutation; slicing playlists_u directly would ignore it
        dev_playlists += [playlists_u[ix] for ix in pl_indices[:npl_dev]]
        train_playlists += [playlists_u[ix] for ix in pl_indices[npl_dev:]]
print(len(train_playlists), len(dev_playlists))
xmax = np.max([len(pl) for pl, _ in playlists_subset]) + 1
"""
Explanation: Split playlists
Split playlists (80/20 split) uniformly at random ~~such that the distributions of playlist length (the number of songs per playlist) for each user in the training and dev sets are similar~~.
End of explanation
"""
ax = plt.subplot(111)
ax.hist([len(pl) for pl, _ in train_playlists], bins=50)
ax.set_yscale('log')
ax.set_xlim(0, xmax)
print(len(train_playlists))
"""
Explanation: Histogram of playlist length in training set.
End of explanation
"""
ax = plt.subplot(111)
ax.hist([len(pl) for pl, _ in dev_playlists], bins=50)
ax.set_yscale('log')
ax.set_xlim(0, xmax)
print(len(dev_playlists))
"""
Explanation: Histogram of playlist length in training set.
End of explanation
"""
dev_playlists_obs = [pl[:-int(len(pl)/2)] for (pl, _) in dev_playlists]
dev_playlists_held = [pl[-int(len(pl)/2):] for (pl, _) in dev_playlists]
for i in range(len(dev_playlists)):
assert np.all(dev_playlists[i][0] == dev_playlists_obs[i] + dev_playlists_held[i])
print('obs: %d, held: %d' % (np.sum([len(ppl) for ppl in dev_playlists_obs]),
np.sum([len(ppl) for ppl in dev_playlists_held])))
"""
Explanation: Hold part of songs in the dev set of playlists
Hold the last half of songs for each playlist in dev set.
End of explanation
"""
song2genre = pkl.load(gzip.open(fgenre, 'rb'))
"""
Explanation: Load genres
End of explanation
"""
np.sum([sid in song2genre for sid, _ in song_set])
"""
Explanation: Check if all songs have genre info.
End of explanation
"""
def gen_dataset(playlists, song2feature, song2genre, train_song_set,
dev_song_set=[], test_song_set=[], song2pop_train=None):
"""
Create labelled dataset: rows are songs, columns are users.
Input:
- playlists: a set of playlists
- train_song_set: a list of songIDs in training set
- dev_song_set: a list of songIDs in dev set
- test_song_set: a list of songIDs in test set
- song2feature: dictionary that maps songIDs to features from MSD
- song2genre: dictionary that maps songIDs to genre
- song2pop_train: a dictionary that maps songIDs to its popularity
Output:
- (Feature, Label) pair (X, Y)
X: #songs by #features
Y: #songs by #users
"""
song_set = train_song_set + dev_song_set + test_song_set
N = len(song_set)
K = len(playlists)
genre_set = sorted({v for v in song2genre.values()})
genre2index = {genre: ix for ix, genre in enumerate(genre_set)}
def onehot_genre(songID):
"""
One-hot encoding of genres.
Data imputation:
- one extra entry for songs without genre info
- mean imputation
- sampling from the distribution of genre popularity
"""
num = len(genre_set) # + 1
        vec = np.zeros(num, dtype=float)  # np.float is removed in recent numpy
if songID in song2genre:
genre_ix = genre2index[song2genre[songID]]
vec[genre_ix] = 1
else:
vec[:] = np.nan
#vec[-1] = 1
return vec
#X = np.array([features_MSD[sid] for sid in song_set]) # without using genre
#Y = np.zeros((N, K), dtype=np.bool)
X = np.array([np.concatenate([song2feature[sid], onehot_genre(sid)], axis=-1) for sid in song_set])
    Y = lil_matrix((N, K), dtype=bool)  # np.bool is removed in recent numpy
song2index = {sid: ix for ix, sid in enumerate(song_set)}
for k in range(K):
pl = playlists[k]
indices = [song2index[sid] for sid in pl if sid in song2index]
Y[indices, k] = True
# genre imputation
genre_ix_start = -len(genre_set)
genre_nan = np.isnan(X[:, genre_ix_start:])
genre_mean = np.nansum(X[:, genre_ix_start:], axis=0) / (X.shape[0] - np.sum(genre_nan, axis=0))
#print(np.nansum(X[:, genre_ix_start:], axis=0))
#print(genre_set)
#print(genre_mean)
for j in range(len(genre_set)):
X[genre_nan[:,j], j+genre_ix_start] = genre_mean[j]
# normalise the sum of all genres per song to 1
# X[:, -len(genre_set):] /= X[:, -len(genre_set):].sum(axis=1).reshape(-1, 1)
# NOTE: this is not necessary, as the imputed values are guaranteed to be normalised (sum to 1)
# due to the above method to compute mean genres.
# the log of song popularity
if song2pop_train is not None:
# for sid in song_set:
# assert sid in song2pop_train # trust the input
logsongpop = np.log([song2pop_train[sid]+1 for sid in song_set]) # deal with 0 popularity
X = np.hstack([X, logsongpop.reshape(-1, 1)])
#return X, Y
Y = Y.tocsr()
train_ix = [song2index[sid] for sid in train_song_set]
X_train = X[train_ix, :]
Y_train = Y[train_ix, :]
dev_ix = [song2index[sid] for sid in dev_song_set]
X_dev = X[dev_ix, :]
Y_dev = Y[dev_ix, :]
test_ix = [song2index[sid] for sid in test_song_set]
X_test = X[test_ix, :]
Y_test = Y[test_ix, :]
if len(dev_song_set) > 0:
if len(test_song_set) > 0:
return X_train, Y_train, X_dev, Y_dev, X_test, Y_test
else:
return X_train, Y_train, X_dev, Y_dev
else:
if len(test_song_set) > 0:
return X_train, Y_train, X_test, Y_test
else:
return X_train, Y_train
def mean_normalised_reciprocal_rank(Y_true, Y_pred):
"""
Compute the mean of normalised reciprocal rank (reciprocal rank are normalised by the best possible ranks)
"""
normalised_reci_rank = []
npos = np.sum(Y_true, axis=0)
for k in range(Y_true.shape[1]):
ranks = calc_rank(Y_pred[:, k])[Y_true[:, k]]
if len(ranks) > 0:
ideal = np.sum([1./nk for nk in range(1, npos[k]+1)])
real = np.sum([1./r for r in ranks])
normalised_reci_rank.append(real / ideal) # normalise the reciprocal ranks by the best possible ranks
return np.mean(normalised_reci_rank)
def calc_RP_HR(Y_true, Y_pred, useLoop=False, tops=[]):
rps = []
hitrates = {top: [] for top in tops} if len(tops) > 0 else None
if useLoop is True:
assert type(Y_true) == type(Y_pred) == list
assert len(Y_true) == len(Y_pred)
for j in range(len(Y_true)):
y_true = np.asarray(Y_true[j]).reshape(-1)
y_pred = np.asarray(Y_pred[j]).reshape(-1)
npos = y_true.sum()
if npos > 0:
rp, hr_dict = calc_RPrecision_HitRate(y_true, y_pred, tops=tops)
rps.append(rp)
if len(tops) > 0:
for top in tops:
hitrates[top].append(hr_dict[top])
else:
assert Y_true.shape == Y_pred.shape
for j in range(Y_true.shape[1]):
y_true = Y_true[:, j]
if issparse(y_true):
y_true = y_true.toarray().reshape(-1)
else:
y_true = y_true.reshape(-1)
y_pred = Y_pred[:, j].reshape(-1)
npos = y_true.sum()
if npos > 0:
rp, hr_dict = calc_RPrecision_HitRate(y_true, y_pred, tops=tops)
rps.append(rp)
if len(tops) > 0:
for top in tops:
hitrates[top].append(hr_dict[top])
return rps, hitrates
"""
Explanation: Create song-playlist matrix
Songs as rows, playlists as columns.
End of explanation
"""
user_of_playlists = [u for (_, u) in train_playlists + dev_playlists]
clique_list = []
for u in sorted(set(user_of_playlists)):
    clique = np.where(u == np.array(user_of_playlists, dtype=object))[0]
if len(clique) > 1:
clique_list.append(clique)
"""
Explanation: Build the adjacency matrix of playlists (nodes): playlists of the same user form a clique.
End of explanation
"""
X_train, Y_train, X_dev, Y_dev = gen_dataset(playlists = [pl for pl, _ in playlists_subset],
song2feature = song2feature, song2genre = song2genre,
train_song_set = [sid for sid, _ in train_song_set],
dev_song_set = [sid for sid, _ in dev_song_set])
"""
Explanation: Setting I: hold a subset of songs, use all playlists
End of explanation
"""
X_train_mean = np.mean(X_train, axis=0).reshape((1, -1))
X_train_std = np.std(X_train, axis=0).reshape((1, -1)) + 10 ** (-6)
X_train -= X_train_mean
X_train /= X_train_std
X_dev -= X_train_mean
X_dev /= X_train_std
print('Train: %15s %15s' % (X_train.shape, Y_train.shape))
print('Dev : %15s %15s' % (X_dev.shape, Y_dev.shape))
print(np.mean(np.mean(X_train, axis=0)))
print(np.mean( np.std(X_train, axis=0)) - 1)
print(np.mean(np.mean(X_dev, axis=0)))
print(np.mean( np.std(X_dev, axis=0)) - 1)
"""
Explanation: Feature normalisation.
End of explanation
"""
%%script false
w0 = 0.001 * np.random.randn(Y_dev.shape[1] * X_dev.shape[1] + 1)
dw = np.zeros_like(w0)
cliques=clique_list
cliques=None
bs=100
loss='example'
check_grad(\
lambda w: objective(w, dw, X_dev, Y_dev, C1=2, C2=3, C3=5, p=3, cliques=cliques, loss_type=loss, batch_size=bs),
lambda w: dw, w0)
"""
Explanation: Check gradient.
End of explanation
"""
br1 = BinaryRelevance(C=1, n_jobs=4)
br1.fit(X_train, Y_train)
"""
Explanation: M1. BR - Independent logistic regression
End of explanation
"""
print('Dev set:')
rps_br1, hr_br1 = calc_RP_HR(Y_dev, br1.predict(X_dev), tops=TOPs)
print('R-Precision:', np.mean(rps_br1))
for top in TOPs:
print(top, np.mean(hr_br1[top]))
print('Training set:')
rps_brtrn, hr_brtrn = calc_RP_HR(Y_train, br1.predict(X_train), tops=TOPs)
print('R-Precision:', np.mean(rps_brtrn))
for top in TOPs:
print(top, np.mean(hr_brtrn[top]))
"""
Explanation: Evaluation: normalise per playlist.
End of explanation
"""
mlr = PCMLC(C1=1, C2=1, C3=1, loss_type='example')
mlr.fit(X_train, Y_train, user_playlist_indices=clique_list)
print('Dev set:')
rps_mlr, hr_mlr = calc_RP_HR(Y_dev, mlr.predict(X_dev), tops=TOPs)
print('R-Precision:', np.mean(rps_mlr))
for top in TOPs:
print(top, np.mean(hr_mlr[top]))
print('Training set:')
rps_mlrtrn, hr_mlrtrn = calc_RP_HR(Y_train, mlr.predict(X_train), tops=TOPs)
print('R-Precision:', np.mean(rps_mlrtrn))
for top in TOPs:
print(top, np.mean(hr_mlrtrn[top]))
"""
Explanation: M2. PC - Multilabel p-classification
P-Classification ~ P-norm push ranking.
End of explanation
"""
%%script false
aucs = []
npl = []
for u in set(user_of_playlists2):
uind = np.where(np.array(user_of_playlists2, dtype=np.object) == u)[0]
uind_train = uind[uind < col_split]
uind_test = uind[uind >= col_split]
#print(uind)
#if len(uind_test) < 1: continue
print('--------------------')
print('USER:', u)
print('#train: %d, #test: %d' % (len(uind_train), len(uind_test)))
#preds_test = [Y_pla[isnanmat[:, col], col] for col in uind_test]
#gts_test = [Y[isnanmat[:, col], col] for col in uind_test]
print('Dev set:')
#auc, ncol = eval_pl(gts_test, preds_test, useLoop=True)
#aucs.append(auc)
#npl.append(ncol)
print('Training set:')
auc_trn, ncol_trn = eval_pl(Y[:, uind_train], Y_pla[:, uind_train])
aucs.append(auc_trn)
npl.append(ncol_trn)
print()
%%script false
plt.plot(npl, aucs, 'b.')
plt.ylim(0.4, 1.0)
split = 'train'
plt.xlabel('#playlist in training set')
plt.ylabel('AUC (%s)' % split)
plt.title('New Song Recommednation Task: 1/#playlists reg. (%s)' % split)
"""
Explanation: Performance per user
End of explanation
"""
song2pop_train = song2pop.copy()
for ppl in dev_playlists_held:
for sid in ppl:
song2pop_train[sid] -= 1
X, Y = gen_dataset(playlists = [pl for pl, _ in train_playlists + dev_playlists],
song2feature = song2feature, song2genre = song2genre,
train_song_set = [sid for sid, _ in song_set], song2pop_train=song2pop_train)
dev_cols = np.arange(len(train_playlists), Y.shape[1])
col_split = len(train_playlists)
"""
Explanation: Setting II: hold a subset of songs in a subset of playlists, use all songs
End of explanation
"""
Y_train = Y.copy().astype(float).toarray()  # note: np.nan is float, and np.float is removed in recent numpy
Y_train[:, dev_cols] = np.nan
song_indices = {sid: ix for ix, (sid, _) in enumerate(song_set)}
assert len(dev_cols) == len(dev_playlists) == len(dev_playlists_obs)
num_known = 0
for j in range(len(dev_cols)):
rows = [song_indices[sid] for sid in dev_playlists_obs[j]]
Y_train[rows, dev_cols[j]] = 1
num_known += len(rows)
isnanmat = np.isnan(Y_train)
print('#unknown: {:,} | {:,}'.format(np.sum(isnanmat), len(dev_playlists) * len(song_set) - num_known))
print('#unknown in setting I: {:,}'.format(len(dev_song_set) * Y.shape[1]))
print(np.sum(isnanmat[:, :col_split])) # number of NaN in training playlists, should be 0
X_train = X
"""
Explanation: Set all entries corresponding to playlists in dev set to NaN, except those songs in dev playlists that we observed.
End of explanation
"""
X_train_mean = np.mean(X_train, axis=0).reshape((1, -1))
X_train_std = np.std(X_train, axis=0).reshape((1, -1)) + 10 ** (-6)
X_train -= X_train_mean
X_train /= X_train_std
print('Train: %15s %15s' % (X_train.shape, Y_train.shape))
print(np.mean(np.mean(X_train, axis=0)))
print(np.mean( np.std(X_train, axis=0)) - 1)
len(dev_cols)
"""
Explanation: Feature normalisation.
End of explanation
"""
br2 = BinaryRelevance(C=1, n_jobs=4)
br2.fit(X_train, np.nan_to_num(Y_train))
"""
Explanation: M3. Independent logistic regression
End of explanation
"""
gts = [Y[isnanmat[:, col], col].toarray() for col in dev_cols]
print('Dev set:')
Y_br2 = br2.predict(X_train)
preds = [Y_br2[isnanmat[:, col], col] for col in dev_cols]
rps_br2, hr_br2 = calc_RP_HR(gts, preds, useLoop=True, tops=TOPs)
print('R-Precision:', np.mean(rps_br2))
for top in TOPs:
print(top, np.mean(hr_br2[top]))
#rps
#Y_train[:, -1]
"""
Explanation: Evaluation: normalise per playlist.
End of explanation
"""
rps_pop = []
hr_pop = {top: [] for top in TOPs}
index2song = {ix: sid for ix, (sid, _) in enumerate(song_set)}
for col in dev_cols:
indices = np.arange(Y.shape[0])[isnanmat[:, col]]
y_true = Y[indices, col]
if issparse(y_true):
y_true = y_true.toarray().reshape(-1)
else:
y_true = y_true.reshape(-1)
y_pred = np.asarray([song2pop_train[index2song[ix]] for ix in indices])
npos = y_true.sum()
if npos > 0:
rp, hr_dict = calc_RPrecision_HitRate(y_true, y_pred, tops=TOPs)
rps_pop.append(rp)
for top in TOPs:
hr_pop[top].append(hr_dict[top])
print('R-Precision:', np.mean(rps_pop))
for top in TOPs:
print(top, np.mean(hr_pop[top]))
"""
Explanation: Popularity based ranking
End of explanation
"""
pla = PCMLC(C1=10, C2=1, C3=10, p=1, loss_type='both')
pla.fit(X_train, np.nan_to_num(Y_train), batch_size=256, user_playlist_indices=clique_list)
print('Dev set:')
Y_pla = pla.predict(X_train)
preds_pla = [Y_pla[isnanmat[:, col], col] for col in dev_cols]
rps_pla, hr_pla = calc_RP_HR(gts, preds_pla, useLoop=True, tops=TOPs)
print('R-Precision:', np.mean(rps_pla))
for top in TOPs:
print(top, np.mean(hr_pla[top]))
#rps_pla
#preds_pla[0]
"""
Explanation: M4. Multilabel p-classification with some playlists fully observed
End of explanation
"""
%%script false
aucs = []
npl = []
for u in set(user_of_playlists2):
uind = np.where(np.array(user_of_playlists2, dtype=np.object) == u)[0]
uind_train = uind[uind < col_split]
uind_test = uind[uind >= col_split]
#print(uind)
#if len(uind_test) < 1: continue
print('--------------------')
print('USER:', u)
print('#train: %d, #test: %d' % (len(uind_train), len(uind_test)))
#preds_test = [Y_pla[isnanmat[:, col], col] for col in uind_test]
#gts_test = [Y[isnanmat[:, col], col] for col in uind_test]
#print('Dev set:')
#auc, ncol = eval_pl(gts_test, preds_test, useLoop=True)
#aucs.append(auc)
#npl.append(ncol)
print('Training set:')
auc_trn, ncol_trn = eval_pl(Y[:, uind_train], Y_pla[:, uind_train])
aucs.append(auc_trn)
npl.append(ncol_trn)
print()
%%script false
plt.plot(npl, aucs, 'b.')
plt.ylim(0.4, 1.0)
split = 'train'
plt.xlabel('#playlist in training set')
plt.ylabel('AUC (%s)' % split)
plt.title('Playlist augmentation task: No reg. (%s)' % split)
"""
Explanation: Check performance per user
End of explanation
"""
%%script false
rows, cols = np.nonzero(same_user_mat)
for row, col in zip(rows, cols):
diff = pla.W[row] - pla.W[col]
print('%g' % np.sqrt(np.dot(pla.W[row], pla.W[row])))
print('%g' % np.sqrt(np.dot(pla.W[col], pla.W[col])))
print('%g' % np.sqrt(np.dot(diff, diff)))
print('------------------------------')
"""
Explanation: Check if the regulariser is effective
End of explanation
"""
%%script false
A = np.dot(pla.W, pla.W.T)
B = np.tile(np.diag(A), (A.shape[0], 1))
M = np.sqrt(-2 * A + (B + B.T))
"""
Explanation: Compute matrix $M$ such that $M_{jk} = \sqrt{(w_j - w_k)^\top (w_j - w_k)}, \forall j, k$.
End of explanation
"""
#aa = np.arange(6).reshape(3, 2)
#np.einsum('ij,ij->i', aa, aa)
%%script false
denorm = np.sqrt(np.einsum('ij,ij->i', pla.W, pla.W)) # compute the norm for each row in W
M1 = M / np.max(denorm)
%%script false
plt.matshow(M1)
%%script false
rows, cols = np.nonzero(same_user_mat)
M2 = M1[rows, cols]
print('Min: %.5f, Max: %.5f, Mean: %.5f, Std: %.5f' % (np.min(M2), np.max(M2), np.mean(M2), np.std(M2)))
%%script false
mat = same_user_mat.copy()
np.fill_diagonal(mat, 1) # remove the diagonal from consideration
rows, cols = np.where(mat == 0)
M3 = M1[rows, cols]
print('Min: %.5f, Max: %.5f, Mean: %.5f, Std: %.5f' % (np.min(M3), np.max(M3), np.mean(M3), np.std(M3)))
"""
Explanation: Normalise $M$ by the vector with maximum norm in $W$.
End of explanation
"""
|
cloudmesh/book | notebooks/machinelearning/crossvalidation.ipynb | apache-2.0 | from sklearn.datasets import load_iris
from sklearn.cross_validation import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
# read in the iris data
iris = load_iris()
# create X (features) and y (response)
X = iris.data
y = iris.target
# use train/test split with different random_state values
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=4)
# check classification accuracy of KNN with K=5
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred))
"""
Explanation: Cross-validation
In the machine learning examples, we have already seen the importance of splitting data into training and testing sets. However, a single rough split is not enough: if we randomly assign training and testing data just once, the resulting accuracy estimate can be biased. We can improve on this with cross-validation.
First, let's review how we split data into training and testing sets:
End of explanation
"""
from sklearn.cross_validation import cross_val_score
# 10-fold cross-validation with K=5 for KNN (the n_neighbors parameter)
# First we initialize a knn model
knn = KNeighborsClassifier(n_neighbors=5)
# Secondly we use cross_val_scores to get all possible accuracies.
# It works like this: first the data is split into 10 chunks (folds).
# Then KNN is run 10 times, each time using a different chunk as the testing data.
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
print(scores)
# use average accuracy as an estimate of out-of-sample accuracy
print(scores.mean())
"""
Explanation: Question 1: If you haven't learned KNN yet, please first look it up and answer this simple question: is it a supervised or an unsupervised learning method? Also, in the previous example, what does n_neighbors=5 mean?
Answer: Double click this cell and input your answer here.
Steps for K-fold cross-validation
Split the dataset into K equal partitions (or "folds").
Use fold 1 as the testing set and the union of the other folds as the training set.
Calculate testing accuracy.
Repeat steps 2 and 3 so that each fold serves as the testing set exactly once, then average the accuracies.
Cross-validation example:
End of explanation
"""
# search for an optimal value of K for KNN
# Suppose we set the range of K is from 1 to 31.
k_range = list(range(1, 31))
# An list that stores different accuracy scores.
k_scores = []
for k in k_range:
    # Your code:
    # First, initialize a KNN model with the number k.
    # Second, use 10-fold cross-validation to get 10 scores with that model.
k_scores.append(scores.mean())
# Make a visuliazaion for it, and please check what is the best k for knn
import matplotlib.pyplot as plt
%matplotlib inline
# plot the value of K for KNN (x-axis) versus the cross-validated accuracy (y-axis)
plt.plot(k_range, k_scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Cross-Validated Accuracy')
"""
Explanation: From this example, we can see that if we split the data into training and testing sets just once, we sometimes obtain a very "good" model and sometimes a very "bad" one. As the example shows, this has little to do with the model itself; it is simply a consequence of using different training and test splits.
Your exercise for cross-validation
From the previous example, a question arises: how do we choose the parameter for KNN (n_neighbors=?)? A good way to do this is called tuning parameters (hyperparameter tuning). In this exercise, you will learn how to tune a parameter by taking advantage of cross-validation.
Goal: Select the best tuning parameters (aka "hyperparameters") for KNN on the iris dataset
Your programming task:
From the above example, we know that setting the number of neighbors to K=5 gives an average accuracy of about 0.97. However, what if we want to find a better value? It is very straightforward: we iteratively try different values of K and find which one yields the best accuracy.
End of explanation
"""
# 10-fold cross-validation with the best KNN model
knn = KNeighborsClassifier(n_neighbors=20)
print(cross_val_score(knn, X, y, cv=10, scoring='accuracy').mean())
# How about logistic regression? Please finish the code below and make a comparison.
# Hint: check how we did it for KNN above.
from sklearn.linear_model import LogisticRegression
# Initialize a logistic regression model here.
# Then print the average cross-validation score of the logistic model.
"""
Explanation: A new cross-validation task: model selection
We have already applied cross-validation to the KNN model. How about other models? Please continue reading the notes and do another exercise.
End of explanation
"""
|
karlstroetmann/Formal-Languages | Ply/Html2Text.ipynb | gpl-2.0 | data = \
'''
<html>
<head>
<meta charset="utf-8">
<title>Homepage of Prof. Dr. Karl Stroetmann</title>
<link type="text/css" rel="stylesheet" href="style.css" />
<link href="http://fonts.googleapis.com/css?family=Rochester&subset=latin,latin-ext"
rel="stylesheet" type="text/css">
<link href="http://fonts.googleapis.com/css?family=Pacifico&subset=latin,latin-ext"
rel="stylesheet" type="text/css">
<link href="http://fonts.googleapis.com/css?family=Cabin+Sketch&subset=latin,latin-ext" rel="stylesheet" type="text/css">
<link href="http://fonts.googleapis.com/css?family=Sacramento" rel="stylesheet" type="text/css">
</head>
<body>
<hr/>
<div id="table">
<header>
<h1 id="name">Prof. Dr. Karl Stroetmann</h1>
</header>
<div id="row1">
<div class="right">
<a id="dhbw" href="http://www.ba-stuttgart.de">Duale Hochschule Baden-Württemberg</a>
<br/>Coblitzallee 1-9
<br/>68163 Mannheim
<br/>Germany
<br>
<br/>Office: Raum 344B
<br/>Phone: +49 621 4105-1376
<br/>Fax: +49 621 4105-1194
<br/>Skype: karlstroetmann
</div>
<div id="links">
<strong class="some">Some links:</strong>
<ul class="inlink">
<li class="inlink">
My <a class="inlink" href="https://github.com/karlstroetmann?tab=repositories">lecture notes</a>,
as well as the programs presented in class, can be found
at <br>
<a class="inlink" href="https://github.com/karlstroetmann?tab=repositories">https://github.com/karlstroetmann</a>.
</li>
<li class="inlink">Most of my papers can be found at <a class="inlink" href="https://www.researchgate.net/">researchgate.net</a>.</li>
<li class="inlink">The programming language SetlX can be downloaded at <br>
<a href="http://randoom.org/Software/SetlX"><tt class="inlink">http://randoom.org/Software/SetlX</tt></a>.
</li>
</ul>
</div>
</div>
</div>
<div id="intro">
As I am getting old and wise, I have to accept the limits of
my own capabilities. I have condensed these deep philosophical
insights into a most beautiful pearl of poetry. I would like
to share these humble words of wisdom:
<div class="poetry">
I am a teacher by profession, <br>
mostly really by obsession; <br>
But even though I boldly try, <br>
I just cannot teach <a href="flying-pig.jpg" id="fp">pigs</a> to fly.</br>
Instead, I slaughter them and fry.
</div>
<div class="citation">
<div class="quote">
Any sufficiently advanced poetry is indistinguishable from divine wisdom.
</div>
<div id="sign">His holiness Pope Hugo Ⅻ.</div>
</div>
</div>
</div>
</body>
</html>
'''
HTML(data)
"""
Explanation: Converting <span style="font-variant:small-caps;">Html</span> to Text
This notebook shows how we can use the package ply
to extract the text that is embedded in an <span style="font-variant:small-caps;">Html</span> file.
In order to be concise, it only supports a small subset of
<span style="font-variant:small-caps;">Html</span>. Below is the content of my old
<a href="http://wwwlehre.dhbw-stuttgart.de/~stroetma/">web page</a> that I had used when I still worked at the DHBW Stuttgart. The goal of this notebook is to write
a scanner that is able to extract the text from this web page.
End of explanation
"""
import ply.lex as lex
"""
Explanation: Imports
We will use the package ply to remove the
<span style="font-variant:small-caps;">Html</span> tags and extract the text that
is embedded in the <span style="font-variant:small-caps;">Html</span> shown above.
In this example, we will only use the scanner that is provided by the module ply.lex.
Hence we import the module ply.lex that contains the scanner generator from ply.
End of explanation
"""
tokens = [ 'HEAD_START',
           'HEAD_END',
'SCRIPT_START',
'SCRIPT_END',
'TAG',
'LINEBREAK',
'NAMED_ENTITY',
'UNICODE',
'ANY'
]
"""
Explanation: Token Declarations
We begin by declaring the tokens. Note that the variable tokens is a keyword of ply to define the names of the token classes. In this case, we have declared nine different tokens.
- HEAD_START will match the tag <head> that starts the definition of the
<span style="font-variant:small-caps;">Html</span> header.
- HEAD_END will match the tag </head> that ends the definition of the
<span style="font-variant:small-caps;">Html</span> header.
- SCRIPT_START will match the tag <script> that starts embedded JavaScript code.
- SCRIPT_END will match the tag </script> that ends embedded JavaScript code.
- TAG is a token that represents arbitrary <span style="font-variant:small-caps;">Html</span> tags.
- LINEBREAK is a token that will match the newline character \n at the end of a line.
- NAMED_ENTITY is a token that represents named
<span style="font-variant:small-caps;">Html5</span> entities.
- UNICODE is a token that represents a unicode entity.
- ANY is a token that matches any character.
End of explanation
"""
states = [ ('header', 'exclusive'),
('script', 'exclusive')
]
"""
Explanation: Definition of the States
Once we are inside an <span style="font-variant:small-caps;">Html</span> header or inside of some
JavaScript code the rules of the scanning game change. Therefore, we declare two new <em style="color:blue">exclusive scanner states</em>:
- header is the state the scanner is in while it is scanning an
<span style="font-variant:small-caps;">Html</span> header.
- script is the state of the scanner while scanning JavaScript code.
These states are exclusive states and hence the other token definitions do not apply in these
states.
End of explanation
"""
def t_HEAD_START(t):
r'<head>'
t.lexer.begin('header')
"""
Explanation: Token Definitions
We proceed to give the definition of the tokens. Note that none of the function defined below
returns a token. Rather all of these function print the transformation of the
<span style="font-variant:small-caps;">Html</span> that they have matched.
The Definition of the Token HEAD_START
Once the scanner reads the opening tag <head> it switches into the state header. The function begin of the lexer can be used to switch into a different scanner state. In the state header, the scanner continues to read and discard characters until the closing tag </head> is encountered. Note that this token is only recognized in the state INITIAL. The state INITIAL is the initial state of the scanner, i.e. the scanner always starts in this state.
End of explanation
"""
def t_SCRIPT_START(t):
r'<script>'
t.lexer.begin('script')
"""
Explanation: The Definition of the Token SCRIPT_START
Once the scanner reads the opening tag <script> it switches into the state script. In this state it will continue to read and discard characters until it sees the closing tag </script>.
End of explanation
"""
def t_LINEBREAK(t):
r'(\s*\n\s*)+'
print()
"""
Explanation: The Definition of the Token LINEBREAK
Groups of newline characters are condensed into a single newline character.
As we are not interested in the variable t.lexer.lineno in this example, we don't have to count the newlines.
This token is active in the INITIAL state.
End of explanation
"""
def t_TAG(t):
r'<[^>]+>'
pass
"""
Explanation: The Definition of the Token TAG
The token TAG is defined as any string that starts with the character < and ends with the character
>. Between these two characters there must be a nonzero number of characters that are different from
the character >. The text of the token is discarded.
End of explanation
"""
from html.entities import html5
html5['auml']
"""
Explanation: The Definition of the Token NAMED_ENTITY
In order to support named <span style="font-variant:small-caps;">Html</span> entities we need to import
the dictionary html5 from the module html.entities. For every named
<span style="font-variant:small-caps;">Html</span> entity e, html5[e] is the unicode symbol that is specified by e.
End of explanation
"""
def t_NAMED_ENTITY(t):
r'&[a-zA-Z]+;?'
if t.value[-1] == ';': # ';' is not part of the entity name
entity_name = t.value[1:-1] # so chop it off
else:
entity_name = t.value[1:]
unicode_char = html5[entity_name]
print(unicode_char, end='') # don't print a line break
"""
Explanation: The regular expression &[a-zA-Z]+;? searches for <span style="font-variant:small-caps;">Html</span>
entity names. These are strings that start with the character & followed by the name of the entity, optionally followed by the character ;. If a Unicode entity name is found, the corresponding character is printed.
End of explanation
"""
def t_UNICODE(t):
r'&\#[0-9]+;?'
if t.value[-1] == ';':
number = t.value[2:-1]
else:
number = t.value[2:]
print(chr(int(number)), end='')
chr(8555)
chr(128034)
"""
Explanation: The Definition of the Token UNICODE
The regular expression &\#[0-9]+;? searches for <span style="font-variant:small-caps;">Html</span> entities that specify a unicode character numerically. The corresponding strings start with the character &
followed by the character # followed by digits and are optionally ended by the character ;.
Note that we had to escape the character # with a backslash because otherwise this character would signal the beginning of a comment.
Note further that the function chr takes a number and returns the corresponding unicode character.
For example, chr(128034) returns the character '🐢'.
End of explanation
"""
def t_ANY(t):
r'.'
print(t.value, end='')
"""
Explanation: The Definition of the Token ANY
The regular expression . matches any character that is different from a newline character. These characters are printed unmodified. Note that the scanner tries the regular expressions for a given state in the order that they are defined in this notebook. Therefore, it is crucial that the function t_ANY is defined after all other token definitions for the INITIAL state are given. The INITIAL state is the default state of the scanner and therefore the state the scanner is in when it starts scanning.
End of explanation
"""
def t_header_HEAD_END(t):
r'</head>'
t.lexer.begin('INITIAL')
"""
Explanation: The Definition of the Token HEAD_END
The regular expression </head> matches the closing head tag. Note that this regular expression is only
active in state header as the name of this function starts with t_header. Once the closing tag has been found, the function lexer.begin switches the lexer back into the state INITIAL, which is the
<em style="color:blue">start state</em> of the scanner. In the state INITIAL, all token definitions are active, that do not start with either t_header or t_script.
End of explanation
"""
def t_script_SCRIPT_END(t):
r'</script>'
t.lexer.begin('INITIAL')
"""
Explanation: The Definition of the Token SCRIPT_END
If the scanner is in the state script, the function t_script_SCRIPT_END recognizes the matching closing tag and switches back to the state
INITIAL.
The regular expression </script> matches the closing script tag. Note that this regular expression is only
active in state script. Once the closing tag has been found, the function lexer.begin switches the lexer back into the state INITIAL, which is the start state of the scanner.
End of explanation
"""
def t_header_script_ANY(t):
r'.|\n'
pass
"""
Explanation: The Definition of the Token ANY
If the scanner is either in the state header or the state script, the function
t_header_script_ANY eats up all characters without echoing them.
End of explanation
"""
def t_error(t):
print(f"Illegal character: '{t.value[0]}'")
t.lexer.skip(1)
"""
Explanation: Error Handling
The function t_error is called when a substring at the beginning of the input can not be matched by any of the regular expressions defined in the various tokens. In our implementation we print the first character that could not be matched, discard this character and continue.
End of explanation
"""
def t_header_error(t):
print(f"Illegal character in state 'header': '{t.value[0]}'")
t.lexer.skip(1)
"""
Explanation: The function t_header_error is called when a substring at the beginning of the input can not be matched by any of the regular expressions defined in the various tokens and the scanner is in state header. Actually, this function can never be called.
End of explanation
"""
def t_script_error(t):
print(f"Illegal character in state 'script': '{t.value[0]}'")
t.lexer.skip(1)
"""
Explanation: The function t_script_error is called when a substring at the beginning of the input can not be matched by any of the regular expressions defined in the various tokens and the scanner is in state script. Actually, this function can never be called.
End of explanation
"""
__file__ = 'main'
"""
Explanation: Running the Scanner
The line below is necessary to trick ply.lex into assuming this program is written in an ordinary python file instead of a Jupyter notebook.
End of explanation
"""
lexer = lex.lex(debug=True)
"""
Explanation: The line below generates the scanner. Because the option debug=True is set, we can see the regular expression that is generated for scanning.
End of explanation
"""
lexer.input(data)
"""
Explanation: Next, we feed our input string into the generated scanner.
End of explanation
"""
def scan(lexer):
for t in lexer:
pass
scan(lexer)
"""
Explanation: In order to scan the data that we provided in the last line, we iterate over all tokens generated by our scanner.
End of explanation
"""
tleonhardt/LearningCython | Learning_Cython_video/Chapter11/dates/dateobject-withC.ipynb | mit |
import numpy as np
import pandas as pd
def make_sample_data(size):
d = dict(
# Years: 1980 - 2015
year=np.random.randint(1980, 2016, int(size)),
# Months 1 - 12
month=np.random.randint(1, 13, int(size)),
# Day number: 1 - 28
day=np.random.randint(1, 28, int(size)),
)
return pd.DataFrame(d)
"""
Explanation: Case Study: Slow Pandas dates
Batches of data are collected from field instruments. These instruments capture the date in three separate columns: day, month and year.
Data is processed in Pandas, but currently it is <u>slow to convert the three columns into datetimes</u>.
Example (randomised) data
End of explanation
"""
df = make_sample_data(5)
df
"""
Explanation: Start with a small amount of data
End of explanation
"""
import datetime
def create_datetime_py(year, month, day):
""" Take year, month, day and return a datetime """
return datetime.datetime(year, month, day, 0, 0, 0, 0, None)
"""
Explanation: Goal: make single datetime column
Let's see the Python code first:
End of explanation
"""
# Refer to fields by name! Very cool 👍
df.apply(lambda x : create_datetime_py(
x['year'], x['month'], x['day']), axis=1)
"""
Explanation: Use the Python conversion function
Pandas has an apply() method that runs your function on a bunch of columns.
You must provide a function that receives a row, and your function must return a value. All the output values get put into a new Pandas series.
End of explanation
"""
def make_datetime_py(df):
return df.apply(lambda x : create_datetime_py(
x['year'], x['month'], x['day']), axis=1)
"""
Explanation: Note: the type is "datetime64[ns]".
Awkward to type that all out each time. Let's make a convenient function.
End of explanation
"""
make_datetime_py(df)
"""
Explanation: Then we can just call it like so:
End of explanation
"""
df_big = make_sample_data(100000)
%timeit make_datetime_py(df_big)
"""
Explanation: Problem: this is slow
With lots of data, the conversion to a datetime column takes a very long time! Let's try a bunch of data:
End of explanation
"""
%%cython
# cython: boundscheck = False
# cython: wraparound = False
from cpython.datetime cimport (
import_datetime, datetime_new, datetime, timedelta)
from pandas import Timestamp
import_datetime()
cpdef convert_arrays_ts(
long[:] year, long[:] month, long[:] day,
long long[:] out):
""" Result goes into `out` """
cdef int i, n = year.shape[0]
cdef datetime dt
for i in range(n):
dt = <datetime>datetime_new(
year[i], month[i], day[i], 0, 0, 0, 0, None)
out[i] = Timestamp(dt).value
"""
Explanation: What to do?
The first thing is to check whether there is a low-level PXD interface file for the Python datetime object.
Let's use Cython!
End of explanation
"""
def make_datetime_cy(df, method):
s = pd.Series(np.zeros(len(df), dtype='datetime64[ns]'))
method(df['year'].values, df['month'].values, df['day'].values,
s.values.view('int64'))
return s
# Test it out
make_datetime_cy(df, convert_arrays_ts)
"""
Explanation: Utility function for applying our conversion
End of explanation
"""
df_big = make_sample_data(100000)
%timeit make_datetime_py(df_big)
%timeit make_datetime_cy(df_big, convert_arrays_ts)
"""
Explanation: Speed Test
End of explanation
"""
%%cython -a
# cython: boundscheck = False
# cython: wraparound = False
from cpython.datetime cimport (
import_datetime, datetime_new, datetime, timedelta,
timedelta_seconds, timedelta_days)
import_datetime() # <-- Pretty important
cpdef convert_arrays_dt(long[:] year, long[:] month, long[:] day,
long long[:] out):
""" Result goes into `out` """
cdef int i, n = year.shape[0]
cdef datetime dt, epoch = datetime_new(1970, 1, 1, 0, 0, 0, 0, None)
cdef timedelta td
cdef long seconds
for i in range(n):
dt = <datetime>datetime_new(
year[i], month[i], day[i], 0, 0, 0, 0, None)
td = <timedelta>(dt - epoch)
seconds = timedelta_days(td) * 86400
out[i] = seconds * 1000000000 # Nanoseconds, remember?
"""
Explanation: Check annotation
Eliminate the Python overhead
End of explanation
"""
make_datetime_cy(df, convert_arrays_dt)
"""
Explanation: Test it out
End of explanation
"""
df_big = make_sample_data(100000)
%timeit make_datetime_py(df_big)
%timeit make_datetime_cy(df_big, convert_arrays_ts)
%timeit make_datetime_cy(df_big, convert_arrays_dt)
"""
Explanation: Speed Test
End of explanation
"""
%%cython -a
# cython: boundscheck = False
# cython: wraparound = False
from libc.time cimport mktime, tm, timezone
cdef inline long to_unix(long year, long month, long day):
""" month: 1 - 12, day: 1 - 31
Result is in UTC. """
cdef tm tms
tms.tm_year = year - 1900 # years since 1900 !!
tms.tm_mon = month - 1 # 0 to 11 !!
tms.tm_mday = day # 1 - 31
tms.tm_hour, tms.tm_min, tms.tm_sec = 0, 0, 0
return mktime(&tms) - timezone
cpdef convert_arrays_libc(
long[:] year, long[:] month, long[:] day,
long long[:] out):
""" Result goes into `out` """
cdef int i, n = year.shape[0]
cdef long unix
for i in range(n):
unix = to_unix(year[i], month[i], day[i])
#print(unix)
#out[i] = to_unix(year[i], month[i], day[i]) * 1000000000
out[i] = unix * 1000000000
make_datetime_cy(df, convert_arrays_libc)
df_big = make_sample_data(100000)
%timeit make_datetime_py(df_big)
%timeit make_datetime_cy(df_big, convert_arrays_dt)
%timeit make_datetime_cy(df_big, convert_arrays_ts)
%timeit make_datetime_cy(df_big, convert_arrays_libc)
"""
Explanation: Using C standard library
End of explanation
"""
paix120/DataScienceLearningClubActivities | Activity05/Mushroom Edibility Classification - Naive Bayes.ipynb | gpl-2.0 |
#import pandas and numpy libraries
import pandas as pd
import numpy as np
import sys #sys needed only for python version
#import gaussian naive bayes from scikit-learn
import sklearn as sk
#seaborn for pretty plots
import seaborn as sns
#display versions of python and packages
print('\npython version ' + sys.version)
print('pandas version ' + pd.__version__)
print('numpy version ' + np.__version__)
print('sk-learn version ' + sk.__version__)
print('seaborn version ' + sns.__version__)
"""
Explanation: Mushroom Classification - Edible or Poisonous?
by Renee Teate
Using Gaussian Naive Bayes Classification from scikit-learn
For Activity 5 of the Data Science Learning Club: http://www.becomingadatascientist.com/learningclub/forum-13.html
Dataset from UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Mushroom
End of explanation
"""
#read in data. it's comma-separated with no column names.
df = pd.read_csv('agaricus-lepiota.data', sep=',', header=None,
error_bad_lines=False, warn_bad_lines=True, low_memory=False)
# set pandas to output all of the columns in output
pd.options.display.max_columns = 25
#show the first 5 rows
print(df.sample(n=5))
"""
Explanation: The dataset doesn't include column names, and the values are text characters
End of explanation
"""
#manually add column names from documentation (1st col is class: e=edible,p=poisonous; rest are attributes)
df.columns = ['class','cap-shape','cap-surface','cap-color','bruises','odor','gill-attachment',
'gill-spacing','gill-size','gill-color','stalk-shape','stalk-root',
'stalk-surf-above-ring','stalk-surf-below-ring','stalk-color-above-ring','stalk-color-below-ring',
'veil-type','veil-color','ring-number','ring-type','spore-color','population','habitat']
print("Example values:\n")
print(df.iloc[3984]) #this one has a ? value - how are those treated by classifier?
"""
Explanation: Added column names from the UCI documentation
End of explanation
"""
#show plots in notebook
%matplotlib inline
#bar chart of classes using pandas plotting
print(df['class'].value_counts())
df['class'].value_counts().plot(kind='bar')
"""
Explanation: The dataset is split fairly evenly between the edible and poison classes
End of explanation
"""
#seaborn factorplot to show edible/poisonous breakdown by different factors
df_forplot = df.loc[:,('class','cap-shape','gill-color')]
g = sns.factorplot("class", col="cap-shape", data=df_forplot,
kind="count", size=2.5, aspect=.8, col_wrap=6)
"""
Explanation: Edibility by Mushroom Cap Shape
note that none of the cap shapes seem particularly predictive of edibility
End of explanation
"""
g = sns.factorplot("class", col="gill-color", data=df_forplot,
kind="count", size=2.5, aspect=.8, col_wrap=6)
"""
Explanation: Edibility by Mushroom Gill Color
note that buff gills (b) appear to always indicate poison, and the others aren't as clear-cut
End of explanation
"""
#put the features into X (everything except the 0th column)
X = pd.DataFrame(df, columns=df.columns[1:len(df.columns)], index=df.index)
#put the class values (0th column) into Y
Y = df['class']
#encode the text category labels as numeric
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(Y)
#print(le.classes_)
#print(np.array(Y))
#Y values now boolean values; poison = 1
y = le.transform(Y)
#print(y_train)
#have to initialize or get error below
x = pd.DataFrame(X,columns=[X.columns[0]])
#encode each feature column and add it to x_train
for colname in X.columns:
le.fit(X[colname])
print(colname, le.classes_)
x[colname] = le.transform(X[colname])
print('\nExample Feature Values - row 1 in X:')
print(X.iloc[1])
print('\nExample Encoded Feature Values - row 1 in x:')
print(x.iloc[1])
print('\nClass Values (Y):')
print(np.array(Y))
print('\nEncoded Class Values (y):')
print(y)
#split the dataset into training and test sets
from sklearn.cross_validation import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.33)
#initialize and fit the naive bayes classifier
from sklearn.naive_bayes import GaussianNB
skgnb = GaussianNB()
skgnb.fit(x_train,y_train)
train_predict = skgnb.predict(x_train)
#print(train_predict)
#see how accurate the training data was fit
from sklearn import metrics
print("Training accuracy:",metrics.accuracy_score(y_train, train_predict))
#use the trained model to predict the test values
test_predict = skgnb.predict(x_test)
print("Testing accuracy:",metrics.accuracy_score(y_test, test_predict))
print("\nClassification Report:")
print(metrics.classification_report(y_test, test_predict, target_names=['edible','poisonous']))
print("\nConfusion Matrix:")
skcm = metrics.confusion_matrix(y_test,test_predict)
#putting it into a dataframe so it prints the labels
skcm = pd.DataFrame(skcm, columns=['predicted-edible','predicted-poisonous'])
skcm['actual'] = ['edible','poisonous']
skcm = skcm.set_index('actual')
#NOTE: NEED TO MAKE SURE I'M INTERPRETING THE ROWS & COLS RIGHT TO ASSIGN THESE LABELS!
print(skcm)
print("\nScore (same thing as test accuracy?): ", skgnb.score(x_test,y_test))
"""
Explanation: Let's see how well our classifier can identify poisonous mushrooms by combinations of features
End of explanation
"""
kazzz24/deep-learning | tensorboard/Anna_KaRNNa_Summaries.ipynb | mit |
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
"""
Explanation: First we'll load the text file and convert it into integers for our network to use.
End of explanation
"""
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
    batch_size: Number of examples in each batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
    # Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
"""
Explanation: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
"""
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
with tf.name_scope('inputs'):
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
with tf.name_scope('targets'):
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
with tf.name_scope("RNN_cells"):
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
with tf.name_scope("RNN_init_state"):
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
with tf.name_scope("RNN_forward"):
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
with tf.name_scope('sequence_reshape'):
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
with tf.name_scope('logits'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
tf.summary.histogram('softmax_w', softmax_w)
tf.summary.histogram('softmax_b', softmax_b)
with tf.name_scope('predictions'):
preds = tf.nn.softmax(logits, name='predictions')
tf.summary.histogram('predictions', preds)
with tf.name_scope('cost'):
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
tf.summary.scalar('cost', cost)
# Optimizer for training, using gradient clipping to control exploding gradients
with tf.name_scope('train'):
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
merged = tf.summary.merge_all()
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer', 'merged']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
"""
Explanation: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window to the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
End of explanation
"""
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
End of explanation
"""
!mkdir -p checkpoints/anna
epochs = 1
save_every_n = 100
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
train_writer = tf.summary.FileWriter('./logs/2/train', sess.graph)
test_writer = tf.summary.FileWriter('./logs/2/test')
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
summary, batch_loss, new_state, _ = sess.run([model.merged, model.cost,
model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
train_writer.add_summary(summary, iteration)
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
summary, batch_loss, new_state = sess.run([model.merged, model.cost,
model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
test_writer.add_summary(summary, iteration)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
#saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
"""
Explanation: Training
Time for training, which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
prime = "Far"
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, and the network predicts the next character. We can then feed that prediction back in to predict the one after it, and keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
|
vravishankar/Jupyter-Books | Lists.ipynb | mit | vowels = ['a','e','i','o','u']
print(vowels)
"""
Explanation: Lists
Lists are constructed with square brackets, with elements separated by commas.
Lists are mutable, meaning the individual items in the list can be changed.
Example 1
End of explanation
"""
list1 = [1,'a',"This is a list",5.25]
print(list1)
"""
Explanation: Example 2
Lists can also hold multiple object types
End of explanation
"""
# Find the length of the list
len(list1)
"""
Explanation: Example 3
End of explanation
"""
# Get the element using the index
print(list1[0])
print(list1[2])
# Grab index 1 and everything after it
print(list1[1:])
# Grab the elements from index 1 up to index 3 (the end index is excluded)
print(list1[1:3])
# Grab the elements up to (but not including) index 3
print(list1[:3])
# Grab the last item in the list
print(list1[-1])
print(list1[-1:])
# Third parameter is the jump parameter
list2 = ['a','b','c','d','e','f','g']
list2[1:4:2]
# Concatenate Elements
list2 + ["added"]
# unless reassigned, the added item is not permanently added
list2
list2 = list2 + ["added permanently"]
list2
list3 = list2 * 2
list3
"""
Explanation: Example 4 - Slicing & Indexing
End of explanation
"""
list4 = ['a','b','d','e']
list4.insert(2,'c')
list4
list4.append(['f','g'])
list4
popped_item = list4.pop()
popped_item
print(list4)
# sort elements
list4.sort()
list4
# reverse elements
list4.reverse()
list4
list4.remove('a')
list4
list5 = ['a','f']
list4.extend(list5)
list4
del list4[2]
list4
# count the items
list4.count('a')
# check if item exists in a list
if 'a' in list4:
print('found')
print(list4.index('a'))
"""
Explanation: List Methods
End of explanation
"""
|
pylada/pylada-light | notebooks/IPython high-throughput interface.ipynb | gpl-3.0 | %load_ext pylada
"""
Explanation: Manipulating job-folders
IPython is an ingenious combination of a bash-like terminal with a python shell. It can be used both for bash-related affairs, such as copying files around and creating directories, and for actual python programming. In fact, the two can be combined to create a truly powerful shell.
Alternatively, Jupyter provides an attractive graphical interface for performing data analysis, or for demonstrating pylada, as in this notebook.
Pylada puts these tools to good use by providing a command-line approach to manipulating job-folders (see the relevant notebook for more information), launching actual calculations, and collecting the results. When used in conjunction with python plotting libraries, e.g. matplotlib, it can provide rapid turnaround from conceptualization to result analysis.
Assuming that pylada is installed, the IPython module can be loaded in ipython/Jupyter with:
End of explanation
"""
%%writefile dummy.py
def functional(structure, outdir=None, value=False, **kwargs):
""" A dummy functional """
from copy import deepcopy
from pickle import dump
from random import random
from py.path import local
structure = deepcopy(structure)
structure.value = value
outdir = local(outdir)
outdir.ensure(dir=True)
dump((random(), structure, value, functional), outdir.join('OUTCAR').open('wb'))
return Extract(outdir)
def Extract(outdir=None):
""" An extraction function for a dummy functional """
from os import getcwd
from collections import namedtuple
from pickle import load
from py.path import local
if outdir is None:
outdir = local()
Extract = namedtuple('Extract', ['success', 'directory',
'structure', 'energy', 'value', 'functional'])
outdir = local(outdir)
if not outdir.check():
return Extract(False, str(outdir), None, None, None, None)
if not outdir.join('OUTCAR').check(file=True):
return Extract(False, str(outdir), None, None, None, None)
with outdir.join('OUTCAR').open('rb') as file:
energy, structure, value, functional = load(file)
return Extract(True, outdir, structure, energy, value, functional)
functional.Extract = Extract
"""
Explanation: Prep
Pylada's IPython interface revolves around job-folders. In order to explore its features, we first need to create job-folders, preferably some which do not involve heavy calculations. The following creates a dummy.py file in the current directory. It contains a dummy functional that does very little work. In actual runs, everything dummy should be replaced with wrappers to VASP, or Quantum Espresso.
End of explanation
"""
from dummy import functional
from pylada.jobfolder import JobFolder
from pylada.crystal.binary import zinc_blende
root = JobFolder()
structures = ['diamond', 'diamond/alloy', 'GaAs']
stuff = [0, 1, 2]
species = [('Si', 'Si'), ('Si', 'Ge'), ('Ga', 'As')]
for name, value, species in zip(structures, stuff, species):
job = root / name
job.functional = functional
job.params['value'] = value
job.params['structure'] = zinc_blende()
for atom, specie in zip(job.structure, species):
atom.type = specie
"""
Explanation: The notebook about creating job folders has more details about this functional. For now, let us create a jobfolder with a few jobs:
End of explanation
"""
%mkdir -p tmp
%savefolders tmp/dummy.dict root
"""
Explanation: Saving and Loading a job-folder
At this point we have job-folder stored in memory in a python variable. If you were to exit ipython, the job-folder would be lost for ever and ever. We can save it do disk with:
End of explanation
"""
%explore tmp/dummy.dict
"""
Explanation: The next time ipython is entered, the job-folder can be loaded from disk with:
End of explanation
"""
%explore --help
"""
Explanation: Once a folder has been explored from disk, %savefolders can be called
without arguments.
The percent(%) sign indicates that these commands are ipython
magic-functions. To get
more information about what Pylada magic functions do, call them with "--help".
End of explanation
"""
%listfolders all
"""
Explanation: Tip: The current job-folder and the current job-folder path are stored in pylada.interactive.jobfolder and pylada.interactive.jobfolder_path. In practice, accessing those directly is rarely needed.
Listing job-folders
The executable content of the current job-folder (the one loaded via %explore) can be examined with:
End of explanation
"""
%listfolders diamond/*
"""
Explanation: This prints out the executable jobs. It can also be used to examine the content of specific subfolders.
End of explanation
"""
%goto /diamond
"""
Explanation: The syntax is the same as for the bash command-line. When given an argument
other than "all", %listfolders list only the matching subfolders, including those which are not
executable. In practice, it works like "ls -d".
Executable job-folders are those that are set to go with a functional.
Navigating the job-folders
The %goto command reproduces the functionality of the "cd" unix command.
End of explanation
"""
%listfolders
"""
Explanation: The current job-folder is now diamond. Were there a corresponding sub-directory on disk, the current working directory would also be diamond. As it is, we have not yet launched the calculations, so no such directory exists. This feature makes it easy to navigate both job-folders and output directories simultaneously.
We can check the subfolders contained within /diamond.
End of explanation
"""
%goto
"""
Explanation: And calling %goto without an argument will print out the current location (much like pwd does for directories).
End of explanation
"""
%goto ..
%goto
%listfolders
"""
Explanation: We can also use relative paths, as well as .. to navigate around the tree structure. Most any path that works for cd will work with %goto as well.
End of explanation
"""
%goto /diamond/alloy/
assert jobparams.current.functional == functional
"""
Explanation: Examining the executable content of a jobfolder
It is always possible to change the executable data of a job-folder, whether
the functional or its parameters. To do this, we must first navigate to the
specific subfolder of interest, and then use the object jobparams.current.
End of explanation
"""
jobparams.current.params.keys()
"""
Explanation: Parameters can be accessed either through the params dictionary:
End of explanation
"""
assert jobparams.current.value == 1
"""
Explanation: Or directly as attributes of jobparams.current:
End of explanation
"""
%goto /
jobparams.structure.name
"""
Explanation: Simultaneously examining/modify parameters for many jobs at a time
It is likely that a whole group of calculations will share parameters in common, and that these parameters need be the same. It is possible to examine the computational parameter for any number of jobs simultaneously:
End of explanation
"""
jobparams.structure.name = 'hello'
jobparams.structure.name
"""
Explanation: There are two things to note here:
The return is an object that duct-types for dictionaries. The keys are the job-names and the values are the property of interest.
It is possible to access attributes (here name) of attributes (here structure) to any degree of nesting. If the parameter of a given job does not contain the nested attribute, then that job is ignored.
We can set parameters much the same way:
End of explanation
"""
jobparams['*/alloy'].structure.name
"""
Explanation: By default, it is only possible to modify existing attributes, as opposed to adding new ones.
Finally, it is possible to focus on a specific sub-set of job-folders. By default the syntax is that of a unix shell. However, the syntax can be switched to regular expressions via the Pylada parameter pylada.unix_re. Only the former syntax is illustrated here:
End of explanation
"""
for key, value in jobparams['diamond/*'].structure.name.items():
print(key, value)
"""
Explanation: Note that only one item is left in the dictionary; that item is returned directly. Indeed, there is only one job-folder which corresponds to "*/alloy". This behavior can be turned on and off using the parameters jobparams_naked_end and/or JobParams.naked_end. The unix shell-like syntax can use either absolute paths, when preceded with '/', or relative ones. In the latter case, they are relative to the current position in the job-folder, as changed by %goto.
When the return looks like a dictionary, it behaves like a dictionary. Hence it can be iterated over:
End of explanation
"""
%goto /
jobparams['diamond/alloy'].onoff = 'on'
jobparams.onoff
"""
Explanation: Launching calculations
Turning job-folders on and off
Using jobparams, it is possible to turn job-folders on and off:
End of explanation
"""
%savefolders
"""
Explanation: When "off", a job-folder is ignored by jobparams (and collect, described below). Furthermore, it will not be executed. The only way to access it again is to turn it back on. Groups of calculations can be turned on and off
using the unix shell-like syntax previously.
WARNING: You should always save the job-folder after messing with it's on/off status. This is because the computations will re-read the dictionary from disk.
End of explanation
"""
%launch scattered --help
"""
Explanation: Submitting job-folder calculations
Once job-folders are ready, it takes all of one line to launch the calculations:
IPython
%launch scattered
This will create one pbs/slurm job per executable job-folder. A number of options are possible to select the number of processors, the account or queue, the walltime, etc. To examine them, do %launch scattered --help:
End of explanation
"""
%launch --help
"""
Explanation: Most default values should be contained in pylada.default_pbs. The number of processors is by default equal to the even number closest to the number of atoms in the structure (apparently, this is a recommended VASP default). The number of processors can be given either as an integer or as a function which takes a job-folder as its only argument and returns an integer.
Other possibilities for launching jobs can be obtained as follows:
End of explanation
"""
%launch interactive
"""
Explanation: In this notebook, we will be using %launch interactive since the jobs are simple and since we cannot be sure that pylada has been configured for PBS, Slurm, or other queueing systems.
End of explanation
"""
%%bash
[ ! -e tree ] || tree
"""
Explanation: At this juncture, we should find that the jobs have created a number of output files in the directory where the file dummy.dict is located. You may remember from the start of this lesson that we loaded the dictionary with %explore tmp/dummy.dict. The location of this file is what matters. The current working directory does not.
End of explanation
"""
%goto /diamond
print("current location: ", jobparams.current.name)
%%bash
[ ! -e tree ] || tree
"""
Explanation: You will notice that the job in diamond/alloy did not run since it is off. If you were to go back up a few cells and set it to on, and then rerun via %launch interactive, you should see that it will be computed.
We can now navigate using %goto, simultaneously through the jobfolder and the disk
End of explanation
"""
%goto /
collect.success
"""
Explanation: Collecting results
The first thing one wants to know from calculations is whether they ran successfully:
End of explanation
"""
collect.energy
"""
Explanation: Our dummy functional is too simple to fail... However, if you delete any given calculation directory, and try it again, you will find some false results. Beware that some collected results are cached so they can be retrieved faster the second time around, so redoing %explore some.dict might be necessary.
Warning Success means that the calculations ran to completion. It does not mean that they are not garbage.
Results from the calculation can be retrieved in much the same way as parameters were examined. This time, however, we use an object called collect (still without preceding "%" sign). Assuming the job-folders created earlier were launched, the random energies created by our fake functional could be retrieved as in:
End of explanation
"""
|
yandexdataschool/gumbel_lstm | binary_lstm.ipynb | mit | %env THEANO_FLAGS="device=gpu2"
import numpy as np
import theano
import theano.tensor as T
import lasagne
import os
"""
Explanation: Contents
We train an LSTM with gumbel-sigmoid gates on a toy language modelling problem.
Such an LSTM can then be binarized to reach significantly greater speed.
End of explanation
"""
start_token = " "
with open("mtg_card_names.txt") as f:
names = f.read()[:-1].split('\n')
names = [start_token+name for name in names]
print 'n samples = ',len(names)
for x in names[::1000]:
print x
"""
Explanation: Generate mtg cards
Regular RNN language modelling done by LSTM with "binary" gates
End of explanation
"""
#all unique characters go here
token_set = set()
for name in names:
for letter in name:
token_set.add(letter)
tokens = list(token_set)
print 'n_tokens = ',len(tokens)
#!token_to_id = <dictionary of symbol -> its identifier (index in tokens list)>
token_to_id = {t:i for i,t in enumerate(tokens) }
#!id_to_token = < dictionary of symbol identifier -> symbol itself>
id_to_token = {i:t for i,t in enumerate(tokens)}
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(map(len,names),bins=25);
# truncate names longer than MAX_LEN characters.
MAX_LEN = min([60,max(list(map(len,names)))])
#ADJUST IF YOU ARE UP TO SOMETHING SERIOUS
"""
Explanation: Text processing
End of explanation
"""
names_ix = list(map(lambda name: list(map(token_to_id.get,name)),names))
#crop long names and pad short ones
for i in range(len(names_ix)):
names_ix[i] = names_ix[i][:MAX_LEN] #crop too long
if len(names_ix[i]) < MAX_LEN:
names_ix[i] += [token_to_id[" "]]*(MAX_LEN - len(names_ix[i])) #pad too short
assert len(set(map(len,names_ix)))==1
names_ix = np.array(names_ix)
"""
Explanation: Cast everything from symbols into identifiers
End of explanation
"""
from agentnet import Recurrence
from lasagne.layers import *
from agentnet.memory import *
from agentnet.resolver import ProbabilisticResolver
from gumbel_sigmoid import GumbelSigmoid
sequence = T.matrix('token sequence','int64')
inputs = sequence[:,:-1]
targets = sequence[:,1:]
l_input_sequence = InputLayer(shape=(None, None),input_var=inputs)
"""
Explanation: Input variables
End of explanation
"""
###One step of rnn
class rnn:
n_hid = 100
temp = theano.shared(np.float32(1.0))
#inputs
inp = InputLayer((None,),name='current character')
prev_cell = InputLayer((None,n_hid),name='previous lstm cell')
prev_hid = InputLayer((None,n_hid),name='previous lstm output')
#recurrent part
emb = EmbeddingLayer(inp, len(tokens), 30,name='emb')
new_cell,new_hid = LSTMCell(prev_cell,prev_hid,emb,
inputgate_nonlinearity=GumbelSigmoid(temp),
forgetgate_nonlinearity=GumbelSigmoid(temp),
#outputgate_nonlinearity=GumbelSigmoid(temp),
name="rnn")
next_token_probas = DenseLayer(new_hid,len(tokens),nonlinearity=T.nnet.softmax)
#pick next token from predicted probas
next_token = ProbabilisticResolver(next_token_probas)
"""
Explanation: Build NN
You'll be building a model that takes token sequence and predicts next tokens at each tick
This is basically equivalent to how rnn step was described in the lecture
End of explanation
"""
training_loop = Recurrence(
state_variables={rnn.new_hid:rnn.prev_hid,
rnn.new_cell:rnn.prev_cell},
input_sequences={rnn.inp:l_input_sequence},
tracked_outputs=[rnn.next_token_probas,],
unroll_scan=False,
)
# Model weights
weights = lasagne.layers.get_all_params(training_loop,trainable=True)
print weights
predicted_probabilities = lasagne.layers.get_output(training_loop[rnn.next_token_probas])
#If you use dropout do not forget to create deterministic version for evaluation
loss = lasagne.objectives.categorical_crossentropy(predicted_probabilities.reshape((-1,len(tokens))),
targets.reshape((-1,))).mean()
#<Loss function - a simple categorical crossentropy will do, maybe add some regularizer>
updates = lasagne.updates.adam(loss,weights)
#training
train_step = theano.function([sequence], loss,
updates=training_loop.get_automatic_updates()+updates)
"""
Explanation: Loss && Training
End of explanation
"""
n_steps = T.scalar(dtype='int32')
feedback_loop = Recurrence(
state_variables={rnn.new_cell:rnn.prev_cell,
rnn.new_hid:rnn.prev_hid,
rnn.next_token:rnn.inp},
tracked_outputs=[rnn.next_token_probas,],
batch_size=1,
n_steps=n_steps,
unroll_scan=False,
)
generated_tokens = get_output(feedback_loop[rnn.next_token])
generate_sample = theano.function([n_steps],generated_tokens,updates=feedback_loop.get_automatic_updates())
def generate_string(length=MAX_LEN):
output_indices = generate_sample(length)[0]
return ''.join(tokens[i] for i in output_indices)
generate_string()
"""
Explanation: generation
here we re-wire the recurrent network so that its output is fed back to its input
End of explanation
"""
def sample_batch(data, batch_size):
rows = data[np.random.randint(0,len(data),size=batch_size)]
return rows
print("Training ...")
#total N iterations
n_epochs=100
# how many minibatches are there in the epoch
batches_per_epoch = 500
#how many training sequences are processed in a single function call
batch_size=32
loss_history = []
for epoch,t in enumerate(np.logspace(0,-2,n_epochs)):
rnn.temp.set_value(np.float32(t))
avg_cost = 0;
for _ in range(batches_per_epoch):
avg_cost += train_step(sample_batch(names_ix,batch_size))
loss_history.append(avg_cost)
print("\n\nEpoch {} average loss = {}".format(epoch, avg_cost / batches_per_epoch))
print "Generated names"
for i in range(10):
print generate_string(),
plt.plot(loss_history)
"""
Explanation: Model training
Here you can tweak parameters or insert your generation function
Once something word-like starts generating, try increasing seq_length
End of explanation
"""
|
BoasWhip/Black | Notebook/M269 Unit 4 Notes -- Search.ipynb | mit | def quickSelect(k, aList):
if len(aList) == 1:
return aList[0] # Base case
pivotValue = aList[0]
leftPart = []
rightPart = []
for item in aList[1:]:
if item < pivotValue:
leftPart.append(item)
else:
rightPart.append(item)
if len(leftPart) >= k:
return quickSelect(k, leftPart)
elif len(leftPart) == k - 1:
return pivotValue
else:
return quickSelect(k - len(leftPart) -1, rightPart)
print("Median:", quickSelect(6, [2, 36, 5, 21, 8, 13, 11, 20, 4, 1]))
def quickSelect(k, aList):
if len(aList) == 1: return aList[0]
pivotValue = aList[0]
leftPart = [x for x in aList[1:] if x < pivotValue]
rightPart = [x for x in aList[1:] if not x < pivotValue]
if len(leftPart) >= k: return quickSelect(k, leftPart)
elif len(leftPart) == k - 1: return pivotValue
else: return quickSelect(k - len(leftPart) -1, rightPart)
print("Median:", quickSelect(6, [2, 36, 5, 21, 8, 13, 11, 20, 4, 1]))
"""
Explanation: 4. Searching
4.1 Searching Lists
Algorithm: Selection
Finding the median of a collection of numbers is a selection problem with $k=(n+1)/2$ if $n$ is odd (and, if $n$ is even, the median is the mean of the $k$th and $(k+1)$th smallest items, where $k=n/2$).
Initial Insight
Choose a value from $S$, to be used as <b>pivotValue</b>. Then divide the list into two partitions, <b>leftPart</b> (containing the list items that are smaller than <b>pivotValue</b>) and <b>rightPart</b> (containing the list items that are greater than <b>pivotValue</b>).
If the $k$th smallest item has been found, stop. Otherwise, select the partition that must contain the $k$th smallest item, and do the whole thing again with this partition.
Specification
<table>
<tr>
<th>Name:</th>
<td><b>Selection</b></td>
</tr>
<tr>
<th>Inputs:</th>
<td>A sequence of integers $S = \{s_1, s_2, s_3, ..., s_n\}$<br/>An integer $k$</td>
</tr>
<tr>
<th>Outputs:</th>
<td>An integer $x$</td>
</tr>
<tr>
<th>Preconditions:</th>
<td>Length of $S>0$ and $k>0$ and $k\le n$</td>
</tr>
<tr>
<th>Postcondition:</th>
<td>$x$ is the $k$th smallest item in $S$</td>
</tr>
</table>
Code
End of explanation
"""
def basicStringSearch(searchString, target):
searchIndex = 0
lenT = len(target)
lenS = len(searchString)
while searchIndex + lenT <= lenS:
targetIndex = 0
while targetIndex < lenT and target[targetIndex] == searchString[targetIndex + searchIndex]:
targetIndex += 1
if targetIndex == lenT:
return searchIndex
searchIndex += 1
return -1
# Test Code
for target, index in [('per', 0), ('lta', 14), ('ad', 10), ('astra', -1)]:
print(basicStringSearch('per ardua ad alta', target)==index)
"""
Explanation: Remarks
The crucial step (<i>cf.</i> <b>Quick Sort</b>) that determines whether we have best case or worst case performance is the choice of the pivot – if we are really lucky we will get a value that cuts down the list the algorithm needs to search very substantially at each step.<br/><br/>
The algorithm is divide-and-conquer and each iteration makes the sub-problem substantially smaller. In <b>Quick Sort</b>, both partitions are sorted recursively and, provided that the pivot, at each stage, divides the list up into equal parts, we achieve $O(n \log n)$ complexity.<br/><br/>
However, in the <b>Selection</b> algorithm we know which partition to search, so we only deal with one of them on each recursive call and as a result it is even more efficient. Hence, it can be shown that its complexity is $O(n)$.
4.2 Searching for patterns
It often happens that we need to search through a string of characters to find an occurrence (if there is one) of a given pattern, e.g. genetics and DNA searches, keyword searches.
Basic string search
Algorithm: StringMatch
We are representing the sequence to be searched simply as a string of characters, referred to as the search string $S$, a shorter sequence is the target string $T$ and we are trying to find where the first occurrence of $T$ is, if it is present in $S$.
Initial Insight
Repeatedly shift $T$ one place along $S$ and then compare the characters of $T$ with those of $S$. Do this until a match of $T$ in $S$ is found, or the end of $S$ is reached.
Specification
<table>
<tr>
<th>Name:</th>
<td><b>StringMatch</b></td>
</tr>
<tr>
<th>Inputs:</th>
<td>A search string $S = (s_1, s_2, s_3, ..., s_n)$<br/>A target string $T = (t_1, t_2, t_3, ..., t_m)$</td>
</tr>
<tr>
<th>Outputs:</th>
<td>An integer $x$</td>
</tr>
<tr>
<th>Preconditions:</th>
<td>$m\le n$, $m>0$ and $n>0$</td>
</tr>
<tr>
<th>Postcondition:</th>
<td>If there is an occurrence of $T$ in $S$, $x$ is the start position of the first occurrence of $T$ in $S$; otherwise $x = -1$</td>
</tr>
</table>
Code
End of explanation
"""
def buildShiftTable(target, alphabet):
shiftTable = {}
for character in alphabet:
shiftTable[character] = len(target) + 1
for i in range(len(target)):
char = target[i]
shift = len(target) - i
shiftTable[char] = shift
return shiftTable
def quickSearch (searchString, target, alphabet):
shiftTable = buildShiftTable(target, alphabet)
searchIndex = 0
while searchIndex + len(target) <= len(searchString):
targetIndex = 0
# Compares the strings
while targetIndex < len(target) and target[targetIndex] == searchString[searchIndex + targetIndex]:
targetIndex = targetIndex + 1
# Return index if target found
if targetIndex == len(target): return searchIndex
# Continue search with new shift value or exit
if searchIndex + len(target) < len(searchString):
next = searchString[searchIndex + len(target)]
shift = shiftTable[next]
searchIndex = searchIndex + shift
else:
return -1
return -1
"""
Explanation: Remarks
It becomes immediately apparent when implement that this algorithm would consist of two nested loops leading to complexity $O(mn) > O(m^2)$.<br/><br/>
We know that if the character in $S$ following the failed comparison with $T$ is not in $T$ then there is no need to slide along one place to do another comparison. We should slide to the next point beyond it. This gives us the basis for an improved algorithm.
Quick search
Initial Insight
For each character in $T$ calculate the number of positions to shift $T$ if a comparison fails, according to where (if at all) that character appears in $T$.<br/><br/>
Repeatedly compare the characters of $T$ with those of $S$. If a comparison fails, examine the next character along in $S$ and shift $T$ by the calculated shift distance for that character.<br/><br/>
Do this until an occurrence of $T$ in $S$ is found, or the end of $S$ is reached.
Remarks
An important point to note first of all is that the part of the algorithm calculating the shifts depends entirely on an analysis of the target string $T$ – there is no need to examine the search string $S$ at all because for any character in $S$ that is not in $T$, the shift is a fixed distance.<br/><br/>
The database is called a <b>shift table</b> and it stores a <b>shift distance</b> for each character in the domain of $S$ – e.g. for each character of the alphabet, or say, all upper and lower case plus punctuation.<br/><br/>
The <b>shift distance</b> is calculated according to the following rules:
<ol>
<li>If the character does not appear in T, the shift distance is one more than the length of T.</li>
<li>If the character does appear in T, the shift distance is the first position at which it appears, counting from right to left and starting at 1. (Hence when a character appears more than once in $T$, the lowest such position is kept.)</li>
</ol>
Suppose $S = $'GGGGGAGGCGGCGGT'. Then for target string $T = $'TCCACC', we have:
<table>
<tr>
<th>G</th>
<th>A</th>
<th>C</th>
<th>T</th>
</tr>
<tr>
<td>7</td>
<td>3</td>
<td>1</td>
<td>6</td>
</tr>
</table>
and if $T = $'TGGCG', we have:
<table>
<tr>
<th>G</th>
<th>A</th>
<th>C</th>
<th>T</th>
</tr>
<tr>
<td>1</td>
<td>6</td>
<td>2</td>
<td>5</td>
</tr>
</table>
<br/>
Once the shift table has been computed, the search part of the quick search algorithm is similar to the basic string search algorithm, except that at the end of each failed attempt we look at the next character along in $S$ that is beyond $T$ and use this to look up in the shift table how many steps to slide $T$.<br/>
We implement the <b>shift table</b> as a dictionary in Python:
Code
End of explanation
"""
theAlphabet = {'G', 'A', 'C', 'T'}
stringToSearch = 'ATGAATACCCACCTTACAGAAACCTGGGAAAAGGCAATAAATATTATAAAAGGTGAACTTACAGAAGTAA'
for thetarget in ['ACAG', 'AAGTAA', 'CCCC']:
print(quickSearch(stringToSearch, thetarget, theAlphabet))
"""
Explanation: Tests
End of explanation
"""
prefixTable = [0, 1, 0, 0, 0, 1, 2, 3, 4, 0, 0, 0, 1, 2]
"""
Explanation: Remarks
The basic brute-force algorithm we wrote first will work fine with relatively short search strings but, as with all algorithms, inputs of huge size may overwhelm it. For example, DNA strings can be billions of bases long, so algorithmic efficiency can be vital. We noted already that the complexity of the basic string search can be as bad as O(nm) in the worst case.<br/><br/>
As for the quick search algorithm, research has shown that its average-case performance is good but, unfortunately, its worst case behaviour is still O(mn).<br/><br/>
Knuth–Morris–Pratt (KMP)
Better algorithms have been developed. One of the best-known efficient search algorithms is the <b>Knuth–Morris–Pratt (KMP)</b> algorithm. A full description of the precise details of the KMP algorithm is beyond the scope of this text.
Algorithm: Knuth–Morris–Pratt (KMP)
The <b>KMP</b> algorithm is in two parts:
<ol>
<li>Build a table of the lengths of prefix matches up to every character in the target string, $T$.</li>
<li>Move along the search string, $S$, using the information in the table to do the shifting and compare.</li>
</ol>
Once the prefix table has been built, the actual search in the second step proceeds like the other string-searching algorithms above, but when a mismatch is detected the algorithm uses the prefix table to decide how to shift $T$. The problem is to know if these prefix matches exist and – if they do – how long the matching substrings are.</br>
The prefix will then be aligned as shown in Figure 4.17 and comparison can continue at the next character in S.
If you want to take the trouble, you can verify that the final table will be:
End of explanation
"""
# Helper function for kmpSearch()
def buildPrefixTable(target):
#The first line of code just builds a list that has len(target)
#items all of which are given the default value 0
prefixTable = [0] * len(target)
q = 0
for p in range(1, len(target)):
while q > 0 and target[q] != target[p]:
q = prefixTable[q - 1]
if target[q] == target[p]:
q = q + 1
prefixTable[p] = q
return prefixTable
def kmpSearch(searchString, target):
    n = len(searchString)
    m = len(target)
    prefixTable = buildPrefixTable(target)
    q = 0  # number of characters of target matched so far
    for i in range(n):
        # On a mismatch, fall back using the prefix table
        # instead of restarting the comparison from scratch
        while q > 0 and target[q] != searchString[i]:
            q = prefixTable[q - 1]
        if target[q] == searchString[i]:
            q = q + 1
        if q == m:
            return i - m + 1  # index where the match begins
    return -1  # no match found
"""
Explanation: Code
End of explanation
"""
stringToSearch = 'ATGAATACCCACCTTACAGAAACCTGGGAAAAGGCAATAAATATTATAAAAGGTGAACTTACAGAAGTAA'
for thetarget in ['ACAG', 'AAGTAA', 'CCCC']:
print(kmpSearch(stringToSearch, thetarget))
"""
Explanation: Tests
End of explanation
"""
set_of_integers = [54, 26, 93, 17, 77, 31]
hash_function = lambda x: [y % 11 for y in x]
hash_vals = hash_function(set_of_integers)
hash_vals
"""
Explanation: Remarks
What about the complexity of the KMP algorithm? Computing the prefix table takes significant effort but in fact there is an efficient algorithm for doing it. Overall, the KMP algorithm has complexity $O(m + n)$. Since $n$ is usually enormously larger than $m$ (think of searching a DNA string of billions of bases), $m$ is usually dominated by $n$, so this means that KMP has effective complexity $O(n)$.
Other Algorithms
String search is an immensely important application in modern computing, and at least 30 efficient algorithms have been developed for the task. Many of these depend on the principle embodied in the quick search and KMP algorithms – shifting the target string an appropriate distance along the search string at each step, based on information in a table. The <b>Boyer–Moore</b> algorithm, for example, combines elements of both these two algorithms. This algorithm is widely used in practical applications.
There are also string-search algorithms that work in entirely different ways from the examples we have looked at. Generally, these are beyond the scope of this text, but some are based on hashing functions, which we discuss next.
4.3 Hashing and Hash Tables
Hashing
We have seen how we are able to make improvements in search algorithms by taking advantage of information about where items are stored in the collection with respect to one another. For example, by knowing that a list was ordered, we could search in logarithmic time using a binary search. In this section we will attempt to go one step further by building a data structure that can be searched in $O(1)$ time. This concept is referred to as <b>hashing</b>.
In order to do this, we will need to know even more about where the items might be when we go to look for them in the collection. If every item is where it should be, then the search can use a single comparison to discover the presence of an item.
A hash table is a collection of items which are stored in such a way as to make it easy to find them later. Each position of the hash table, often called a slot, can hold an item and is named by an integer value starting at 0.
Below is a hash table of size $m=11$ implemented in Python as a list, with empty slots initialized to a default <b>None</b> value:
<img src="http://interactivepython.org/courselib/static/pythonds/_images/hashtable.png">
The mapping between an item and the slot where that item belongs in the hash table is called the <b>hash function</b>. The hash function will take any item in the collection and return an integer in the range of slot names, between $0$ and $m-1$.
Our first hash function, sometimes referred to as the <b>remainder method</b>, simply takes an item and divides it by the table size, returning the remainder as its hash value:
End of explanation
"""
word = 4365554601
word = str(word)
step = 2
slots = 11
folds = [int(word[n: n+2]) for n in range(0, len(word), step)]
print(folds)
print(sum(folds))
print(sum(folds)%slots)
"""
Explanation: Once the hash values have been computed, we can insert each item into the hash table at the designated position:
<img src="http://interactivepython.org/courselib/static/pythonds/_images/hashtable2.png">
Now when we want to search for an item, we simply use the hash function to compute the slot name for the item and then check the hash table to see if it is present. This searching operation is $O(1)$, since a constant amount of time is required to compute the hash value and then index the hash table at that location. If everything is where it should be, we have found a constant time search algorithm.
It immediately becomes apparent that this technique is going to work only if each item maps to a unique location in the hash table. When two or more items need to be placed in the same slot, we have a <b>collision</b> (it may also be called a “clash”). Clearly, collisions create a problem for the hashing technique. We will discuss them in detail later.
Hash Functions
Given a collection of items, a hash function that maps each item into a unique slot is referred to as a <b>perfect hash function</b>.
If we know the items and the collection will never change, then it is possible to construct a perfect hash function (refer to the exercises for more about perfect hash functions). Unfortunately, given an arbitrary collection of items, there is no systematic way to construct a perfect hash function. Luckily, we do not need the hash function to be perfect to still gain performance efficiency.
One way to always have a perfect hash function is to increase the size of the hash table so that each possible value in the item range can be accommodated. This guarantees that each item will have a unique slot. Although this is practical for small numbers of items, it is not feasible when the number of possible items is large. For example, if the items were nine-digit Social Security numbers, this method would require almost one billion slots. If we only want to store data for a class of 25 students, we will be wasting an enormous amount of memory.
Our goal is to create a hash function that minimizes the number of collisions, is easy to compute, and evenly distributes the items in the hash table. There are a number of common ways to extend the simple remainder method. We will consider a few of them here.
The <b>folding method</b> for constructing hash functions begins by dividing the item into equal-size pieces (the last piece may not be of equal size). These pieces are then added together to give the resulting hash value.
For example, if our item was the phone number $436-555-4601$, we would take the digits and divide them into groups of $2$ and sum them; that is, $43+65+55+46+01=210$. If we assume our hash table has $11$ slots, then we need to perform the extra step of dividing by $11$ and keeping the remainder. In this case $210 \bmod 11 = 1$, so the phone number $436-555-4601$ hashes to slot $1$. (Some folding methods go one step further and reverse every other piece before the addition. For the above example, we get $43+56+55+64+01=219$, which gives $219 \bmod 11 = 10$.)
End of explanation
"""
set_of_integers = [54, 26, 93, 17, 77, 31]
hash_function = lambda x: [int(str(y**2)[1:-1])%11 for y in x]
hash_vals = hash_function(set_of_integers)
hash_vals
"""
Explanation: Another numerical technique for constructing a hash function is called the <b>mid-square method</b>. We first square the item, and then extract <i>some portion</i> of the resulting digits. For example, if the item were $44$, we would first compute $44^2=1,936$. By extracting the middle two digits, $93$, and performing the remainder step, we get a remainder of $5$ on division by $11$.
End of explanation
"""
word = 'cat'
sum([ord(l) for l in word]) % 11
"""
Explanation: We can also create hash functions for character-based items such as strings. The word “cat” can be thought of as a sequence of ordinal (Unicode) values. Summing these values and then taking the remainder from division by $11$ gives a hash value:
End of explanation
"""
sum([(ord(word[x]) * (x + 1)) for x in range(len(word))]) % 11
"""
Explanation: Anagrams contain the same characters and therefore sum to the same hash value. To avoid such collisions, we could weight each character by its position:
End of explanation
"""
set_of_integers = [123456, 431941, 789012, 60375]
print(set_of_integers)
set_of_integers = [((int(str(x)[0:2]) + int(str(x)[2:4]) + int(str(x)[4:])) % 80) -1 for x in set_of_integers]
print(set_of_integers)
"""
Explanation: You may be able to think of a number of additional ways to compute hash values for items in a collection. The important thing to remember is that the hash function has to be efficient so that it does not become the dominant part of the storage and search process. If the hash function is too complex, then it becomes more work to compute the slot name than it would be to simply do a basic sequential or binary search as described earlier. This would quickly defeat the purpose of hashing.
Collision Resolution
If the hash function is perfect, collisions never occur. However, since this is often not possible. When two items hash to the same slot, we must have a systematic method for placing the second item in the hash table. This process is called <b>collision resolution</b>.
One method for resolving collisions looks into the hash table and tries to find another open slot to hold the item that caused the collision. A simple way to do this is to start at the original hash value position and then move in a sequential manner through the slots until we encounter the first slot that is empty.
Note that we may need to go back to the first slot (circularly) to cover the entire hash table. This collision resolution process is referred to as <b>open addressing</b> in that it tries to find the next open slot or address in the hash table. By systematically visiting each slot one at a time, we are performing an open addressing technique called <b>linear probing</b>. Using the hash values from the remainder method example, when we add, say, $44$ and $55$:
<img src="http://interactivepython.org/courselib/static/pythonds/_images/clustering.png">
Once we have built a hash table using open addressing and linear probing, it is essential that we use the same method to search for items; we are hence forced to follow the same probe sequence to find $44$ and $55$.
So, a disadvantage of linear probing is the tendency for <b>clustering</b>: items become clustered in the table. This means that if many collisions occur at the same hash value, a number of surrounding slots will be filled by the linear probing resolution. This has an impact on other items being inserted later, since a cluster of values hashing to the same slot must be skipped before an open position is finally found.
One way to deal with clustering is to extend the linear probing technique so that instead of looking sequentially for the next open slot, we skip slots, thereby more evenly distributing the items that have caused collisions. This potentially reduces clustering. With a “plus 3” probe, for example, once a collision occurs we look at every third slot until we find one that is empty.
The general name for this process of looking for another slot after a collision is <b>rehashing</b>. With simple linear probing, in general, $rehash(pos) = (pos + skip) \bmod sizeoftable$. It is important to note that the size of the “skip” must be such that all the slots in the table will eventually be visited. Otherwise, part of the table will be unused. To ensure this, it is often suggested that the table size be a prime number. This is the reason we have been using $11$ in our examples.
A variation of the linear probing idea is called <b>quadratic probing</b>. Instead of using a constant “skip” value, we use a rehash function that increments the hash value by 1, 3, 5, 7, 9, and so on. This means that if the first hash value is $h$, the successive values are $h+1$, $h+4$, $h+9$, $h+16$, and so on. In other words, quadratic probing uses a skip consisting of successive <i>perfect squares</i>:
<img src="http://interactivepython.org/courselib/static/pythonds/_images/linearprobing2.png">
An alternative method for handling the collision problem is to allow each slot to hold a reference to a collection (or chain) of items. <b>Chaining</b> allows many items to exist at the same location in the hash table. When collisions happen, the item is still placed in the proper slot of the hash table. As more and more items hash to the same location, the difficulty of searching for the item in the collection increases:
<img src="http://interactivepython.org/courselib/static/pythonds/_images/chaining.png">
When we want to search for an item, we use the hash function to generate the slot where it should reside. Since each slot holds a collection, we use a searching technique to decide whether the item is present. The advantage is that on the average there are likely to be many fewer items in each slot, so the search is perhaps more efficient.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from astropy.table import QTable
"""
Explanation: Plotting and Fitting with Python
matplotlib is the main plotting library for Python
End of explanation
"""
t = np.linspace(0,2,100) # 100 points linearly spaced between 0.0 and 2.0
s = np.cos(2*np.pi*t) * np.exp(-t)   # s is a function of t
plt.plot(t,s)
"""
Explanation: Simple Plotting
End of explanation
"""
plt.style.available
plt.style.use('ggplot')
plt.plot(t,s)
plt.xlabel('time (s)')
plt.ylabel('voltage (mV)')
plt.title('This is a title')
plt.ylim(-1.5,1.5)
plt.plot(t, s, color='b', marker='None', linestyle='--'); # adding the ';' at the end suppresses the Out[] line
mask1 = np.where((s>-0.4) & (s<0))
plt.plot(t, s, color='b', marker='None', linestyle='--')
plt.plot(t[mask1],s[mask1],color="g",marker="o",linestyle="None",markersize=8);
"""
Explanation: Simple plotting - with style
The default style of matplotlib is a bit lacking in style. Some would term it ugly. The new version of matplotlib has added some new styles that you can use in place of the default. Changing the style will affect all of the rest of the plots on the notebook.
Examples of the various styles can be found here
End of explanation
"""
from astropy import units as u
from astropy.visualization import quantity_support
quantity_support()
v = 10 * u.m / u.s
t2 = np.linspace(0,10,1000) * u.s
y = v * t2
plt.plot(t2,y)
"""
Explanation: In addition, you can specify colors in many different ways:
Grayscale intensities: color = '0.8'
RGB triplets: color = (0.3, 0.1, 0.9)
RGB triplets (with transparency): color = (0.3, 0.1, 0.9, 0.4)
Hex strings: color = '#77ff00'
HTML color names: color = 'Chartreuse'
a name from the xkcd color survey prefixed with 'xkcd:' (e.g., 'xkcd:sky blue')
matplotlib will work with Astropy units
End of explanation
"""
#Histogram of "h" with 20 bins
np.random.seed(42)
h = np.random.randn(500)
plt.hist(h, bins=20, facecolor='MediumOrchid');
mask2 = np.where(h>0.0)
np.random.seed(42)
j = np.random.normal(2.0,1.0,300) # normal dist, ave = 2.0, std = 1.0
plt.hist(h[mask2], bins=20, facecolor='#b20010', histtype='stepfilled')
plt.hist(j, bins=20, facecolor='#0200b0', histtype='stepfilled', alpha = 0.30);
"""
Explanation: Simple Histograms
End of explanation
"""
fig,ax = plt.subplots(1,1) # One window
fig.set_size_inches(11,8.5) # (width,height) - letter paper landscape
fig.tight_layout() # Make better use of space on plot
ax.set_xlim(0.0,1.5)
ax.spines['bottom'].set_position('zero') # Move the bottom axis line to x = 0
ax.set_xlabel("This is X")
ax.set_ylabel("This is Y")
ax.plot(t, s, color='b', marker='None', linestyle='--')
ax.text(0.8, 0.6, 'Bad Wolf', color='green', fontsize=36) # You can place text on the plot
ax.vlines(0.4, -0.4, 0.8, color='m', linewidth=3) # vlines(x, ymin, ymax)
ax.hlines(0.8, 0.2, 0.6, color='y', linewidth=5) # hlines(y, xmin, xmax)
fig.savefig('fig1.png', bbox_inches='tight')
import glob
glob.glob('*.png')
"""
Explanation: You have better control of the plot with the object oriented interface.
While most plt functions translate directly to ax methods (such as plt.plot() → ax.plot(), plt.legend() → ax.legend(), etc.), this is not the case for all commands.
In particular, functions to set limits, labels, and titles are slightly modified.
For transitioning between matlab-style functions and object-oriented methods, make the following changes:
plt.xlabel() → ax.set_xlabel()
plt.ylabel() → ax.set_ylabel()
plt.xlim() → ax.set_xlim()
plt.ylim() → ax.set_ylim()
plt.title() → ax.set_title()
End of explanation
"""
data_list = glob.glob('./MyData/12_data*.csv')
data_list
fig,ax = plt.subplots(1,1) # One window
fig.set_size_inches(11,8.5) # (width,height) - letter paper landscape
fig.tight_layout() # Make better use of space on plot
ax.set_xlim(0.0,80.0)
ax.set_ylim(15.0,100.0)
ax.set_xlabel("This is X")
ax.set_ylabel("This is Y")
for file in data_list:
data = QTable.read(file, format='ascii.csv')
ax.plot(data['x'], data['y'],marker="o",linestyle="None",markersize=7,label=file)
ax.legend(loc=0,shadow=True);
"""
Explanation: Plotting from multiple external data files
End of explanation
"""
fig, ax = plt.subplots(2,2) # 2 rows 2 columns
fig.set_size_inches(11,8.5) # width, height
fig.tight_layout() # Make better use of space on plot
ax[0,0].plot(t, s, color='b', marker='None', linestyle='--') # Plot at [0,0]
ax[0,1].hist(h, bins=20, facecolor='MediumOrchid') # Plot at [0,1]
ax[1,0].hist(j,bins=20, facecolor='HotPink', histtype='stepfilled') # Plot at [1,0]
ax[1,0].vlines(2.0, 0.0, 50.0, color='xkcd:seafoam green', linewidth=3)
ax[1,1].set_xscale('log') # Plot at [1,1] - x-axis set to log
ax[1,1].plot(t, s, color='r', marker='None', linestyle='--');
"""
Explanation: Legend loc codes:
0 best 6 center left
1 upper right 7 center right
2 upper left 8 lower center
3 lower left 9 upper center
4 lower right 10 center
Subplots
subplot(rows,columns)
Access each subplot like a matrix. [x,y]
For example: subplot(2,2) makes four panels with the coordinates:
End of explanation
"""
T = QTable.read('M15_Bright.csv', format='ascii.csv')
T[0:3]
fig, ax = plt.subplots(1,2) # 1 row, 2 columns
fig.set_size_inches(15,5)
fig.tight_layout()
# The plot for [0]
# Notice that for a single row of plots you do not need to specify the row
ax[0].set_xlim(-40,140)
ax[0].set_ylim(-120,120)
ax[0].set_aspect('equal') # Force intervals in x = intervals in y
ax[0].invert_xaxis() # RA increases to the left!
ax[0].set_xlabel("$\Delta$RA [sec]")
ax[0].set_ylabel("$\Delta$Dec [sec]")
ax[0].plot(T['RA'], T['Dec'],color="g",marker="o",linestyle="None",markersize=5);
# The plot for [1]
BV = T['Bmag'] - T['Vmag']
V = T['Vmag']
ax[1].set_xlim(-0.25,1.5)
ax[1].set_ylim(12,19)
ax[1].set_aspect(1/6) # Make 1 unit in X = 6 units in Y
ax[1].invert_yaxis() # Magnitudes increase to smaller values
ax[1].set_xlabel("B-V")
ax[1].set_ylabel("V")
ax[1].plot(BV,V,color="b",marker="o",linestyle="None",markersize=5);
# overplotting
maskC = np.where((V < 16.25) & (BV < 0.55))
ax[0].plot(T['RA'][maskC], T['Dec'][maskC],color="r",marker="o",linestyle="None",markersize=4, alpha=0.5)
ax[1].plot(BV[maskC], V[maskC],color="r",marker="o",linestyle="None",markersize=4, alpha=0.5);
"""
Explanation: An Astronomical Example - Color Magnitude Diagrams
End of explanation
"""
D1 = QTable.read('data1.csv', format='ascii.csv')
D1[0:2]
plt.plot(D1['x'],D1['y'],marker="o",linestyle="None",markersize=5);
# 1-D fit y = ax + b
Fit1 = np.polyfit(D1['x'],D1['y'],1)
Fit1 # The coefficients of the fit (a,b)
Yfit = np.polyval(Fit1,D1['x']) # The polynomial of Fit1 applied to the points D1['x']
plt.plot(D1['x'], D1['y'], marker="o", linestyle="None", markersize=5)
plt.plot(D1['x'], Yfit, linewidth=4, color='c', linestyle='--')
D2 = QTable.read('data2.csv', format='ascii.csv')
plt.plot(D2['x'],D2['y'],marker="o",linestyle="None",markersize=5);
# 2-D fit y = ax**2 + bx + c
Fit2 = np.polyfit(D2['x'],D2['y'],2)
Fit2
Yfit = np.polyval(Fit2,D2['x'])
plt.plot(D2['x'], D2['y'], marker="o", linestyle="None", markersize=5)
plt.plot(D2['x'], Yfit, linewidth=3, color='y', linestyle='--');
# Be careful, very high-order fits may be garbage
Fit3 = np.polyfit(D1['x'],D1['y'],20)
xx = np.linspace(0,10,200)
Yfit = np.polyval(Fit3,xx)
plt.plot(D1['x'], D1['y'], marker="o", linestyle="None", markersize=8)
plt.plot(xx, Yfit, linewidth=3, color='m', linestyle='--');
plt.ylim(-20,120)
"""
Explanation: Curve Fitting
End of explanation
"""
D3 = QTable.read('data3.csv', format='ascii.csv')
plt.plot(D3['x'],D3['y'],marker="o",linestyle="None",markersize=5);
from scipy.optimize import curve_fit
"""
Explanation: Fitting a specific function
End of explanation
"""
def ringo(x,a,b):
return a*np.sin(b*x)
Aguess = 75
Bguess = 1.0/5.0
fitpars, error = curve_fit(ringo,D3['x'],D3['y'],p0=[Aguess,Bguess])
# Function to fit = ringo
# X points to fit = D3['x']
# Y points to fit = D3['y']
# Initial guess at values for a,b = [Aguess,Bguess]
print(fitpars)
Z = np.linspace(0,100,1000)
plt.plot(Z, ringo(Z, *fitpars), 'r-')
plt.plot(Z, ringo(Z,Aguess,Bguess), 'g--')
plt.plot(D3['x'],D3['y'],marker="o",linestyle="None",markersize=5);
"""
Explanation: $$ \Large f(x) = a \sin(bx) $$
End of explanation
"""
Aguess = 35
Bguess = 1.0
fitpars, error = curve_fit(ringo,D3['x'],D3['y'],p0=[Aguess,Bguess])
print(fitpars)
plt.plot(Z, ringo(Z, *fitpars), 'r-')
plt.plot(Z, ringo(Z,Aguess,Bguess), 'g--')
plt.plot(D3['x'],D3['y'],marker="o",linestyle="None",markersize=5);
"""
Explanation: Bad initial guesses can lead to very bad fits
End of explanation
"""
theta = np.linspace(0,2*np.pi,1000)
fig = plt.figure()
ax = fig.add_subplot(111,projection='polar')
fig.set_size_inches(6,6) # (width,height) - letter paper landscape
fig.tight_layout() # Make better use of space on plot
ax.plot(theta,theta/5.0,label="spiral")
ax.plot(theta,np.cos(4*theta),label="flower")
ax.legend(loc=2, frameon=False);
"""
Explanation: Polar Plots
End of explanation
"""
fig,ax = plt.subplots(1,1) # One window
fig.set_size_inches(6,6) # (width,height) - letter paper landscape
fig.tight_layout() # Make better use of space on plot
ax.set_aspect('equal')
labels = np.array(['John', 'Paul' ,'George' ,'Ringo']) # Name of slices
sizes = np.array([0.3, 0.15, 0.45, 0.10]) # Relative size of slices
colors = np.array(['r', 'g', 'b', 'c']) # Color of Slices
explode = np.array([0, 0, 0.1, 0]) # Offset slide 3
ax.pie(sizes, explode=explode, labels=labels, colors=colors,
startangle=90, shadow=True);
"""
Explanation: Everyone likes Pie
End of explanation
"""
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111,projection='3d')
fig.set_size_inches(9,9)
fig.tight_layout()
xx = np.cos(3*theta)
yy = np.sin(2*theta)
ax.plot(theta, xx, yy, c = "Maroon")
ax.scatter(theta, xx, yy, c = "Navy", s = 15);
ax.view_init(azim = -140, elev = 15)
"""
Explanation: 3D plots
End of explanation
"""
width = 20
height = 5*9
width * height
"""
Explanation: <center> Python and MySQL tutorial </center>
<center> Author: Cheng Nie </center>
<center> Check chengnie.com for the most recent version </center>
<center> Current Version: Feb 12, 2016</center>
Python Setup
Since most students in this class use Windows 7, I will use Windows 7 for illustration of the setup. Setting up the environment in Mac OS and Linux should be similar. Please note that the code should produce the same results whichever operating system (even on your smart phone) you are using because Python is platform independent.
Download the Python 3.5 version of Anaconda that matches your operating system from this link. You can accept the default options during installation. To see if your Windows is 32 bit or 64 bit, check here
You can save and run this document using the Jupyter notebook (previously known as IPython notebook). Another tool that I recommend would be PyCharm, which has a free community edition.
This is a tutorial based on the official Python Tutorial for Python 3.5.1. If you need a little more motivation to learn this programming language, consider reading this article.
Numbers
End of explanation
"""
tax = 8.25 / 100
price = 100.50
price * tax
price + _
round(_, 2)
"""
Explanation: Calculator
End of explanation
"""
print('spam email')
"""
Explanation: Strings
End of explanation
"""
# This would cause error
print('doesn't')
# One way of doing it correctly
print('doesn\'t')
# Another way of doing it correctly
print("doesn't")
"""
Explanation: show ' and " in a string
End of explanation
"""
print('''
Usage: thingy [OPTIONS]
-h Display this usage message
-H hostname Hostname to connect to
''')
print('''Cheng highly recommends Python programming language''')
"""
Explanation: span multiple lines
End of explanation
"""
word = 'HELP' + 'A'
word
"""
Explanation: slice and index
End of explanation
"""
word[0]
word[4]
# endding index not included
word[0:2]
word[2:4]
# length of a string
len(word)
"""
Explanation: Index in the Python way
End of explanation
"""
a = ['spam', 'eggs', 100, 1234]
a
a[0]
a[3]
a[2:4]
sum(a[2:4])
"""
Explanation: List
End of explanation
"""
a
a[2] = a[2] + 23
a
"""
Explanation: Built-in functions like sum and len are explained in the document too. Here is a link to it.
Mutable
End of explanation
"""
q = [2, 3]
p = [1, q, 4]
p
len(p)
p[1]
p[1][0]
"""
Explanation: Nest lists
End of explanation
"""
x=(1,2,3,4)
x[0]
x[0]=7 # it will raise error since tuple is immutable
"""
Explanation: tuple
similar to list, but immutable (element cannot be changed)
End of explanation
"""
tel = {'jack': 4098, 'sam': 4139}
tel['dan'] = 4127
tel
tel['jack']
del tel['sam']
tel
tel['mike'] = 4127
tel
# Is dan in the dict?
'dan' in tel
for key in tel:
print('key:', key, '; value:', tel[key])
"""
Explanation: dict
End of explanation
"""
x = int(input("Please enter an integer for x: "))
if x < 0:
x = 0
print('Negative; changed to zero')
elif x == 0:
print('Zero')
elif x == 1:
print('Single')
else:
print('More')
"""
Explanation: Quiz: how to print the tel dict sorted by the key?
Control of flow
if
Ask a user to input a number: if it is negative, set x to 0 and report the change; otherwise report whether it is zero, one ('Single'), or more.
End of explanation
"""
a, b = 0, 1 # multiple assignment
while a < 10:
print(a)
a, b = b, a+b
"""
Explanation: while
Fibonacci series: the sum of two elements defines the next with the first two elements to be 0 and 1.
End of explanation
"""
# Measure some strings:
words = ['cat', 'window', 'defenestrate']
for i in words:
print(i, len(i))
"""
Explanation: for
End of explanation
"""
def fib(n): # write Fibonacci series up to n
"""Print a Fibonacci series up to n."""
a, b = 0, 1
while a < n:
print(a)
a, b = b, a+b
fib(200)
fib(2000000000000000) # do not need to worry about the type of a,b
"""
Explanation: Define function
End of explanation
"""
# output for viewing first
import string
import random
# fix the pseudo-random sequences for easy replication
# It will generate the same random sequences
# of nubmers/letters with the same seed.
random.seed(123)
for i in range(50):
# Data values separated by comma(csv file)
print(i+1,random.choice(string.ascii_uppercase),
random.choice(range(6)), sep=',')
# write the data to a file
random.seed(123)
out_file=open('data.csv','w')
columns=['id','name','age']
out_file.write(','.join(columns)+'\n')
for i in range(50):
row=[str(i+1),random.choice(string.ascii_uppercase),
str(random.choice(range(6)))]
out_file.write(','.join(row)+'\n')
else:
out_file.close()
# read data into Python
for line in open('data.csv', 'r'):
print(line)
"""
Explanation: Data I/O
Create some data in Python and populate the database with the created data. We want to create a table with 3 columns: id, name, and age to store information about 50 kids in a day care.
The various modules that extend the basic Python funtions are indexed here.
End of explanation
"""
# crawl_UTD_reviews
# Author: Cheng Nie
# Email: me@chengnie.com
# Date: Feb 8, 2016
# Updated: Feb 12, 2016
from urllib.request import urlopen
num_pages = 2
reviews_per_page = 20
# the file we will save the rating and date
out_file = open('UTD_reviews.csv', 'w')
# the url that we need to locate the page for UTD reviews
url = 'http://www.yelp.com/biz/university-of-texas-at-dallas-\
richardson?start={start_number}'
# the three string patterns we just explained
review_start_pattern = '<div class="review-wrapper">'
rating_pattern = '<i class="star-img stars_'
date_pattern = '"datePublished" content="'
reviews_count = 0
for page in range(num_pages):
print('processing page', page)
# open the url and save the source code string to page_content
html = urlopen(url.format(start_number = page * reviews_per_page))
page_content = html.read().decode('utf-8')
# locate the beginning of an individual review
review_start = page_content.find(review_start_pattern)
while review_start != -1:
        # it means there is at least one more review to be crawled
reviews_count += 1
# get the rating
cut_front = page_content.find(rating_pattern, review_start) \
+ len(rating_pattern)
cut_end = page_content.find('" title="', cut_front)
rating = page_content[cut_front:cut_end]
# get the date
cut_front = page_content.find(date_pattern, cut_end) \
+ len(date_pattern)
cut_end = page_content.find('">', cut_front)
date = page_content[cut_front:cut_end]
# save the data into out_file
out_file.write(','.join([rating, date]) + '\n')
review_start = page_content.find(review_start_pattern, cut_end)
print('crawled', reviews_count, 'reviews so far')
out_file.close()
"""
Explanation: MySQL
Install MySQL 5.7 Workbench first following this link. You might also need to install the prerequisites listed here before you can install the Workbench. The Workbench is an interface to interact with a MySQL database. The actual MySQL database server requires a second step: run the MySQL Installer, then add and install the MySQL servers using the Installer. You can accept the default options during installation. Later, you will connect to MySQL using the password you set during the installation and configuration. I set the password to be pythonClass.
The documentation for MySQL is here.
To get comfortable with it, you might find this tutorial of Structured Query Language(SQL) to be helpful.
Crawl the reviews for UT Dallas at Yelp.com
The University of Texas at Dallas is reviewed on Yelp.com. It shows on this page that it attracted 38 reviews so far from various reviewers. You learn from the webpage that Yelp displays at most 20 recommended reviews per page and we need to go to page 2 to see reviews 21 through 38. You notice that the URL in the address box of your browser changed when you click on the Next page. Previously, on page 1, the URL is:
http://www.yelp.com/biz/university-of-texas-at-dallas-richardson
On page 2, the URL is:
http://www.yelp.com/biz/university-of-texas-at-dallas-richardson?start=20
You learn that Yelp probably uses this ?start=20 to skip (or offset, in MySQL language) the first 20 records and show you the next 18 reviews. You can use this pattern of going to the next page to enumerate all pages of a business on Yelp.com.
In this example, we are going to get the rating (number of stars) and the date for each of these 38 reviews.
The general procedure to crawl any web page is the following:
Look for the string patterns preceding and succeeding the information you are looking for in the source code of the page (the html file).
Write a program to enumerate (for or while loop) all the pages.
For this example, I made an annotated screenshot to illustrate the critical patterns in the Yelp page for UTD reviews.
review_start_pattern is a variable to store the string '<div class="review-wrapper">' that locates the beginning of an individual review.
rating_pattern is a variable to store the string '<i class="star-img stars_' that locates the rating.
date_pattern is a variable to store the string '"datePublished" content="' that locates the date of the rating.
It takes some trial and error to figure out which string patterns are good for locating the information you need in an html file. For example, I found that '<div class="review-wrapper">' appeared exactly 20 times in the webpage, which is a good indication that it corresponds to the 20 individual reviews on the page (the review-wrapper tag seems to imply that too).
End of explanation
"""
word
# first index default to 0 and second index default to the size
word[:2]
# It's equivalent to
word[0:2]
# Everything except the first two characters
word[2:]
# It's equivalent to
word[2:len(word)]
# start: end: step
word[0::2]
"""
Explanation: Quiz: import the crawled file into a table in your database.
More about index
End of explanation
"""
word[0:len(word):2]
"""
Explanation: Target: "HLA", select every other character
End of explanation
"""
word[-1] # The last character
word[-2] # The last-but-one character
word[-2:] # The last two characters
word[:-2] # Everything except the last two characters
"""
Explanation: Negative index
End of explanation
"""
a
a[-2]
a[1:-1]
a[:2] + ['bacon', 2*2]
3*a[:3] + ['Boo!']
"""
Explanation: More about list
End of explanation
"""
# Replace some items:
a[0:2] = [1, 12]
a
# Remove some:
a[0:2] = [] # or del a[0:2]
a
# Insert some:
a[1:1] = ['insert', 'some']
a
# inserting at one position is not the same as changing one element
# a=[1, 12, 100, 1234]
a = [123, 1234]
sum(a)
a[1] = ['insert', 'some']
a
"""
Explanation: Versatile features
End of explanation
"""
# loop way
cubes = []
for x in range(11):
cubes.append(x**3)
cubes
# map way
def cube(x):
return x*x*x
list(map(cube, range(11)))
# list comprehension way
[x**3 for x in range(11)]
"""
Explanation: Target: Get the third power of integers between 0 and 10.
End of explanation
"""
result = []
for i in range(11):
    if i%2 == 0:
        result.append(i)
print(result)
[i for i in range(11) if i%2==0]
l=[1,3,5,6,8,10]
[i for i in l if i%2==0]
"""
Explanation: Use if in list comprehension
Target: find the even numbers from 0 to 10
End of explanation
"""
#
# ----------------------- In Python ------------------
# access table from Python
# connect to MySQL in Python
import mysql.connector
cnx = mysql.connector.connect(user='root',
password='pythonClass',
database='test')
# All DDL (Data Definition Language) statements are
# executed using a handle structure known as a cursor
cursor = cnx.cursor()
#cursor.execute("")
# write the same data to the example table
import random
import string

query0 = '''insert into example (id, name, age) \
values ({id_num},"{c_name}",{c_age});'''
random.seed(123)
for i in range(50):
query1 = query0.format(id_num = i+1,
c_name = random.choice(string.ascii_uppercase),
c_age = random.choice(range(6)))
print(query1)
cursor.execute(query1)
cnx.commit()
"""
Explanation: Use Python to access MySQL database
Since the official MySQL 5.7.11 only provides support for Python up to version 3.4, we need to install a package to support Python 3.5. Execute the following line in the Windows command line to install it.
End of explanation
"""
#
# ----------------------- In Python ------------------
#
cursor.execute('select * from e_copy;')
for i in cursor:
print(i)
#
# ----------------------- In Python ------------------
#
# # example for adding new info for existing record
# cursor.execute('alter table e_copy add mother_name varchar(1) default null')
query='update e_copy set mother_name="{m_name}" where id={id_num};'
# random.seed(333)
for i in range(50):
query1=query.format(m_name = random.choice(string.ascii_uppercase),id_num = i+1)
print(query1)
cursor.execute(query1)
cnx.commit()
#
# ----------------------- In Python ------------------
#
# example for insert new records
query2='insert into e_copy (id, name,age,mother_name) \
values ({id_num},"{c_name}",{c_age},"{m_name}")'
for i in range(10):
query3=query2.format(id_num = i+60,
c_name = random.choice(string.ascii_uppercase),
c_age = random.randint(0,6),
m_name = random.choice(string.ascii_uppercase))
print(query3)
cursor.execute(query3)
cnx.commit()
# check if you've updated the data successfully in MySQL
"""
Explanation: To get a better understanding of the table we just created, we will use the MySQL command line again.
End of explanation
"""
import re
# digits
# find all the numbers
infile=open('digits.txt','r')
content=infile.read()
print(content)
# Find all the numbers in the file
numbers=re.findall(r'\d+',content)
for n in numbers:
print(n)
# find equations
equations=re.findall(r'(\d+)=\d+',content)
for e in equations:
print(e)
# substitute equations to correct them
print(re.sub(r'(\d+)=\d+',r'\1=\1',content))
# Save to file
print(re.sub(r'(\d+)=\d+',r'\1=\1',content), file = open('digits_corrected.txt', 'w'))
"""
Explanation: Regular expression in Python
End of explanation
"""
uliang/First-steps-with-the-Python-language | Day 2 - Unit 3.2.ipynb | mit
PRSA.head()
"""
Explanation: 2. Density based plots with matplotlib
In this section, we will be looking at density based plots. Plots like these address a problem with big data: how does one visualise a plot with 10,000+ data points while avoiding overplotting?
End of explanation
"""
plt.plot( PRSA.TEMP, PRSA["pm2.5"], 'o', color="steelblue", alpha=0.5)
plt.ylabel("$\mu g/m^3$")
plt.title("PM 2.5 readings as a function of temperature (Celsius)")
"""
Explanation: Source : https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title("Histogram of dew point readings")
ax.set_xlabel("Dew point (C$^\circ$)")
# Enter plotting code below here:
ax.hist(PRSA.DEWP)
"""
Explanation: As one can see, there's not much one can say about the structure of the data, because the entire region below 400 $\mu g/m^3$ is filled solid with blue.
It is here, that a density plot helps mitigate this problem. The central idea is that individual data points are not so important in as much as they contribute to revealing the underlying distribution of the data. In other words, for large amounts of data, we want to visualize the distribution instead of visualizing how individual datapoints are placed.
For this lesson, we will look at this data set and others to investigate the use of other plotting functions in matplotlib.
2.1 Learning objectives
To use histograms to visualize distribution of univariate data.
To customize histograms.
To use 2D histogram plots and hexbin plots to plot 2D distributions.
To plot colorbars to annotate such plots
3. Histograms
Histograms are created using the hist command. We illustrate this using the PRSA data set. Let's say that we are interested in plotting the distribution of DEWP.
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title("Histogram of dew point readings")
ax.set_xlabel("Dew point (C$^\circ$)")
# Try typing in ax.hist(PRSA.DEWP, ...) with various customization options.
"""
Explanation: 3.1 Understanding the plotting code
So how was this produced?
We initialized a figure object using plt.figure.
Next we created an axis within the figure, using the command fig.add_subplot. We passed the argument 111 to the function, which is shorthand for a 1-by-1 grid of axes, selecting the first (and only) one.
With an ax object created, we then set a title using the set_title function and label the x-axis.
The histogram proper is plotted by calling the hist method on the axes instance. We merely need to pass the (1-dimensional) array of data to the function.
Notice the output that is produced. It's not very nice and probably needs some prettying up. But as a quick exploratory plot, it does its job. Notice the extra textual output. These can be suppressed with the ; written at the end of the last statement in the code block.
3.2 Customizing our histogram
The chart above can be customized to our liking by passing keyword arguments to the function.
Number of bins. The number of bins may be adjusted with the bins= keyword argument. However, do note that more bins does not translate into a better chart. With more bins, one tends to pick up much more variation between data points (noise) than is necessary. Therefore, try to choose a value which gives you the best sense of how the data is distributed, between the extremes of no detail (small bins value) and a noisy chart (high bins value).
Normalization. Setting normed=True (it is False by default) means that the total area of the histogram is set to 1. This setting is useful to compare distributions of variables on different orders of magnitude.
Is a log scale needed?. log=True may be used if you want the count (or relative counts) to be plotted on a log scale. This may be useful if the counts in different bins differ by huge orders of magnitude. This is especially true for data modelled by power-law distributions.
Cumulative sums. Sometimes, you want to plot the ogive instead. Enable this by setting cumulative=True.
Colors. Selecting the best color, especially when comparing multiple histograms on the same axis, is crucial. You can choose different colors using the color= argument. You may use any hexadecimal color code (e.g. #660000), CSS color name, or matplotlib color abbreviation (e.g. c for cyan, m for magenta, b for blue, etc.).
Lines between bars. A histogram is more presentable if one draws lines between bars. Enable this by setting lw= to an appropriate thickness (any value around 0.5 is ok) and giving it a color by setting ec=.
Transparency control. This is useful if there are multiple histograms. Use alpha= and enter any value from 0 (fully transparent) to 1 (fully opaque).
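A sketch combining several of these options is shown below. Synthetic dew-point readings stand in for PRSA.DEWP here (the real data is not loaded in this sketch), and density= is the modern spelling of normed= in recent matplotlib versions:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen so this runs headless
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
dewp = rng.normal(loc=2.0, scale=14.0, size=2000)  # stand-in for PRSA.DEWP

fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title("Histogram of dew point readings")
ax.set_xlabel(r"Dew point (C$^\circ$)")
counts, edges, patches = ax.hist(
    dewp,
    bins=30,          # moderate bin count: enough detail, not too noisy
    density=True,     # total area normalised to 1 (normed= in older matplotlib)
    color="steelblue",
    ec="k", lw=0.5,   # thin black edges between bars
    alpha=0.8,        # slight transparency
)
fig.savefig("dewp_hist.png")
```

Swapping dewp for the real PRSA.DEWP column reproduces the chart discussed above, with the same keyword arguments controlling its appearance.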
End of explanation
"""
bike_sharing.head()
"""
Explanation: More options can be found of the documentation page.
3.3 Histograms with weights
The hist function is not only used to plot histograms. In essence, it is a function used to plot rectangular patches on an axis. Thus, we use hist to plot bar charts. In fact, we may use it to plot stacked bar charts, which is something seaborn cannot do.
In the following dataset, we want to plot the distribution of daily bike rental counts (variable cnt) on any given day of the week and separate them by the variable weathersit, an ordinal variable denoting the severity of the weather: 1 means good weather, 4 means bad.
End of explanation
"""
# Please run this cell before proceeding
group = bike_sharing.groupby("weathersit")
weathers = [w_r for w_r, _ in group]
day_data = [group.get_group(weather).weekday for weather in weathers]
weights_data = [group.get_group(weather).cnt for weather in weathers]
"""
Explanation: Source: https://archive.ics.uci.edu/ml/datasets/bike+sharing+dataset
As you can see in the dataset above, we need to plot weekday on the x-axis and have cnt as the y-axis. How can we plot this using hist? The problem is that if we pass the weekday array to hist, we end up counting the frequency of each day in the dataset!
To solve this, we pass the cnt variable as a separate parameter to hist through the keyword argument weights.
End of explanation
"""
fig2 = plt.figure()
ax2 = fig2.add_subplot(111)
ax2.set_title("Distribution of daily bike rental counts by weather conditions", y=1.05)
ax2.set_xlabel("Day")
ax2.set_ylabel("Bike rental counts")
ax2.hist(day_data,
         label=weathers,
         weights=weights_data,
         log=True,  # We want the y-axis to be on a log scale
         bins=np.linspace(-0.5,6.5,8),
         histtype="bar", # this is the default setting.
         ec="k", lw=1.0,
        )
ax2.legend(loc="lower right", title="Weather\nrating", fontsize="x-small");
"""
Explanation: What we need to do is to group the data set by weather rating and create a list of array data to be passed to hist. We can do this efficiently using the groupby method on data frames and list comprehension statements. Now we have three lists: One for weather rating, one for the day of the week and one more for the daily bike rental counts.
We pass this to the hist function and pass a sequence of floats [-0.5, 0.5,..., 6.5] so that each bar is nicely centered on the tick mark. We also plot this count on a log scale (set log=True) because we expect quite a large difference between bike rental counts in good weather as compared to bad. Without this, it is very difficult to see any variation for rentals during bad weather.
End of explanation
"""
PRSA = PRSA.dropna() #drop missing data
fig3, ax3 = plt.subplots(figsize=(8,6))
ax3.grid(b="off")
ax3.set_facecolor("white")
ax3.set_title("PM2.5 readings distribution by temperature")
ax3.set_ylabel("$\mu g$/$m^3$")
ax3.set_xlabel("Daily temperature (C$^\circ$)")
ax3.hist2d(PRSA.TEMP, PRSA["pm2.5"],
bins=55,
cmin=5,
range=[[-15,40],[0,600]],
cmap="Blues");
"""
Explanation: As expected, bike rentals are low in bad weather. However, notice the variation within a week for the bad weather category. It is quite clear that people do not go biking on a bad weather Sunday since they have a choice not to go out!
Let's rerun the cell above by replacing the ax.hist function with the following code snippet. This helps us create a stacked bar chart.
ax2.set_ylim(0, 5e5)
ax2.hist(day_data,
label=weathers,
weights=weights_data,
bins=np.linspace(-0.5, 6.5, 8),
ec="k", lw=1.,
histtype="barstacked",
rwidth=0.8
)
A note on this new code: histtype="barstacked" stacks the data on top of each other. The new parameter rwidth= sets the ratio between the bar width and the bin width, which is how you put spaces between bars. We disable the log scale so that the natural totals are more clearly seen. Notice the extreme difference between rentals in different weather conditions.
There is hardly any difference between work days and weekends, although we can detect a slight increase through the week. People do love the outdoors!
3.4 Summary of plotting a histogram
To summarize the teaching points above:
Pass either a single array of data to make into a histogram or a list of arrays if you want multiple data on one chart. You do not need to summarize the data. hist will do it for you.
Visual properties can be customize with keyword arguments like color, lw, ec, alpha, etc...
You can control whether to normalize the plot to have unit area by setting normed.
Choose an appropriate chart by setting histtype= to either bar or barstacked.
Pass frequency counts to the weights= parameter if you have a summarized bin count already.
4. hist2d and the hexbin plot
We use hist2d to visualize the joint distribution of bivariate data. When we expect correlation between two variables, a two dimensional histogram helps us reveal the structure of that relationship and avoids overplotting.
4.1 Plotting and customizing a 2D histogram
Let's return to the PRSA data set. Recall that a scatterplot suffers from overplotting. In order to circumvent this and still get useful insight into the data, we use hist2d.
End of explanation
"""
img
"""
Explanation: This chart is more informative than a simple scatterplot. For one, we now know that there are two modes in the joint distribution of temperature and pollutant. Furthermore, there is more variation in pollution levels in the colder seasons than on warmer days.
Let's try to understand how this plot was created.
fig3, ax3 = plt.subplots(figsize=(8,6))
We initialize a figure object of width=8 units and height=6 units and an axis object to contain our histogram.
ax3.grid(b="off")
ax3.set_facecolor("white")
ax3.set_title("PM2.5 readings distribution by temperature")
ax3.set_ylabel("$\mu g$/$m^3$")
ax3.set_xlabel("Daily temperature (C$^\circ$)")
These codes set the axis grid to invisible and the background color to white. The rest are plotting information: plot titles and the units used on each axes.
ax3.hist2d(PRSA.TEMP, PRSA["pm2.5"],
bins=55,
cmin=5,
range=[[-15,40],[0,600]],
cmap="Blues");
Finally, this is the command to plot the histogram. Since this is a 2d histogram, we pass two arrays of data, first for the x-axis and then for the y-axis.
The bins= parameter is set to 55. That means there are 55 bins in each dimension. Again, too high a number leads to overfitting and creates a very noisy chart. So choose a suitable number so as not to lose too much information.
cmin= controls which frequency counts are displayed. If a particular frequency count is below cmin, it is not plotted.
The range parameter sets the expected range of the data in each dimension. Data points which are outside the range are considered outliers and not tallied.
Finally cmap controls the color scheme used to indicate frequency counts. There are many color schemes to choose from and all can be seen here. "Blues" is a type of color scheme known as a sequential color scheme. Use this for measures like frequency counts where the contrast between min and max is important.
4.2 Annotating plots with colorbar
This plot is still not perfect. For example, it would be nice to have an way to tell the frequency counts at each color level. To do this, we add a color bar to the plot.
Modify ax3.hist(PRSA.TEMP, ... to the following:
img = ax3.hist(PRSA.TEMP, ...
This saves the 2d histogram image(along with other supplementary information) in the variable img. Let's see what img contains.
End of explanation
"""
fig4, ax4 = plt.subplots(figsize=(9,6))
ax4.grid(b="off")
ax4.set_facecolor("white")
ax4.set_title("PM2.5 distribution by temperature")
ax4.set_ylabel("$\mu$g/$m^3$")
ax4.set_xlabel("Temperature (C$^\circ$)")
img = ax4.hexbin(PRSA.TEMP, PRSA["pm2.5"],
gridsize=55,
mincnt=5,
# bins="log",
cmap="Blues",
)
cbar = plt.colorbar(img, ax=ax4)
#cbar.set_label("$\log($Frequency)")
cbar.set_label("Frequency")
"""
Explanation: If you observe, img is a tuple of length 4. The last entry of the tuple is the image data. Next, after the last line of ax3.hist command, add in the function
cbar = plt.colorbar(img[3], ax=ax3)
This means that we are now going to plot a color bar in ax3 (that's what ax=ax3 means) using the image data from our 2d histogram (which is why we must pass img[3] as an argument to plt.colorbar. We save the created colorbar instance in a variable named cbar so that we may further customize it.
To add a title to the colorbar, add in the following line:
cbar.set_label("Frequency")
This is what you should see if everything is done correctly.
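Assembled in one place, the whole cell would look like the sketch below. Synthetic readings stand in for the PRSA columns here, so the picture will differ from the real one, and ax3.grid(False) is used as the modern form of the grid call:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
temp = rng.normal(12, 12, 20000)           # stand-in for PRSA.TEMP
pm25 = np.abs(rng.normal(100, 80, 20000))  # stand-in for PRSA["pm2.5"]

fig3, ax3 = plt.subplots(figsize=(8, 6))
ax3.grid(False)
ax3.set_facecolor("white")
ax3.set_title("PM2.5 readings distribution by temperature")
ax3.set_ylabel(r"$\mu g$/$m^3$")
ax3.set_xlabel(r"Daily temperature (C$^\circ$)")
img = ax3.hist2d(temp, pm25,
                 bins=55,
                 cmin=5,
                 range=[[-15, 40], [0, 600]],
                 cmap="Blues")
cbar = plt.colorbar(img[3], ax=ax3)  # img[3] is the mappable image data
cbar.set_label("Frequency")
fig3.savefig("pm25_hist2d.png")
```

Replacing temp and pm25 with the real PRSA columns yields the annotated chart described above.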
4.3 hexbin plots
2D histograms create a square grid and visualize frequency counts using colors. Instead of a square grid, we may also use a hexagonal grid. As hexagons have more sides, this smooths out the resulting image. Let's see this effect with the same PRSA data set as in the hist2d plot.
End of explanation
"""
# Paste or type in your code here
"""
Explanation: However, hexbin plots differ from square 2D histograms in more ways than the type of tiling used. We may use hexbin plots to investigate how a dependent variable depends on two independent variables. Just as we passed bin frequencies to the weights parameter in hist, we pass the third, dependent variable to the C parameter in hexbin. That means we can visualize a two-dimensional surface embedded in 3D space as an altitude map.
4.3.1 The hexbin C parameter
Let's investigate how PM2.5 pollutants vary with temperature and atmospheric pressure in the PRSA dataset. To do that type in (or copy paste) the following code
fig5, ax5 = plt.subplots(figsize=(8,6))
ax5.grid(b="off")
ax5.set_facecolor("white")
ax5.set_title("PM2.5 pollutants as a function of temperature\nand atmospheric pressure")
ax5.set_xlabel("Temperature (C$^\circ$)")
ax5.set_ylabel("Pressure (hPa)")
img = ax5.hexbin(PRSA.TEMP, PRSA.PRES, C=PRSA["pm2.5"],
gridsize=(30, 20),
cmap="rainbow")
cbar = plt.colorbar(img, ax=ax5)
cbar.set_label("$\mu$g/m$^3$")
End of explanation
"""
fig5, ax5 = plt.subplots(figsize=(8,6))
ax5.grid(b="off")
ax5.set_facecolor("white")
ax5.set_title("PM2.5 pollutants as a function of temperature\nand atmospheric pressure")
ax5.set_xlabel("Temperature (C$^\circ$)")
ax5.set_ylabel("Pressure (hPa)")
img = ax5.hexbin(PRSA.TEMP, PRSA.PRES, C=PRSA["pm2.5"],
gridsize=(30,20),
reduce_C_function=np.median,  # use the median instead of the default mean
cmap="rainbow",
)
cbar = plt.colorbar(img, ax=ax5)
# Write your answer below
cbar.set_label(r"Median PM2.5 reading ($\mu$g/m$^3$)")
"""
Explanation: The color for each hexagon is determined by the mean of the PM2.5 readings whose pressure and temperature readings fall within that hexagon.
4.3.2 Changing the aggregation function for each hexbin
However, the aggregation function applied to each hexagon can be changed by passing another function to the reduce_C_function argument. Let's change this by passing the following code to hexbin.
reduce_C_function=np.median
Exercise: Change the label of the color bar to indicate that we are taking the median PM2.5 readings in each hexagon.
End of explanation
"""
fig5, ax5 = plt.subplots(figsize=(8,6))
ax5.grid(b="off")
ax5.set_facecolor("white")
ax5.set_title("PM2.5 pollutants as a function of temperature\nand atmospheric pressure", y=1.05)
ax5.set_xlabel("Temperature (C$^\circ$)")
ax5.set_ylabel("Pressure (hPa)")
img = ax5.hexbin(PRSA.TEMP, PRSA.PRES, C=PRSA["pm2.5"],
gridsize=(30,20),
bins="log",
cmap="rainbow"
)
cbar = plt.colorbar(img, ax=ax5)
cbar.set_label("$\log_{10}(P)$\n$P \;\mu m$/$m^3$")
"""
Explanation: 4.3.3 Changing the bins parameter
Besides controlling the number of hexagons, we can also bin the hexagons so that the hexagons within the same bin have the same color. This helps us further smooth out the plot and avoid overfitting.
End of explanation
"""
ericmjl/reassortment-simulator | Simulator Notebook.ipynb | mit
hosts = []
n_hosts = 1000
for i in range(n_hosts):
if i < n_hosts / 2:
hosts.append(Host(color='blue'))
else:
hosts.append(Host(color='red'))
"""
Explanation: Agent-Based Model to Dissect Contribution of Host Immunity and Contact Structure to Influenza Reassortment
Eric J. Ma
Runstadler Lab Meeting
If you want to follow along: https://github.com/ericmjl/reassortment-simulator
Reassortment
The reticulate evolutionary mechanism for the influenza virus.
Implicated in all past human pandemics.
Quantitatively important for host switches. (we're really excited about this finding!)
Scientific Questions
How does host contact structure affect the ability of a virus to shuffle its genome?
How does host immunity affect the necessity of a virus to shuffle its genome?
How does fitness of the virus within a host affect the ability of the virus to reassort? @Wendy
How does waning immunity affect the necessity of the virus to reassort? @Jon
Agent-Based Simulation Setup
Two types of agents: Host and Virus.
Interaction between hosts:
Homophily: degree of preference for interaction with the same color.
Transmission: passing on of virus.
Interaction between viruses:
Reassortment: exchange of genes between viruses.
Interaction between host and virus:
Host can gain immunity to viruses over time.
Virus has host preference of same color on one segment.
Host:
colour
immunity
viruses
expiry_time
alive_time
Virus:
seg1color
seg2color
infection_time
expiry_time
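The Host and Virus classes themselves live in the repository linked above and are not shown in this notebook. The sketch below is a minimal reconstruction consistent with how the objects are used later; the method bodies and the expiry default are illustrative assumptions, not the original implementation:

```python
class Virus:
    def __init__(self, seg1color, seg2color, infection_time=0, expiry_time=5):
        self.seg1color = seg1color
        self.seg2color = seg2color
        self.infection_time = infection_time
        self.expiry_time = expiry_time

    def is_mixed(self):
        # a reassortant carries segments of two different colors
        return self.seg1color != self.seg2color


class Host:
    def __init__(self, color):
        self.color = color
        self.immunity = set()   # segment colors this host has become immune to
        self.viruses = []
        self.alive_time = 0
        self.expiry_time = 10   # illustrative: how long an infection lasts

    def is_infected(self):
        return len(self.viruses) > 0

    def increment_time(self):
        self.alive_time += 1

    def remove_immune_viruses(self):
        # drop viruses whose host-preference segment the host is immune to
        self.viruses = [v for v in self.viruses
                        if v.seg1color not in self.immunity]
```

Only the members exercised by the simulation loop below are sketched; replicate_virus, remove_old_viruses, and the immunity-gain logic are left to the full implementation.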
Simulation Setup
In the following blocks of code, we will look at how the simulation will run.
Initialization
1000 hosts: 500 red, 500 blue.
End of explanation
"""
# Pick 10 red hosts and 10 blue hosts at random, and infect each with a virus of the same color.
blue_hosts = [h for h in hosts if h.color == 'blue']
blue_hosts = sample(blue_hosts, 10)
blue_virus = Virus(seg1color='blue', seg2color='blue')
for h in blue_hosts:
h.viruses.append(blue_virus)
red_hosts = [h for h in hosts if h.color == 'red']
red_hosts = sample(red_hosts, 10)
red_virus = Virus(seg1color='red', seg2color='red')
for h in red_hosts:
h.viruses.append(red_virus)
"""
Explanation: Initialization
Pick 10 red hosts, infect with one red virus each.
Pick 10 blue hosts, infect with one blue virus each.
End of explanation
"""
p_immune = 1E-3 # 1 = always successful even under immune pressure
# 0 = always unsuccessful under immune pressure.
p_replicate = 0.95 # probability of replication given that a host is infected.
p_contact = 1 - 1E-1/n_hosts # probability of contacting a host of the same color.
p_same_color = 0.99 # probability of successful infection given segment of same color.
p_diff_color = 0.9 # probability of successful infection given segment of different color.
# Set up number of timesteps to run simulation
n_timesteps = 100
# Set up a defaultdict for storing data
data = defaultdict(list)
"""
Explanation: Parameters
End of explanation
"""
# Run simulation
for t in range(n_timesteps):
# First part, clear up old infections.
for h in hosts:
h.increment_time()
h.remove_old_viruses()
h.remove_immune_viruses()
# Step to replicate viruses present in hosts.
infected_hosts = [h for h in hosts if h.is_infected()]
for h in infected_hosts:
if bernoulli.rvs(p_replicate): # we probabilistically allow replication to occur
h.replicate_virus()
# Step to transmit the viruses present in hosts.
infected_hosts = [h for h in hosts if h.is_infected()]
num_contacts = 0
for h in infected_hosts:
same_color = bernoulli.rvs(p_contact)
if same_color:
new_host = choice([h2 for h2 in hosts if h2.color == h.color])
num_contacts += 0
else:
new_host = choice([h2 for h2 in hosts if h2.color != h.color])
num_contacts += 1
virus = h.viruses[-1] # choose the newly replicated virus every time.
# Determine whether to transmit or not.
p_transmit = 1
### First, check immunity ###
if virus.seg1color in new_host.immunity:
p_transmit = p_transmit * p_immune
elif virus.seg1color not in new_host.immunity:
pass
### Next, check seg1.
if virus.seg1color == new_host.color:
p_transmit = p_transmit * p_same_color
else:
p_transmit = p_transmit * p_diff_color
### Finally, check seg2.
if virus.seg2color == new_host.color:
p_transmit = p_transmit * p_same_color
else:
p_transmit = p_transmit * p_diff_color
# Determine whether to transmit or not, by using a Bernoulli trial.
transmit = bernoulli.rvs(p_transmit)
# Perform transmission step
if transmit:
new_host.viruses.append(virus)
# # Capture data in the summary graph.
# if virus.is_mixed():
# G.edge[h.color][new_host.color]['mixed'] += 1
# else:
# G.edge[h.color][new_host.color]['clonal'] += 1
else:
pass
### INSPECT THE SYSTEM AND RECORD DATA###
num_immunes = 0 # num immune hosts
num_infected = 0 # num infected hosts
num_blue_immune = 0 # num blue immune hosts
num_red_immune = 0 # num red immune hosts
num_uninfected = 0 # num uninfected hosts
num_mixed = 0 # num mixed viruses
num_original = 0 # num original colour viruses
num_red_virus = 0 # num red viruses
num_blue_virus = 0 # num blue viruses
for h in hosts:
if len(h.immunity) > 0:
num_immunes += 1
if h.is_infected() > 0:
num_infected += 1
if 'blue' in h.immunity:
num_blue_immune += 1
if 'red' in h.immunity:
num_red_immune += 1
if not h.is_infected():
num_uninfected += 1
for v in h.viruses:
if v.is_mixed():
num_mixed += 1
else:
if v.seg1color == 'blue' and v.seg2color == 'blue':
num_blue_virus += 1
elif v.seg1color == 'red' and v.seg2color == 'red':
num_red_virus += 1
num_original += 1
# Record data that was captured
data['n_immune'].append(num_immunes)
data['n_infected'].append(num_infected)
data['n_blue_immune'].append(num_blue_immune)
data['n_red_immune'].append(num_red_immune)
data['n_uninfected'].append(num_uninfected)
data['n_mixed'].append(num_mixed)
data['n_original'].append(num_original)
data['n_red_virus'].append(num_red_virus)
data['n_blue_virus'].append(num_blue_virus)
data['n_contacts'].append(num_contacts)
### INSPECT THE SYSTEM ###
"""
Explanation: Run Simulation!
End of explanation
"""
# Reassortment successful in establishing infection or not?
plt.plot(data['n_red_virus'], color='red', label='red')
plt.plot(data['n_blue_virus'], color='blue', label='blue')
plt.plot(data['n_original'], color='black', label='original')
plt.plot(data['n_mixed'], color='purple', label='mixed')
plt.ylabel('Number of Viruses')
plt.xlabel('Time Step')
plt.title('Viruses')
plt.legend()
np.array_equal(np.array(data['n_mixed']), np.zeros(100))
np.where(np.array(data['n_mixed']) == np.max(data['n_mixed']))[0] - np.where(np.array(data['n_original']) == np.max(data['n_original']))[0]
"""
Explanation: Result: Viral Dynamics
End of explanation
"""
plt.plot(data['n_infected'], color='green', label='infected')
plt.plot(data['n_immune'], color='purple', label='immune')
plt.plot(data['n_blue_immune'], color='blue', label='blue immune')
plt.plot(data['n_red_immune'], color='red', label='red immune')
plt.ylabel('Number')
plt.xlabel('Time Steps')
plt.title('Hosts')
plt.legend()
"""
Explanation: Result: Host Immunity
End of explanation
"""
plt.plot(data['n_contacts'], color='olive', label='contacts')
plt.title('Contact Frequency')
np.where(np.array(data['n_contacts']) >= 1)[0]
import pandas as pd
pd.DataFrame(data)
"""
Explanation: Result: Contact Frequency
End of explanation
"""
stanfordmlgroup/ngboost | examples/user-guide/content/5-dev.ipynb | apache-2.0
import sys
sys.path.append('/Users/c242587/Desktop/projects/git/ngboost')
"""
Explanation: Developing NGBoost
End of explanation
"""
from scipy.stats import laplace as dist
import numpy as np
from ngboost.distns.distn import RegressionDistn
from ngboost.scores import LogScore
class LaplaceLogScore(LogScore): # will implement this later
pass
class Laplace(RegressionDistn):
n_params = 2
scores = [LaplaceLogScore] # will implement this later
def __init__(self, params):
# save the parameters
self._params = params
# create other objects that will be useful later
self.loc = params[0]
self.logscale = params[1]
self.scale = np.exp(params[1]) # since params[1] is log(scale)
self.dist = dist(loc=self.loc, scale=self.scale)
def fit(Y):
m, s = dist.fit(Y) # use scipy's implementation
return np.array([m, np.log(s)])
def sample(self, m):
return np.array([self.dist.rvs() for i in range(m)])
def __getattr__(self, name): # gives us access to Laplace.mean() required for RegressionDist.predict()
if name in dir(self.dist):
return getattr(self.dist, name)
return None
@property
def params(self):
return {'loc':self.loc, 'scale':self.scale}
"""
Explanation: As you work with NGBoost, you may want to experiment with distributions or scores that are not yet supported. Here we will walk through the process of implementing a new distribution or score.
Adding Distributions
The first order of business is to write the class for your new distribution. The distribution class must subclass the appropriate distribution type (either RegressionDistn or ClassificationDistn) and must implement methods for fit() and sample(). The scores compatible with the distribution should be stored in a class attribute called scores, and the number of parameters in a class attribute n_params. The class must also store the (internal) distributional parameters in a _params instance attribute. Additionally, regression distributions must implement a mean() method to support point prediction.
We'll use the Laplace distribution as an example. The Laplace distribution has PDF $\frac{1}{2b} e^{-\frac{|x-\mu|}{b}}$ with user-facing parameters $\mu \in \mathbb{R}$ and $b > 0$, which we will call loc and scale to conform to the scipy.stats implementation.
In NGBoost, all parameters must be represented internally in $\mathbb R$, so we need to reparametrize $(\mu, b)$ to, for instance, $(\mu, \log(b))$. The latter are the parameters we need to work with when we initialize a Laplace object and when we implement the score.
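The reparametrization can be sanity-checked directly against scipy with synthetic data (the loc/scale values below are arbitrary choices for illustration):

```python
import numpy as np
from scipy.stats import laplace as dist

rng = np.random.default_rng(0)
Y = rng.laplace(loc=3.0, scale=2.0, size=5000)

m, s = dist.fit(Y)                  # scipy's MLE for the Laplace distribution
params = np.array([m, np.log(s)])   # internal (loc, log-scale) representation
print(params)                       # loc near 3.0, log-scale near log(2)
```

This is exactly what the Laplace.fit class method below returns, so downstream code only ever sees unconstrained real-valued parameters.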
End of explanation
"""
class LaplaceLogScore(LogScore):
def score(self, Y):
return -self.dist.logpdf(Y)
def d_score(self, Y):
D = np.zeros((len(Y), 2)) # first col is dS/d𝜇, second col is dS/d(log(b))
D[:, 0] = np.sign(self.loc - Y)/self.scale
D[:, 1] = 1 - np.abs(self.loc - Y)/self.scale
return D
"""
Explanation: The fit() method is a class method that takes a vector of observations and fits a marginal distribution. Meanwhile, sample() should return a $m$ samples from $P(Y|X=x)$, each of which is a vector of len(Y).
Here we're taking advantage of the fact that scipy.stats already has the Laplace distribution implemented so we can steal its fit() method and put a thin wrapper around rvs() to get samples. We also use __getattr__() on the internal scipy.stats object to get access to its mean() method.
Lastly, we write a convenience method params() that, when called, returns the distributional parameters as the user expects to see them, i.e. $(\mu, b)$, not $(\mu, \log b)$.
Implementing a Score for our Distribution
Now we turn our attention to implementing a score that we can use with this distribution. We'll use the log score as an example.
All implemented scores should subclass the appropriate score and implement three methods:
score() : the value of the score at the current parameters, given the data Y
d_score() : the derivative of the score at the current parameters, given the data Y
metric() : the value of the Riemannian metric at the current parameters
End of explanation
"""
import numpy as np

from scipy.stats import laplace as dist  # scipy's Laplace implementation
from ngboost.distns import RegressionDistn
from ngboost.scores import LogScore

class LaplaceLogScore(LogScore):
def score(self, Y):
return -self.dist.logpdf(Y)
def d_score(self, Y):
D = np.zeros((len(Y), 2)) # first col is dS/d𝜇, second col is dS/d(log(b))
D[:, 0] = np.sign(self.loc - Y)/self.scale
D[:, 1] = 1 - np.abs(self.loc - Y)/self.scale
return D
class Laplace(RegressionDistn):
n_params = 2
scores = [LaplaceLogScore]
def __init__(self, params):
# save the parameters
self._params = params
# create other objects that will be useful later
self.loc = params[0]
self.logscale = params[1]
self.scale = np.exp(params[1]) # since params[1] is log(scale)
self.dist = dist(loc=self.loc, scale=self.scale)
def fit(Y):
m, s = dist.fit(Y) # use scipy's implementation
return np.array([m, np.log(s)])
def sample(self, m):
return np.array([self.dist.rvs() for i in range(m)])
def __getattr__(self, name): # gives us access to Laplace.mean() required for RegressionDist.predict()
if name in dir(self.dist):
return getattr(self.dist, name)
return None
@property
def params(self):
return {'loc':self.loc, 'scale':self.scale}
"""
Explanation: Notice that the attributes of an instance of Laplace are referenced using the self.attr notation even though we haven't said these will be attributes of the LaplaceLogScore class. When a user asks NGBoost to use the Laplace distribution with the LogScore, NGBoost will first find the implementation of the log score that is compatible with Laplace, i.e. LaplaceLogScore, and dynamically create a new class that has both the attributes of the distribution and the appropriate implementation of the score. For this to work, the distribution class Laplace must have a scores class attribute that includes the implementation LaplaceLogScore, and LaplaceLogScore must subclass LogScore. As long as those conditions are satisfied, NGBoost can take care of the rest.
The derivatives with respect to $\log b$ and $\mu$ are easily derived using, for instance, WolframAlpha.
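They can also be checked by hand. Writing the internal parameters as $(\mu, t)$ with $t = \log b$, the per-observation log score is

$$S(\mu, t; y) = -\log\left(\frac{1}{2b} e^{-|y-\mu|/b}\right) = \log 2 + t + |y - \mu|\, e^{-t},$$

so

$$\frac{\partial S}{\partial \mu} = \frac{\operatorname{sign}(\mu - y)}{b}, \qquad \frac{\partial S}{\partial t} = 1 - \frac{|y - \mu|}{b},$$

which are exactly the two columns filled in by d_score().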
In this example we won't bother implementing metric(), which would return the current Fisher information. The reason is that the NGBoost implementation of LogScore has a default metric() method that uses a Monte Carlo method to approximate the Fisher information using the gradient() method and the distribution's sample() method (that's why we needed to implement sample()). By inheriting from LogScore, not only can NGBoost find our implementation for the Laplace distribution, it can also fall back on the default metric() method. More on that later.
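To make the Monte Carlo idea concrete, here is a rough sketch of how such an approximation could look for our Laplace parametrization. This is an illustration only — the function names are made up and this is not NGBoost's actual code:

```python
import numpy as np

# Monte Carlo estimate of the Fisher information for the Laplace log score
# in the internal (mu, log b) parameters: sample from the distribution and
# average the outer products of the score gradients.
def laplace_d_score(loc, scale, Y):
    # Same gradient as LaplaceLogScore.d_score above.
    D = np.empty((len(Y), 2))
    D[:, 0] = np.sign(loc - Y) / scale       # dS/d(mu)
    D[:, 1] = 1 - np.abs(loc - Y) / scale    # dS/d(log b)
    return D

def mc_fisher(loc, scale, n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    Y = rng.laplace(loc, scale, size=n_samples)  # the "sample()" step
    D = laplace_d_score(loc, scale, Y)
    return D.T @ D / n_samples                   # estimates E[grad grad^T]

F = mc_fisher(loc=0.0, scale=2.0)
# The exact Fisher in these parameters is diag(1/b^2, 1), so with b = 2
# the estimate should land close to diag(0.25, 1).
```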
Putting it all together:
End of explanation
"""
from ngboost import NGBRegressor
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
X, Y = load_boston(return_X_y=True)
X_reg_train, X_reg_test, Y_reg_train, Y_reg_test = train_test_split(X, Y, test_size=0.2)
ngb = NGBRegressor(Dist=Laplace, Score=LogScore).fit(X_reg_train, Y_reg_train)
Y_preds = ngb.predict(X_reg_test)
Y_dists = ngb.pred_dist(X_reg_test)
# test Mean Squared Error
test_MSE = mean_squared_error(Y_preds, Y_reg_test)
print('Test MSE', test_MSE)
# test Negative Log Likelihood
test_NLL = -Y_dists.logpdf(Y_reg_test).mean()
print('Test NLL', test_NLL)
"""
Explanation: And we can test our method:
End of explanation
"""
from ngboost.scores import Score
class SphericalScore(Score):
pass
"""
Explanation: Dig into the source of ngboost.distns to find more examples. If you write and test your own distribution, please contribute it to NGBoost by making a pull request!
Censored Scores
You can make your distribution suitable for use in survival analysis by implementing a censored version of the score. The signatures of the score(), d_score() and metric() methods stay the same, but they should expect Y to be indexable into two arrays, like E, T = Y["Event"], Y["Time"]. Furthermore, any censored scores should be linked to the distribution class definition via a class attribute called censored_scores instead of scores.
Since censored scores are more general than their standard counterparts (fully observed data is a special case of censored data), if you implement a censored score in NGBoost, it will automatically become available as a usable score for standard regression analysis. There is no need to implement the regression score separately or register it in the scores class attribute.
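To illustrate the expected shape of Y, here is a hypothetical right-censored log score for the Laplace distribution, using the standard censored likelihood (observed events contribute the log-density, censored points the log survival function). This is a sketch of the idea, not NGBoost's actual implementation:

```python
import numpy as np
from scipy.stats import laplace

# Hypothetical right-censored log score for the Laplace distribution.
# Observed events (E == 1) contribute -logpdf(T); censored observations
# (E == 0) contribute -log P(Y > T), i.e. the negative log survival function.
def censored_laplace_log_score(loc, scale, Y):
    E, T = Y["Event"], Y["Time"]            # the indexing NGBoost expects
    d = laplace(loc=loc, scale=scale)
    return -np.where(E.astype(bool), d.logpdf(T), d.logsf(T))

# One observed event at t = 1.2 and one right-censored observation at t = 3.0
Y = np.array([(1, 1.2), (0, 3.0)], dtype=[("Event", "i4"), ("Time", "f8")])
scores = censored_laplace_log_score(loc=0.0, scale=1.0, Y=Y)
```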
Metrics
As we saw with the log score, the easiest option as a developer is to lean on the default ngboost method that calculates the log score metric.
However, the distribution-agnostic default method is slow because it must sample from the distribution many times to build up an approximation of the metric. If you want to make it faster, then you must derive and implement the distribution-specific Riemannian metric, which for the log score is the Fisher information matrix of that distribution. You have to derive the Fisher with respect to the internal ngboost parametrization (if that is different from the user-facing parametrization, e.g. $\log(\sigma)$, not $\sigma$). Deriving a Fisher is not necessarily easy since you have to compute an expectation analytically, but there are many worked examples of Fisher matrix derivations online that you can look through.
For example, consider the Student's t distribution, parameterized by degrees of freedom $\nu$, mean $\mu$, and scale $\sigma$.
Treating $\nu$ as fixed, so that only $(\mu, \sigma)$ are fit, the Fisher information of this distribution with respect to $(\mu, \sigma)$ is
$$\begin{align}
\begin{bmatrix}
\frac{\nu + 1}{(\nu + 3) \sigma^2} & 0 \\
0 & \frac{\nu}{2(\nu + 3) \sigma^4}
\end{bmatrix}
\end{align}$$
Since $\sigma > 0$, NGBoost must work internally with $\log(\sigma)$ rather than $\sigma$. This requires us to reparameterize the distribution. To find the Fisher information under this reparameterization, we can follow the standard change-of-variables procedure (see the Wikipedia article on Fisher information).
Let $\eta = (\mu, \sigma)$ and $\theta = (\mu, \log \sigma)$. Then
$$I_{\eta}(\eta) = J^T I_{\theta}(\theta) J,$$
where $J$ is the $2 \times 2$ Jacobian matrix defined by
$$[J]_{ij} = \dfrac{\partial \theta_i}{\partial \eta_j},$$
which evaluates to
$$\begin{align}
J = J^T &= \begin{bmatrix}
1 & 0 \\
0 & \frac{1}{\sigma}
\end{bmatrix}\\
J^{-1} &= \begin{bmatrix}
1 & 0 \\
0 & \sigma
\end{bmatrix}
\end{align}$$
We can thus obtain the desired Fisher information by rearranging:
$$\begin{align}
I_{\theta}(\theta) &= J^{-1} I_{\eta}(\eta) J^{-1}\\
&= \begin{bmatrix}
1 & 0 \\
0 & \sigma
\end{bmatrix}
\begin{bmatrix}
\frac{\nu + 1}{(\nu + 3) \sigma^2} & 0 \\
0 & \frac{\nu}{2(\nu + 3) \sigma^4}
\end{bmatrix}
\begin{bmatrix}
1 & 0 \\
0 & \sigma
\end{bmatrix}\\
&= \begin{bmatrix}
\frac{\nu + 1}{(\nu + 3) \sigma^2} & 0 \\
0 & \frac{\nu}{2(\nu + 3) \sigma^2}
\end{bmatrix}
\end{align}$$
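The resulting matrix translates directly into code. A sketch of what a metric() implementation could compute, per the matrix derived above (the function name and signature are illustrative; in NGBoost this would be a metric() method returning one Fisher matrix per observation):

```python
import numpy as np

# Illustrative metric computation for a Student's t log score in the
# internal (mu, log sigma) parametrization, with nu held fixed.
# Returns an (n_obs, 2, 2) array: one Fisher matrix per observation.
def t_log_score_metric(nu, sigma, n_obs):
    FI = np.zeros((n_obs, 2, 2))
    FI[:, 0, 0] = (nu + 1) / ((nu + 3) * sigma**2)   # mu-mu entry
    FI[:, 1, 1] = nu / (2 * (nu + 3) * sigma**2)     # (log sigma) entry
    return FI

FI = t_log_score_metric(nu=4.0, sigma=2.0, n_obs=3)
```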
If you don't want to use the log score (say you want the CRP score, for example), then ngboost does not (yet?) have a default method for calculating the metric and you must derive and implement it yourself. This is harder than deriving a Fisher because there are not many worked examples. The most general derivation process should follow the same outline, replacing the KL divergence (which is induced by the log score) with whichever divergence is induced by the scoring rule you want to use (e.g. L2 for CRPS), again taking care to derive with respect to the internal ngboost parametrization, not the user-facing one. For any particular score, there may be a specific closed-form expression that you can use to calculate the metric across distributions (the expression for the Fisher information serves this purpose for the log score) or there may not be; I actually don't know the answer to this question! But if there were, that could suggest some kind of default implementation for that score's metric() method.
Adding Scores
We've seen how to implement an existing score for a new distribution, but making a new score altogether in NGBoost is also easy: just make a new class that subclasses Score:
End of explanation
"""