| repo_name | path | license | content |
|---|---|---|---|
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session01/Day2/ImageProcessing/Image Processing Workbook II.ipynb | mit | import numpy as np
import scipy.signal
import matplotlib.pyplot as plt
%matplotlib inline
# notebook
import detection
import imageProc
import utils
"""
Explanation: You are going to read some data, take a look at it, smooth it, and think about
whether the objects you've found are real.
I've provided three python files:
- detection.py Some code to detect objects
- imageProc.py Some image processing code to get you started
- utils.py Convenience functions for a Data object, I/O, and image display
There are also some data files. These started out as fits (as read with pyfits.py, not provided) but
I saved them as numpy ".npy" files (to be read with numpy.load).
End of explanation
"""
da = utils.Data()
da.read()
utils.mtv(da, b=10, alpha=0.8)
xlim, ylim = (80, 400), (100, 400)
plt.xlim(xlim); plt.ylim(ylim)
plt.show()
"""
Explanation: Let's take a look at some data.
Rather than asking you to install a display tool such as ds9, ginga, firefly, or aladin I've provided you with a lightweight image display tool, utils.mtv()
The coloured overlay shows the mask bits that tell you which pixels are bad (its visibility is controlled
by the alpha parameter). The stretch is controlled by "b" (it's roughly the transition from a linear to a
logarithmic stretch).
You can print the value of pixel (x, y) with
print da.image[y, x]
(note the order of the indices). Here x and y can be scalars or numpy arrays
End of explanation
"""
utils.mtv(da, b=10, alpha=0.0, fig=2)
plt.xlim(xlim); plt.ylim(ylim)
plt.show()
"""
Explanation: We can show the same data without marking the bad pixels -- you'll see that I fixed them. Magic
End of explanation
"""
raw = utils.Data()
raw.read(readRaw=True)
utils.mtv(raw, b=10, alpha=0.3)
plt.xlim(740, 810); plt.ylim(230, 290)
plt.show()
"""
Explanation: If you want to look at the raw data, you can:
End of explanation
"""
def gaussian2D(beta):
size = int(3*abs(beta) + 1)
x, y = np.mgrid[-size:size+1, -size:size+1]
phi = np.exp(-(x**2 + y**2)/(2*beta**2))
phi /= phi.sum()
return phi
def convolveWithGaussian(image, beta):
phi = gaussian2D(beta)
return scipy.signal.convolve(image, phi, mode='same')
# %%timeit -n 1 -r 1
sda = da.copy()
beta = 2.5
sda.image = convolveWithGaussian(sda.image, beta)
utils.mtv(sda.image)
"""
Explanation: Next write a function to smooth the data with a Gaussian filter. You can do the work with convolveWithGaussian in the next cell.
N.b. You can make a copy of a Data object using da.copy()
End of explanation
"""
phi = gaussian2D(beta)
n_eff = 1/np.sum(phi**2)
print "n_eff = %.3f (analytically: %.3f)" % (n_eff, 4*pi*beta**2)
"""
Explanation: We can also calculate the filter's effective area (and confirm or deny that I did my Gaussian integrals correctly in the lecture)
End of explanation
"""
def convolveWithGaussian(image, beta):
def gaussian1D(beta):
size = int(3*abs(beta) + 1)
x = np.arange(-size, size+1)
phi = np.exp(-x**2/(2*beta**2))
phi /= phi.sum()
return phi
phi = gaussian1D(beta)
for y in range(0, image.shape[0]):
image[y] = scipy.signal.convolve(image[y], phi, mode='same')
for x in range(0, image.shape[1]):
image[:, x] = scipy.signal.convolve(image[:, x], phi, mode='same')
return image
"""
Explanation: That convolution seemed slow to me. Go back to the cell, uncomment the %%timeit line, and run it again. How long did it take?
OK, take a look at the next cell and see if you can see what I did -- it's more python (and loops too) so it must be slower. Is it?
End of explanation
"""
nsigma = 3.5
threshold = nsigma*np.sqrt(np.median(sda.variance)/n_eff)
footprints = detection.findObjects(sda.image, threshold, grow=3)
print "I found %d objects" % (len(footprints))
"""
Explanation: Now let's look for objects. We know how to do this; we smooth the image with the PSF then look for peaks. It's not totally trivial to find all the sets of connected pixels, so I provided you with a function detection.findObjects to do the work
End of explanation
"""
nShow = 10
for foot in footprints.values()[0:nShow]:
print "(%5d, %5d) %3d" % (foot.centroid[0], foot.centroid[1], foot.npix)
if len(footprints) > nShow:
print "..."
"""
Explanation: We can look at all our objects by looping over the footprints:
End of explanation
"""
sda.clearMaskPlane("DETECTED")
detection.setMaskFromFootprints(sda, footprints, "DETECTED")
utils.mtv(sda)
plt.xlim(xlim); plt.ylim(ylim)
plt.show()
"""
Explanation: Or by setting a mask plane -- this way we'll be able to see all the pixels
End of explanation
"""
da.clearMaskPlane("DETECTED")
detection.setMaskFromFootprints(da, footprints, "DETECTED")
utils.mtv(da, alpha=0.3)
plt.xlim(xlim); plt.ylim(ylim)
plt.show()
"""
Explanation: We can do the same thing for the original (unsmoothed) image
End of explanation
"""
t = utils.Data(image=da.truth, mask=sda.mask)
utils.mtv(t, I0=1, b=0.01, alpha=0.6)
plt.xlim(xlim); plt.ylim(ylim)
plt.show()
"""
Explanation: I lied to you; or at least I didn't tell you everything. That 'data' was actually the output from the LSST simulator, which means that I know the Truth; more accurately, I know the location of every photon that arrived from the sources without any sky background. The pixels are 0.2 arcseconds on a side.
Let's overlay the detection mask on the truth.
End of explanation
"""
import scipy.special
pixelSize = 0.200
nPerPsf = 0.5*scipy.special.erfc(nsigma/np.sqrt(2))
nPerDeg = nPerPsf*3600**2/0.5
print "False positives per degree: %d In data: %d" % (
nPerDeg, nPerDeg/(3600/(da.image.shape[0]*pixelSize))**2)
"""
Explanation: If you look at the direct image you can see things that seem real when you compare with the truth, for example the object at (156, 205). So should we be using a lower threshold? What happens if you choose a smaller value?
OK, so that picked up the object I pointed out, but it picked up some noise too. How many false objects would I expect to detect per square degree? Naïvely we'd expect each PSF-sized patch to be independent, so we can try using the tails of a Gaussian to estimate how many objects we'd detect per square degree. If I take the area of a PSF to be 0.5 arcsec^2, I have
End of explanation
"""
# %%timeit -n 1 -r 1
detection = reload(detection)
ndeg = 1.0/2.0 # Size of image we'll simulate (in degrees)
size = int(3600*ndeg/pixelSize) # Size of image we'll simulate (in pixels)
im = np.zeros((size, size))
nsigma, Poisson = 5, False
np.random.seed(667)
sigma = 10
if Poisson:
mu = sigma**2
im += np.random.poisson(lam=mu, size=size*size).reshape(size, size) - mu
else:
im += np.random.normal(scale=sigma, size=size*size).reshape(size, size)
sim = convolveWithGaussian(im, beta)
n_eff = 4*np.pi*beta**2 # Effective area of PSF
threshold = nsigma*sigma/np.sqrt(n_eff)
footprints = detection.findObjects(sim, threshold, grow=0)
print "%s %g %d %.1f" % (("Poisson" if Poisson else "Gaussian"), nsigma, \
len(footprints)/ndeg**2, \
3600**2*1/(2**2.5*np.pi**1.5*(beta*pixelSize)**2)*nsigma*np.exp(-nsigma**2/2))
if not False:
tmp = utils.Data(sim)
tmp.clearMaskPlane("DETECTED")
detection.setMaskFromFootprints(tmp, footprints, "DETECTED")
utils.mtv(tmp, alpha=1)
"""
Explanation: Nick Kaiser has done the theory more carefully (it was easy for him; he used results from a classic paper, Bardeen et al., of which he was a co-author). The answer is that the number of peaks per square arcsecond is
$$
\frac{1}{2^{5/2} \pi^{3/2} \beta^2} n_\sigma e^{-n_\sigma^2/2}
$$
I'm not as clever as Nick, but I do have access to a computer...
End of explanation
"""
|
ToAruShiroiNeko/revscoring | ipython/Feature construction demo.ipynb | mit | # imports assumed from revscoring's API of the time (not shown in this dump)
from mw import api
from revscoring.extractors import APIExtractor
from revscoring.features import Feature, diff, revision, modifiers
extractor = APIExtractor(api.Session("https://en.wikipedia.org/w/api.php"))
"""
Explanation: Feature extractor setup
This line constructs a "feature extractor" that uses Wikipedia's API to solve dependencies.
End of explanation
"""
list(extractor.extract(123456789, [diff.chars_added]))
"""
Explanation: Using the extractor to extract features
The following line demonstrates a simple feature extraction. Note that we wrap the call in a list() because it returns a generator.
End of explanation
"""
chars_added_ratio = Feature("diff.chars_added_ratio",
lambda a,c: a/max(c, 1), # Prevents divide by zero
depends_on=[diff.chars_added, revision.chars],
returns=float)
list(extractor.extract(123456789, [chars_added_ratio]))
"""
Explanation: Defining a custom feature
The next block defines a new feature and sets the dependencies to be two other features: diff.chars_added and revision.chars. This feature represents the proportion of characters in the current version of the page that the current edit is responsible for adding.
End of explanation
"""
chars_added_ratio = diff.chars_added / modifiers.max(revision.chars, 1) # Prevents divide by zero
list(extractor.extract(123456789, [chars_added_ratio]))
"""
Explanation: There's easier ways that we can do this though. I've overloaded simple mathematical operators to allow you to do simple math with feature and get a feature returned. This code roughly corresponds to what's going on above.
End of explanation
"""
from revscoring.datasources import diff as diff_datasource
list(extractor.extract(662953550, [diff_datasource.added_segments]))
"""
Explanation: Using datasources
There's a also a set of datasources that are part of the dependency injection system. See revscoring/revscoring/datasources. I'll need to rename the diff datasource when I import it because of the name clash. FWIW, you usually don't use features and datasources in the same context, so there's some name overlap.
End of explanation
"""
import mwparserfromhell as mwp
templates_added = Feature("diff.templates_added",
lambda add_segments: sum(len(mwp.parse(s).filter_templates()) > 0 for s in add_segments),
depends_on=[diff_datasource.added_segments],
returns=int)
list(extractor.extract(662953550, [templates_added]))
"""
Explanation: OK. Let's define a new feature for counting the number of templates added. I'll make use of mwparserfromhell to do this. See the docs.
End of explanation
"""
from revscoring.dependent import draw
draw(templates_added)
"""
Explanation: Debugging
There's some facilities in place to help you make sense of issues when they arise. The most important is the draw function.
End of explanation
"""
draw(diff.added_badwords_ratio)
"""
Explanation: In the tree structure above, you can see how our new feature depends on "diff.added_segments", which depends on "diff.operations", which depends (as you might imagine) on the current and parent revisions. Other features are a bit more complicated.
End of explanation
"""
|
Guneet-Dhillon/mxnet | example/bayesian-methods/sgld.ipynb | apache-2.0 | %matplotlib inline
"""
Explanation: Stochastic Gradient Langevin Dynamics in MXNet
End of explanation
"""
import mxnet as mx
import mxnet.ndarray as nd
import numpy
import logging
import time
import matplotlib.pyplot as plt
def load_synthetic(theta1, theta2, sigmax, num=20):
flag = numpy.random.randint(0, 2, (num,))
X = flag * numpy.random.normal(theta1, sigmax, (num, )) \
+ (1.0 - flag) * numpy.random.normal(theta1 + theta2, sigmax, (num, ))
return X.astype('float32')
class SGLDScheduler(mx.lr_scheduler.LRScheduler):
def __init__(self, begin_rate, end_rate, total_iter_num, factor):
super(SGLDScheduler, self).__init__()
if factor >= 1.0:
raise ValueError("Factor must be less than 1 to make lr reduce")
self.begin_rate = begin_rate
self.end_rate = end_rate
self.total_iter_num = total_iter_num
self.factor = factor
self.b = (total_iter_num - 1.0) / ((begin_rate / end_rate) ** (1.0 / factor) - 1.0)
self.a = begin_rate / (self.b ** (-factor))
self.count = 0
def __call__(self, num_update):
self.base_lr = self.a * ((self.b + num_update) ** (-self.factor))
self.count += 1
return self.base_lr
def synthetic_grad(X, theta, sigma1, sigma2, sigmax, rescale_grad=1.0, grad=None):
if grad is None:
grad = nd.empty(theta.shape, theta.context)
theta1 = theta.asnumpy()[0]
theta2 = theta.asnumpy()[1]
v1 = sigma1 **2
v2 = sigma2 **2
vx = sigmax **2
denominator = numpy.exp(-(X - theta1)**2/(2*vx)) + numpy.exp(-(X - theta1 - theta2)**2/(2*vx))
grad_npy = numpy.zeros(theta.shape)
grad_npy[0] = -rescale_grad*((numpy.exp(-(X - theta1)**2/(2*vx))*(X - theta1)/vx
+ numpy.exp(-(X - theta1 - theta2)**2/(2*vx))*(X - theta1-theta2)/vx)/denominator).sum()\
+ theta1/v1
grad_npy[1] = -rescale_grad*((numpy.exp(-(X - theta1 - theta2)**2/(2*vx))*(X - theta1-theta2)/vx)/denominator).sum()\
+ theta2/v2
grad[:] = grad_npy
return grad
"""
Explanation: In this notebook, we will show how to replicate the toy example in the paper <a name="ref-1"/>(Welling and Teh, 2011). Here we have observed 20 instances from a mixture of Gaussians with tied means:
$$
\begin{aligned}
\theta_1 &\sim N(0, \sigma_1^2)\
\theta_2 &\sim N(0, \sigma_2^2)\
x_i &\sim \frac{1}{2}N(\theta_1, \sigma_x^2) + \frac{1}{2}N(\theta_1 + \theta_2, \sigma_x^2)
\end{aligned}
$$
We are asked to draw samples from the posterior distribution $p(\theta_1, \theta_2 \mid X)$. In the following, we will use stochastic gradient langevin dynamics (SGLD) to do the sampling.
End of explanation
"""
numpy.random.seed(100)
mx.random.seed(100)
theta1 = 0
theta2 = 1
sigma1 = numpy.sqrt(10)
sigma2 = 1
sigmax = numpy.sqrt(2)
X = load_synthetic(theta1=theta1, theta2=theta2, sigmax=sigmax, num=100)
minibatch_size = 1
total_iter_num = 1000000
lr_scheduler = SGLDScheduler(begin_rate=0.01, end_rate=0.0001, total_iter_num=total_iter_num,
factor=0.55)
optimizer = mx.optimizer.create('sgld',
learning_rate=None,
rescale_grad=1.0,
lr_scheduler=lr_scheduler,
wd=0)
updater = mx.optimizer.get_updater(optimizer)
theta = mx.random.normal(0, 1, (2,), mx.cpu())
grad = nd.empty((2,), mx.cpu())
samples = numpy.zeros((2, total_iter_num))
start = time.time()
for i in xrange(total_iter_num):
if (i+1)%100000 == 0:
end = time.time()
print "Iter:%d, Time spent: %f" %(i + 1, end-start)
start = time.time()
ind = numpy.random.randint(0, X.shape[0])
synthetic_grad(X[ind], theta, sigma1, sigma2, sigmax, rescale_grad=
X.shape[0] / float(minibatch_size), grad=grad)
updater('theta', grad, theta)
samples[:, i] = theta.asnumpy()
"""
Explanation: We first write the generation process. In the paper, the data instances are generated with the following parameters: $\sigma_1^2=10, \sigma_2^2=1, \sigma_x^2=2$, with true values $\theta_1=0, \theta_2=1$.
Also, we need to write a new learning rate schedule as described in the paper $\epsilon_t = a(b+t)^{-r}$
and calculate the gradient. After these preparations, we can go on with the sampling process.
End of explanation
"""
plt.hist2d(samples[0, :], samples[1, :], (200, 200), cmap=plt.cm.jet)
plt.colorbar()
plt.show()
"""
Explanation: We have collected 1000000 samples in the samples variable. Now we can draw the density plot. For more about SGLD, the original paper and <a name="ref-2"/>(Neal, 2011) are good references.
End of explanation
"""
|
tata-antares/tagging_LHCb | Stefania_files/vertex-based-tagging.ipynb | apache-2.0 | import pandas
import numpy
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve, roc_auc_score
from rep.metaml import FoldingClassifier
from rep.data import LabeledDataStorage
from rep.report import ClassificationReport
from rep.report.metrics import RocAuc
from utils import get_N_B_events, get_events_number, get_events_statistics
"""
Explanation: Import
End of explanation
"""
import root_numpy
data = pandas.DataFrame(root_numpy.root2array('datasets/1016_vtx.root'))
"""
Explanation: Reading initial data
End of explanation
"""
event_id_column = 'event_id'
data[event_id_column] = data.runNum.apply(str) + '_' + (data.evtNum.apply(int)).apply(str)
# reconstructing sign of B
data['signB'] = data.tagAnswer * (2 * data.iscorrect - 1)
# assure sign is +1 or -1
data['signVtx'] = (data.signVtx.values > 0) * 2 - 1
data['label'] = (data.signVtx.values * data.signB.values > 0) * 1
data.head()
get_events_statistics(data)
N_pass = get_events_number(data)
tagging_efficiency = 1. * N_pass / get_N_B_events()
tagging_efficiency_delta = numpy.sqrt(N_pass) / get_N_B_events()
print tagging_efficiency, tagging_efficiency_delta
"""
Explanation: Define label
label = signB * signVtx > 0
* same sign of B and vtx -> label = 1
* opposite sign of B and vtx -> label = 0
End of explanation
"""
sweight_threshold = 1.
data_sw_passed = data[data.N_sig_sw > sweight_threshold]
data_sw_not_passed = data[data.N_sig_sw <= sweight_threshold]
get_events_statistics(data_sw_passed)
"""
Explanation: Define B-like events for training and others for prediction
End of explanation
"""
features = ['mult', 'nnkrec', 'ptB', 'vflag', 'ipsmean', 'ptmean', 'vcharge',
'svm', 'svp', 'BDphiDir', 'svtau', 'docamax']
"""
Explanation: Define features
End of explanation
"""
data_sw_passed_lds = LabeledDataStorage(data_sw_passed, data_sw_passed.label, data_sw_passed.N_sig_sw)
base = RandomForestClassifier(n_estimators=300, max_depth=8, min_samples_leaf=50, n_jobs=4)
est_choose_RT = FoldingClassifier(base, features=features, random_state=13)
%time est_choose_RT.fit_lds(data_sw_passed_lds)
pass
report = ClassificationReport({'rf': est_choose_RT}, data_sw_passed_lds)
"""
Explanation: Find good vtx to define sign B
Trying to guess the sign of B based on the sign of the vertex. If the guess is good, the vertex will be used in the next step to train a classifier.
2-folding random forest selection for right tagged events
End of explanation
"""
report.compute_metric(RocAuc())
"""
Explanation: ROC AUC
End of explanation
"""
plot([0, 1], [1, 0], 'k--')
report.roc()
imp = numpy.sum([est.feature_importances_ for est in est_choose_RT.estimators], axis=0)
imp = pandas.DataFrame({'importance': imp, 'feature': est_choose_RT.features})
imp.sort('importance', ascending=False)
"""
Explanation: ROC curve
End of explanation
"""
from utils import plot_flattened_probs
probs = est_choose_RT.predict_proba(data_sw_passed)
flat_ss = plot_flattened_probs(probs, data_sw_passed.label.values, data_sw_passed.N_sig_sw.values, label=1)
flat_os = plot_flattened_probs(probs, data_sw_passed.label.values, data_sw_passed.N_sig_sw.values, label=0)
hist(probs[data_sw_passed.label.values == 1][:, 1], bins=60, normed=True, alpha=0.4)
hist(probs[data_sw_passed.label.values == 0][:, 1], bins=60, normed=True, alpha=0.4)
pass
"""
Explanation: Distributions of output
Using flattening of output with respect to one of classes
End of explanation
"""
mask = ((flat_ss(probs[:, 1]) < 0.6) & (data_sw_passed.label == 0)) | \
((flat_os(probs[:, 1]) > 0.2) & (data_sw_passed.label == 1))
data_sw_passed_selected = data_sw_passed[mask]
data_sw_passed_not_selected = data_sw_passed[~mask]
get_events_statistics(data_sw_passed_selected)
data_sw_passed_selected_lds = LabeledDataStorage(data_sw_passed_selected,
data_sw_passed_selected.label,
data_sw_passed_selected.N_sig_sw)
"""
Explanation: Select good vtx
Leaving for training only those events that were not too poorly predicted by the RandomForest.
End of explanation
"""
from hep_ml.decisiontrain import DecisionTrainClassifier
from hep_ml.losses import LogLossFunction
tt_base = DecisionTrainClassifier(learning_rate=0.02, n_estimators=9000, depth=6, pretransform_needed=True,
max_features=8, loss=LogLossFunction(regularization=100))
tt_folding_rf = FoldingClassifier(tt_base, n_folds=2, random_state=11, ipc_profile='ssh-ipy',
features=features)
%time tt_folding_rf.fit_lds(data_sw_passed_selected_lds)
pass
"""
Explanation: DT for good vtx
End of explanation
"""
report = ClassificationReport({'tt': tt_folding_rf}, data_sw_passed_selected_lds)
report.learning_curve(RocAuc())
report.compute_metric(RocAuc())
"""
Explanation: Report for selected vtx
End of explanation
"""
from hep_ml.decisiontrain import DecisionTrainClassifier
from hep_ml.losses import LogLossFunction
tt_base = DecisionTrainClassifier(learning_rate=0.02, n_estimators=1500, depth=6, pretransform_needed=True,
max_features=8, loss=LogLossFunction(regularization=100))
tt_folding = FoldingClassifier(tt_base, n_folds=2, random_state=11, ipc_profile='ssh-ipy',
features=features)
%time tt_folding.fit_lds(data_sw_passed_lds)
pass
"""
Explanation: Training on all vtx
In this case we don't use the RandomForest preselection.
DT full
End of explanation
"""
report = ClassificationReport({'tt': tt_folding}, data_sw_passed_lds)
report.learning_curve(RocAuc())
report.compute_metric(RocAuc())
report.roc()
"""
Explanation: Report for all vtx
End of explanation
"""
models = []
from utils import get_result_with_bootstrap_for_given_part
models.append(get_result_with_bootstrap_for_given_part(tagging_efficiency, tagging_efficiency_delta, tt_folding,
[data_sw_passed, data_sw_not_passed],
logistic=True, name="tt-log",
sign_part_column='signVtx', part_name='vertex'))
models.append(get_result_with_bootstrap_for_given_part(tagging_efficiency, tagging_efficiency_delta, tt_folding,
[data_sw_passed, data_sw_not_passed],
logistic=False, name="tt-iso",
sign_part_column='signVtx', part_name='vertex'))
models.append(get_result_with_bootstrap_for_given_part(tagging_efficiency, tagging_efficiency_delta, tt_folding_rf,
[data_sw_passed_selected,
data_sw_passed_not_selected,
data_sw_not_passed],
logistic=True, name="rf-tt-log",
sign_part_column='signVtx', part_name='vertex'))
models.append(get_result_with_bootstrap_for_given_part(tagging_efficiency,
tagging_efficiency_delta, tt_folding_rf,
[data_sw_passed_selected,
data_sw_passed_not_selected,
data_sw_not_passed],
logistic=False, name="rf-tt-iso",
sign_part_column='signVtx', part_name='vertex'))
"""
Explanation: Calibrating results $p(\text{vtx same sign}|B)$ and combining them
End of explanation
"""
pandas.concat(models)
"""
Explanation: Comparison of different models
End of explanation
"""
from utils import prepare_B_data_for_given_part
Bdata_prepared = prepare_B_data_for_given_part(tt_folding, [data_sw_passed, data_sw_not_passed], logistic=False,
sign_part_column='signVtx', part_name='vertex')
Bdata_prepared.to_csv('models/Bdata_vertex.csv', header=True, index=False)
"""
Explanation: Implementing the best vertex model
and saving its predictions
End of explanation
"""
|
kit-cel/wt | wt/vorlesung/ch7_9/size_weight.ipynb | gpl-2.0 | # importing
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=True)
matplotlib.rc('figure', figsize=(18, 6) )
"""
Explanation: Content and Objective
Show behavior and correlation and pdfs of a set of data being imported.
Pandas is being used since this significantly simplifies importing, extracting, and dealing with data frames.
Even if you are not interested in dealing with pandas, understanding of operations should still be possible.
Import
End of explanation
"""
# load data frame
df = pd.read_excel( 'Umfragedaten_v1_an.xlsx' )
# extract according data (size, weight) by slicing data out of the data frame
size_weight = df[ ['GESCHL', 'GRO', 'GEW'] ]
# removing NaN values
size_weight = size_weight.dropna( how='any' )
# extract from data-frame to numpy arrays
size = size_weight[ 'GRO' ].values
weight = size_weight[ 'GEW' ].values
# NOTE: Finding least squares solution for linear regression,
# which is not discussed in this leture
S = np.ones( (len(size) , 2) )
S[ :, 0 ] = size
params = np.dot( np.linalg.pinv( S ) , weight )
# including linear regression
regression = params[0] * np.array(size) + params[1]
"""
Explanation: Importing Data
NOTE: file "Umfragedaten_v1_an.xlsx" required; can be found at http://wikis.fu-berlin.de/pages/viewpage.action?pageId=696156185
NOTE 2: Importing and dealing with data may be relevant if your are interested in this stuff. Otherwise you may skip those lines.
NOTE 3: The following lines are using "pandas" being a Python module for dealing with data frames.
End of explanation
"""
# plotting
# point cloud of (size,weight) pairs
plt.plot( size, weight, '.', alpha=.3, ms = 24, mew = 2.0)
# linear regression
plt.plot( size, regression, linewidth=2.0, color='r' )
# histograms on x- and y-axis
bins = 20
w_hist = np.histogram( weight, bins = bins, density = 1 )
width = ( np.max( weight ) - np.min( weight) ) / bins
plt.barh( w_hist[1][:-1] , 120 + w_hist[0] / np.max( w_hist[0]) * 20, width, color = '#ff7f0e' )
s_hist = np.histogram( size, bins = bins, density = 1 )
width = ( np.max( size ) - np.min( size) ) / bins
plt.bar( s_hist[1][:-1] , s_hist[0] / np.max( s_hist[0]) * 30, width, color = '#ff7f0e' )
# axes and stuff
plt.grid( True )
plt.xlabel('$s/\mathrm{cm}$')
plt.ylabel('$w/\mathrm{kg}$')
plt.xlim( (120, 220 ) )
plt.ylim( (0, 220 ) )
# getting men and women
size_weight_m = size_weight[ size_weight.GESCHL == 'MAENNLICH' ]
size_weight_w = size_weight[ size_weight.GESCHL == 'WEIBLICH' ]
size_w = size_weight_w[ 'GRO' ].values
weight_w = size_weight_w[ 'GEW' ].values
size_m = size_weight_m[ 'GRO' ].values
weight_m = size_weight_m[ 'GEW' ].values
# plotting
# point cloud of (size,weight) pairs for women
plt.subplot(121)
plt.plot( size_w, weight_w, '.', alpha=.3, ms = 24, mew = 2.0)
# histograms on x- and y-axis
bins = 20
w_hist = np.histogram( weight_w, bins = bins, density = 1 )
width = ( np.max( weight_w ) - np.min( weight_w ) ) / bins
plt.barh( w_hist[1][:-1] , 120 + w_hist[0] / np.max( w_hist[0]) * 20, width, color = '#ff7f0e' )
s_hist = np.histogram( size_w, bins = bins, density = 1 )
width = ( np.max( size_w ) - np.min( size_w) ) / bins
plt.bar( s_hist[1][:-1] , s_hist[0] / np.max( s_hist[0]) * 30, width, color = '#ff7f0e' )
# axes and stuff
plt.title('Women')
plt.grid( True )
plt.xlabel('$s/\mathrm{cm}$')
plt.ylabel('$w/\mathrm{kg}$')
plt.xlim( (120, 220 ) )
plt.ylim( (0, 200 ) )
# now men
plt.subplot(122)
plt.plot( size_m, weight_m, '.', alpha=.3, ms = 24, mew = 2.0)
# histograms on x- and y-axis
bins = 20
w_hist = np.histogram( weight_m, bins = bins, density = 1 )
width = ( np.max( weight_m ) - np.min( weight_m ) ) / bins
plt.barh( w_hist[1][:-1] , 120 + w_hist[0] / np.max( w_hist[0]) * 20, width, color = '#ff7f0e' )
s_hist = np.histogram( size_m, bins = bins, density = 1 )
width = ( np.max( size_m ) - np.min( size_m) ) / bins
plt.bar( s_hist[1][:-1] , s_hist[0] / np.max( s_hist[0]) * 30, width, color = '#ff7f0e' )
# axes and stuff
plt.title('Men')
plt.grid( True )
plt.xlabel('$s/\mathrm{cm}$')
plt.ylabel('$w/\mathrm{kg}$')
plt.xlim( (120, 220 ) )
plt.ylim( (0, 200 ) )
"""
Explanation: Plotting Data
End of explanation
"""
# reduce to weights where size is within predefined interval
weight_160 = [ w for w, s in zip( weight, size ) if s <= 160 ]
weight_160_180 = [ w for w, s in zip( weight, size ) if 160 < s <= 180 ]
weight_180_ = [ w for w, s in zip( weight, size ) if s > 180 ]
# plotting
plt.subplot(141)
plt.hist( weight, bins=bins, color='#ff7f0e', density = 1 )
plt.grid(True)
plt.xlim( ( np.min(weight), np.max(weight) ) )
plt.ylim( (0, .1 ) )
plt.xlabel('$w/\mathrm{kg}$')
plt.title('$H_{{{}}}(w)$'.format(len(size)))
plt.subplot(142)
plt.hist( weight_160, bins=bins, color='#ff7f0e', density = 1 )
plt.grid(True)
plt.xlim( ( np.min(weight), np.max(weight) ) )
plt.ylim( (0, .1 ) )
plt.xlabel('$w/\mathrm{kg}$')
plt.title('$H_{{{}}}(w|s<160)$'.format(len(weight_160)))
plt.subplot(143)
plt.hist( weight_160_180, bins=bins, color='#ff7f0e', density = 1 )
plt.grid(True)
plt.xlim( ( np.min(weight), np.max(weight) ) )
plt.ylim( (0, .1 ) )
plt.xlabel('$w/\mathrm{kg}$')
plt.title('$H_{{{}}}(w|s\\in(160,180))$'.format(len(weight_160_180)))
plt.subplot(144)
plt.hist( weight_180_, bins=bins, color='#ff7f0e', density = 1 )
plt.grid(True)
plt.xlim( ( np.min(weight), np.max(weight) ) )
plt.ylim( (0, .1 ) )
plt.xlabel('$w/\mathrm{kg}$')
plt.title('$H_{{{}}}(w| s>180 )$'.format(len(weight_180_)))
"""
Explanation: Get Marginal PDFs and Plot
End of explanation
"""
# output various numbers
print('Number of data sets: \t\t\t\t{}'.format( len( weight ) ) )
print('Number of data sets with s <= 160: \t\t{}'.format( len( weight_160) ) )
print('Number of data sets with 160 < s <= 180: \t{}'.format( len( weight_160_180 ) ) )
print('Number of data sets with s > 180: \t\t{}\n'.format( len( weight_180_) ) )
print('----------')
print('Notation: S = Size; W = Weight\n')
print('E( S ) = {:2.2f} cm'.format( np.average( size) ) )
print('D( S ) = {:2.2f} cm\n'.format( np.std( size) ) )
print('E( W ) = {:2.2f} kg'.format( np.average( weight) ) )
print('D( W ) = {:2.2f} kg\n'.format( np.std( weight) ) )
print('E( W | S <= 160 ) = \t\t{:2.2f} kg'.format( np.average( weight_160) ) )
print('E( W | 160 < S <= 180 ) = \t{:2.2f} kg'.format( np.average( weight_160_180) ) )
print('E( W | S > 180 ) = \t\t{:2.2f} kg\n'.format( np.average( weight_180_) ) )
print('----------')
# find and print least squares solution
print('Parameter estimation in linear model w = a s + b: a = {:2.2f} kg/cm, b = {:2.2f} kg'.format( params[0], params[1] ) )
"""
Explanation: Printing Some Numbers
End of explanation
"""
rho = np.corrcoef( size, weight )
print('Correlation coefficient: {:2.4f}\n'.format( rho[0,1] ) )
rho_w = np.corrcoef( size_w, weight_w )
print('Correlation coefficient women: {:2.4f}\n'.format( rho_w[0,1] ) )
rho_m = np.corrcoef( size_m, weight_m )
print('Correlation coefficient men: {:2.4f}\n'.format( rho_m[0,1] ) )
"""
Explanation: Find and Print Correlation Coefficient
End of explanation
"""
|
ling7334/tensorflow-get-started | mnist/Deep_MNIST_for_Experts.ipynb | apache-2.0 | import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
"""
Explanation: Deep MNIST for Experts
TensorFlow is a powerful library for doing large-scale numerical computation. One of the tasks it excels at is implementing and training deep neural networks.
In this tutorial we will learn the basic steps of building a TensorFlow model, and use those steps to construct a deep convolutional network for MNIST.
This tutorial assumes that you are already familiar with neural networks and the MNIST dataset. If you are not, please see the beginners' guide.
About this tutorial
This tutorial first explains the code in mnist_softmax.py, a simple application of a TensorFlow model, and then shows some ways to improve its accuracy.
You can run the code in this tutorial, or simply read through it.
This tutorial will:
Create a softmax regression model that recognizes digits from input MNIST images, and train it with TensorFlow by looking at hundreds of examples (running our first TensorFlow session)
Check the model's accuracy using the test data
Build, train, and test a multilayer convolutional neural network to improve the accuracy
Setup
Before creating the model, we first load the MNIST dataset and start a TensorFlow session.
Load MNIST data
For convenience, we have prepared a script that automatically downloads and imports the MNIST dataset. It creates a directory named MNIST_data to store the data.
End of explanation
"""
import tensorflow as tf
sess = tf.InteractiveSession()
"""
Explanation: Here, mnist is a lightweight class. It stores the training, validation, and test sets as NumPy arrays. It also provides a function for iterating through minibatches of the data, which we will use below.
Run a TensorFlow InteractiveSession
TensorFlow relies on a highly efficient C++ backend to do its computation. The connection to this backend is called a session. In general, the flow of a TensorFlow program is to first create a graph and then launch it in a session.
Here we use the more convenient InteractiveSession class, which makes structuring your code more flexible. It allows you to interleave operations that build the computation graph with ones that run parts of it, which is particularly convenient when working in interactive contexts like IPython. If you are not using an InteractiveSession, you need to build the entire computation graph before starting the session and then launch the graph.
End of explanation
"""
x = tf.placeholder("float", shape=[None, 784])
y_ = tf.placeholder("float", shape=[None, 10])
"""
Explanation: Computation graph
To do efficient numerical computation in Python, we typically use libraries like NumPy, which carry out expensive operations such as matrix multiplication outside the Python environment, using highly efficient code implemented in another language.
Unfortunately, there can still be a lot of overhead from switching back to Python for every operation. This overhead is especially bad if you want to run computations on GPUs or in a distributed manner, where the cost mostly comes from transferring data.
TensorFlow also does its heavy lifting outside Python, but it goes a step further to avoid this overhead. Instead of running a single expensive operation independently from Python, TensorFlow lets us first describe a graph of interacting operations and then run it entirely outside Python. This is similar to the approach taken by Theano or Torch.
The purpose of the Python code is therefore to build this external computation graph, and to dictate which parts of the graph should be run. See the Computation Graph section of Basic Usage for details.
Build a Softmax Regression Model
In this section we will build a softmax regression model with a single linear layer. In the next section, we will extend it to a softmax regression model with a multilayer convolutional network.
Placeholders
We start building the computation graph by creating nodes for the input images and the target output classes.
End of explanation
"""
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
"""
Explanation: Here x and y_ are not specific values. Rather, each is a placeholder, a value that we will input when we ask TensorFlow to run a computation.
The input images x form a 2d tensor of floating point numbers, with shape [None, 784], where 784 is the dimensionality of a single flattened MNIST image and None indicates that the first dimension, corresponding to the batch size, can be of any length. The target output classes y_ are also a 2d tensor, where each row is a one-hot 10-dimensional vector indicating which digit class the corresponding MNIST image belongs to.
The shape argument of a placeholder is optional, but providing it allows TensorFlow to automatically catch bugs stemming from inconsistent tensor shapes.
Variables
We now define the weights W and biases b for our model. We could imagine treating them as additional inputs, but TensorFlow has a better way to handle them: Variable. A Variable represents a value living in TensorFlow's computation graph that can be used and even modified during the computation. In machine learning applications, the model parameters are generally Variables.
End of explanation
"""
sess.run(tf.global_variables_initializer())
"""
Explanation: We pass the initial value in the call to tf.Variable. In this example, we initialize both W and b as tensors full of zeros. W is a 784x10 matrix (because we have 784 input features and 10 outputs) and b is a 10-dimensional vector (because we have 10 classes).
Before Variables can be used within a session, they must be initialized using that session. This initialization step takes the initial values that have been specified (in this case, all zeros) and assigns them to each Variable. It can be done for all Variables at once.
End of explanation
"""
y = tf.matmul(x,W) + b
"""
Explanation: Predicted Class and Loss Function
We can now implement our regression model. It only takes one line! We multiply the vectorized input images x by the weight matrix W and add the bias b.
End of explanation
"""
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
"""
Explanation: We can specify a loss function to indicate how bad the model's prediction is on an example; we try to minimize this loss over the whole training process. Here our loss function is the cross-entropy between the target class and the predicted class. As in the beginners' tutorial, we use the numerically stable formulation:
End of explanation
"""
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
"""
Explanation: Note that tf.nn.softmax_cross_entropy_with_logits internally applies the softmax to the model's unnormalized prediction and sums across all classes, while tf.reduce_mean takes the average over these sums.
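To see why computing the cross-entropy directly from raw logits is numerically safer than applying softmax and log separately, here is a minimal NumPy sketch of the log-sum-exp trick. This is only an illustration of the idea, not TensorFlow's actual implementation:

```python
import numpy as np

def stable_softmax_cross_entropy(logits, labels):
    # subtract the row max so exp() never overflows (log-sum-exp trick)
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # average cross-entropy over the batch
    return -(labels * log_probs).sum(axis=1).mean()

logits = np.array([[1000.0, 0.0]])  # a naive exp(1000.0) would overflow to inf
labels = np.array([[1.0, 0.0]])
print(stable_softmax_cross_entropy(logits, labels))  # finite, close to 0.0
```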
Train the Model
Now that we have defined our model and its training loss function, training with TensorFlow is straightforward. Because TensorFlow knows the entire computation graph, it can use automatic differentiation to find the gradients of the loss with respect to each variable. TensorFlow ships with a variety of built-in optimization algorithms; in this example we use steepest gradient descent with a step length of 0.5 to descend the cross-entropy.
End of explanation
"""
for _ in range(1000):
batch = mnist.train.next_batch(100)
train_step.run(feed_dict={x: batch[0], y_: batch[1]})
"""
Explanation: What this single line actually does is add new operations to the computation graph, including ones that compute gradients, compute parameter update steps, and apply those updates to the parameters.
The returned train_step op, when run, applies the gradient descent updates to the parameters. The whole model can therefore be trained by running train_step repeatedly.
End of explanation
"""
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
"""
Explanation: On each training iteration we load a batch of 100 training examples (matching next_batch(100) in the code above) and run train_step, using feed_dict to replace the placeholder tensors x and y_ with the training data.
Note that you can use feed_dict to replace any tensor in the computation graph; it is not restricted to placeholders.
Evaluate the Model
So how well does our model perform?
First we figure out which labels we predicted correctly. tf.argmax is an extremely useful function that gives the index of the highest entry in a tensor along a given axis. Since each label vector consists of 0s and a single 1, the index of the maximum value 1 is the class label. For example, tf.argmax(y,1) is the label our model considers most likely for each input x, while tf.argmax(y_,1) is the true label. We can use tf.equal to check whether our prediction matches the true label (equal index positions indicate a match).
End of explanation
"""
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
"""
Explanation: This gives us an array of booleans. To determine the fraction that are correct, we cast the booleans to floating point numbers and take the mean. For example, [True, False, True, True] becomes [1, 0, 1, 1], which averages to 0.75.
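The cast-and-average step can be mirrored in plain NumPy; this small sketch uses the example values from the text:

```python
import numpy as np

correct_prediction = np.array([True, False, True, True])
# booleans cast to floats become 1.0 / 0.0, so the mean is the accuracy
accuracy = correct_prediction.astype(np.float32).mean()
print(accuracy)  # 0.75
```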
End of explanation
"""
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
"""
Explanation: Finally, we can evaluate our accuracy on the test data; it should be about 92%.
End of explanation
"""
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
"""
Explanation: Build a Multilayer Convolutional Network
Getting about 92% accuracy on MNIST is bad. In this section we fix that with a slightly more sophisticated model: a convolutional neural network. This gets us to around 99.2% accuracy, which is not state of the art but still quite respectable.
Weight Initialization
To create this model, we need to create a lot of weights and biases. Weights should generally be initialized with a small amount of noise to break symmetry and avoid zero gradients. Since we are using [ReLU](https://en.wikipedia.org/wiki/Rectifier_%28neural_networks%29) neurons, it is also good practice to initialize the biases with a small positive value, to avoid "dead neurons" whose output is permanently zero. Instead of repeating this while building the model, we define two handy functions for initialization.
End of explanation
"""
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
"""
Explanation: Convolution and Pooling
TensorFlow gives us a lot of flexibility in convolution and pooling. How do we handle the boundaries? What stride size should we use? In this example we always choose the vanilla version: our convolutions use a stride of one and zero padding ('SAME'), so that the output is the same size as the input, and our pooling is plain old max pooling over 2x2 blocks. To keep the code cleaner, we abstract these operations into functions.
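The size bookkeeping implied here can be checked with a few lines: with 'SAME' padding the output side length is ceil(input / stride), so stride-1 convolutions preserve 28x28 while each 2x2, stride-2 pooling halves it. This arithmetic sketch is my own addition, not part of the tutorial:

```python
import math

def same_out(size, stride):
    # output side length under 'SAME' padding: ceil(input / stride)
    return math.ceil(size / stride)

size = 28
size = same_out(size, 1)  # first convolution, stride 1 -> 28
size = same_out(size, 2)  # first 2x2 max pool, stride 2 -> 14
size = same_out(size, 1)  # second convolution -> 14
size = same_out(size, 2)  # second max pool -> 7
print(size)  # 7
```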
End of explanation
"""
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
"""
Explanation: First Convolutional Layer
We can now implement our first layer. It consists of a convolution followed by max pooling. The convolution computes 32 features for each 5x5 patch. Its weight tensor has the shape [5, 5, 1, 32]: the first two dimensions are the patch size, the next is the number of input channels, and the last is the number of output channels. There is also a bias for each output channel.
End of explanation
"""
x_image = tf.reshape(x, [-1,28,28,1])
"""
Explanation: To apply the layer, we first reshape x into a 4d tensor, whose second and third dimensions correspond to image width and height, and whose final dimension is the number of color channels (1 here, since these are grayscale images; it would be 3 for RGB images).
End of explanation
"""
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
"""
Explanation: We convolve x_image with the weight tensor, add the bias, apply the ReLU activation function, and finally max pool. The max_pool_2x2 step reduces the image size to 14x14.
End of explanation
"""
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
"""
Explanation: Second Convolutional Layer
In order to build a deep network, we stack several layers of this type. The second layer computes 64 features for each 5x5 patch.
End of explanation
"""
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
"""
Explanation: Densely Connected Layer
Now that the image size has been reduced to 7x7, we add a fully-connected layer with 1024 neurons to allow processing of the entire image. We reshape the tensor from the pooling layer into a batch of vectors, multiply by a weight matrix, add a bias, and apply a ReLU.
End of explanation
"""
keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
"""
Explanation: Dropout
To reduce overfitting, we apply dropout before the readout layer. We create a placeholder for the probability that a neuron's output is kept during dropout, which allows us to turn dropout on during training and off during testing. TensorFlow's tf.nn.dropout op automatically handles scaling neuron outputs in addition to masking them, so dropout works without any additional scaling.<sup>1</sup>
1: In fact, for this small convolutional network, performance is nearly identical with and without dropout. Dropout is often very effective at reducing overfitting, but it is most useful when training very large neural networks.
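The automatic rescaling mentioned above is commonly implemented as "inverted dropout". A NumPy sketch of the idea follows; this is my own illustration, not TensorFlow's code:

```python
import numpy as np

def inverted_dropout(x, keep_prob, rng):
    # keep each unit with probability keep_prob and scale survivors
    # by 1/keep_prob so the expected activation is unchanged
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

rng = np.random.default_rng(0)
x = np.ones(10000)
out = inverted_dropout(x, keep_prob=0.5, rng=rng)
print(out.mean())  # close to 1.0, so no rescaling is needed at test time
```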
End of explanation
"""
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
"""
Explanation: Readout Layer
Finally, we add a softmax layer, just like in the single-layer softmax regression above.
End of explanation
"""
cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
sess.run(tf.global_variables_initializer())
for i in range(20000):
batch = mnist.train.next_batch(50)
if i%100 == 0:
train_accuracy = accuracy.eval(feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0})
print("step %d, training accuracy %g" % (i, train_accuracy))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
# print ("test accuracy %g" % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
# TensorFlow may throw an OOM error if we evaluate accuracy on the whole test set at once and memory is insufficient
cross_accuracy = 0
for i in range(100):
testSet = mnist.test.next_batch(50)
each_accuracy = accuracy.eval(feed_dict={ x: testSet[0], y_: testSet[1], keep_prob: 1.0})
cross_accuracy += each_accuracy
print("test %d accuracy %g" % (i,each_accuracy))
print("test average accuracy %g" % (cross_accuracy/100,))
"""
Explanation: Train and Evaluate the Model
How well does this model do? To train and evaluate it, we use code that is nearly identical to that of the simple one-layer SoftMax network above.
The differences are:
We replace the steepest gradient descent optimizer with the more sophisticated ADAM optimizer.
We include the additional parameter keep_prob in feed_dict to control the dropout rate.
We add logging to every 100th iteration in the training process.
Feel free to run this code, but be aware that it performs 20,000 training iterations and may take a while (possibly over half an hour).
End of explanation
"""
import nltk
nltk.download()
"""
Explanation: A Noob's guide to Text Classification Using NLTK, Scikit and Gensim
The Summer of 2015 was very productive! I got an opportunity to work with a startup company on a Text Classification problem. We were dealing with a very large HTML corpus which made it all the more challenging to load, process and make sense of the data.
This tutorial (hopefully) will try to present a more <b>verbose walkthrough of text classification</b> and discuss a few libraries, techniques and hacks that could come in handy while working on text classification.
Here is a bit of background about me before we dive right in: <i>"I am a Computer Science grad student at The University at Buffalo, SUNY, New York. I hold my bachelor's degree in Telecommunication Engineering. I work with Dr. Kris Schindler on developing an augmentative communication system for the speech-impaired using Brain Computer Interfaces and Natural Language Processing (NLP)."</i>
Pre-requisite
The tutorial assumes that you have some basic working knowledge of Programming in Python. If you have never programmed in python before, then pause this tutorial for a second and check out <a href='http://www.swaroopch.com/notes/python/'> <b>A byte of Python</b> </a>. The ebook serves as a tutorial or guide to the Python language for the beginner audience. I would also highly recommend the <a href='https://www.udacity.com/course/programming-foundations-with-python--ud036'> <b> Programming foundations with python by Udacity </b></a>.
<br/>
This tutorial also assumes that you are familiar with some basic machine learning concepts, especially classification algorithms such as Logistic Regression, SGDClassifier and Multinomial Naive Bayes. Then again, I'll provide you with resources that will help you understand the theory wherever required. If you are looking for a good ML tutorial online, I would highly recommend taking the <a href='https://www.udacity.com/course/intro-to-machine-learning--ud120'> <b>Introduction to Machine Learning by Udacity </b></a> and <a href='https://www.coursera.org/learn/machine-learning/home/info'><b> Introduction to Machine Learning by Andrew Ng </b></a> courses.
<br/> If you are new to ipython notebook, head <a href='http://ipython.org/notebook.html'> here</a>
Let's get started, shall we?
Installation Instruction
Download Anaconda from <a href='http://continuum.io/downloads'>Here</a>.
<br/>Anaconda comes pre-packaged with all the libraries that we will need in this particular tutorial. We will be using Gensim, NLTK and Scikit-learn in particular.
<br/><b> Windows Installation Instruction </b>
<br/> Download the Windows Installer from the <a href = 'http://continuum.io/downloads'>link</a> .Voila!!
<br/>
<b> Linux Installation Instruction </b>
<br/> Download from the link provided and in your terminal window type the following, replacing the file path and name with those of your downloaded install file. Follow the prompts on the installer screens. If you are unsure about any setting, simply accept the defaults, as they can all be changed later:
<code> bash ~/Downloads/Anaconda-2.3.0-Linux-x86_64.sh </code>
<br/><b> Mac OSX Installation Instruction </b>
<br/> Download and install the Setup file from the link. NOTE: You may see a screen that says “You cannot install Anaconda in this location. The Anaconda installer does not allow its software to be installed here.” To fix this click the “Install for me only” button with the house icon and continue the installation.
<br/><img src='http://docs.continuum.io/_images/osxbug2.png' height="500" width="500"/>
<img src='http://docs.continuum.io/_images/customizebutton.png' height="500" width="500"/>
<img src='http://docs.continuum.io/_images/pathoption.png' height="500" width="500"/>
<br/>
<br/>
FYI: I am running a 64 bit Ubuntu 14.04 LTS with Intel Core i5 CPU and 8 Gigs of RAM.
About the Dataset
We will be using the Reuters Dataset that comes bundled with the nltk package. You can download the dataset from the following <a href='https://app.box.com/s/aatjmz041urfmp5i7ik3nli6o60zczj7'>link</a>.
<br/> If you wish to download all the datasets that come bundled with nltk, then run the code snippet below. You'll then be prompted by the NLTK downloader. Choose and download all the packages. It might take some time for all the corpora to be downloaded.
End of explanation
"""
from sklearn.datasets import load_files
"""
Explanation: The First Step
The first step in any machine learning problem is to go through the dataset and understand the structure. This will give us a better clarity when we start modelling the input data. We begin by extracting the dataset and notice that it contains a training, test folder. We observe that there are about 90 categories in the reuters dataset with about 7769 documents in the training set and 3019 documents in the test set. The distribution of categories in the corpus is highly skewed,
with 36.7% of the documents in the most common category, and only 0.0185% (2 documents) in each of the five least common categories. In fact, the original data source is even more skewed---in creating the corpus, any categories that did not contain at least one document in the training set and one document in the test set were removed from
the corpus by its original creator.
The Readme File should give you more information about the dataset.The <i>cats.txt</i> file contains the mapping of input filename to their respective category. There is also a stopword.txt file that contains a list of stop words. We will discuss more about the stopwords in the coming sections.
<br/> The first thing to do is to load the dataset. Though there is a <i><u>CategorizedPlaintextCorpusReader</u></i> function in nltk to do this, I am more inclined towards using the <i><u>load_files</u></i> function provided with scikit-learn to load the data. This is mainly because it is much easier to handle data in scikit, as the load_files function returns a data bunch. A Bunch in Python lets you access Python dicts as objects.
Loading the dataset using Scikit-learn
<img src='http://scipy-lectures.github.io/_images/scikit-learn-logo.png' height=500px, width=500px/>
Scikit-learn is an amazing library for quickly coding up your machine learning project. It provides some very easy and useful functions to do pretty much everything from classification and regression to clustering. We are going to dive right into scikit-learn and use the <i><b>sklearn.datasets.load_files</b></i> function to do this. Please note that load_files expects a certain directory structure for it to work.
load_files loads the text files with categories as subfolder names.
Individual samples are assumed to be files stored a two levels folder structure such as the following:
container_folder/
category_1_folder/
file_1.txt
file_2.txt ...
file_42.txt
category_2_folder/
file_43.txt
file_44.txt ...
The folder names (categoty_1_folder, category_2_folder etc.) are used as target labels. The individual file names are not very important.
Check out the following <a href='http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_files.html'> documentation page </a> for detailed explanation on <i>load_files</i> module.
End of explanation
"""
import os
import os.path
from os.path import join
import shutil
# root path of the dataset
rootpath = '/media/arvindramesh/CCF02769F02758CA/TextClassification-NewsGroup/reuters/'
# the cats.txt file that contains the mapping between the file and their respective categories
catpath= '/media/arvindramesh/CCF02769F02758CA/TextClassification-NewsGroup/reuters/cats.txt'
# path were the newly setup dataset will reside
newpath= '/media/arvindramesh/CCF02769F02758CA/TextClassification-NewsGroup/'
# using with to open a file will automatically handle the closing of file handler
with open(catpath) as catsfile:
for line in catsfile:
key = line.split()[0] # path and filename
value= line.split()[1] # category
# create directory if it does not exists
if not os.path.exists(join(newpath,key.split('/')[0],value)): os.makedirs(join(newpath,key.split('/')[0],value))
#shutil.copy2(source,destination) lets you copy the files from the source directory to destination
shutil.copy2(join(rootpath,key), join(newpath,key.split('/')[0],value))
print "DONE"
"""
Explanation: Now we can't begin to directly port the Reuters dataset. We need to segregate the training examples(documents) into its respective category folder for scikit to load it. Python OS module to the rescue!!
<img src="https://i.imgflip.com/p46f8.jpg" title="made at imgflip.com" width=300px, height=300px/>
End of explanation
"""
from sklearn.datasets import load_files
training_data = load_files('/media/arvindramesh/CCF02769F02758CA/TextClassification-NewsGroup/training/')
print "Loaded " + str(len(training_data.filenames)) + " Training Documents "
"""
Explanation: VOILA!
Now we have the input data in the required structure ready. Let us go ahead and port it with
<b><i>sklearn.datasets.load_files</i></b>.
End of explanation
"""
# category of the first document in the bunch
print "TARGET NAME : " + training_data.target_names[training_data.target[0]]
# data of the first document in the bunch
print "DATA : " + training_data.data[0][:500]
# Target value of the first document in the bunch
print "TARGET : " + str(training_data.target[0])
# filename of the first document in the bunch
print "FILENAME: " + training_data.filenames[0]
"""
Explanation: As discussed earlier, the <b><i>load_files</i></b> function returns a data bunch which consists of <b>{target_names,data,target, DESCR, filenames}</b>. We can access a particular file from the training example as follows:
End of explanation
"""
stopwords_list = []
with open('/media/arvindramesh/CCF02769F02758CA/TextClassification-NewsGroup/reuters/stopwords') as f:
for line in f:
stopwords_list.append(line.strip())
print "Stop Words List :"
print stopwords_list[:10]
print "...."
from sklearn.feature_extraction.text import CountVectorizer
import datetime,re
print ' [process started: ' + str(datetime.datetime.now()) + ']'
# Initialize the "CountVectorizer" object, which is scikit-learn's bag of words tool.
count_vect = CountVectorizer(analyzer = "word", stop_words= set(stopwords_list))
# fit_transform() does two functions: First, it fits the model
# and learns the vocabulary; second, it transforms our training data
# into feature vectors. The input to fit_transform should be a list of
# strings.
X_train_count= count_vect.fit_transform(training_data.data)
print ' [process ended: ' + str(datetime.datetime.now()) + ']'
print "Created a Sparse Matrix with " + str(X_train_count.shape[0]) + " Documents and "+ str(X_train_count.shape[1]) + " Features"
"""
Explanation: Feature Extraction
The key to building a machine learning system is feature extraction. Often, 80% of your time and effort in a machine learning project is spent on finding techniques that give you good features to work with. Even an effective algorithm becomes useless with bad feature selection. The most popular technique used in text classification is the <a href='https://en.wikipedia.org/wiki/Bag-of-words_model'> <b>Bag of Words</b> </a> representation, which converts the input text data into a numeric representation. The Bag of Words model learns a vocabulary from all of the documents, then models each document by counting the number of times each word appears. *For example, consider the following two sentences:
<br/>
Sentence 1: "The cat sat on the hat"
<br/>
Sentence 2: "The dog ate the cat and the hat"
<br/>
From these two sentences, our vocabulary is as follows:
<br/>
{ the, cat, sat, on, hat, dog, ate, and }
<br/>"
To get our bags of words, we count the number of times each word occurs in each sentence. In Sentence 1, "the" appears twice, and "cat", "sat", "on", and "hat" each appear once, so the feature vector for Sentence 1 is:
<br/>
{ the, cat, sat, on, hat, dog, ate, and }
<br/>
Sentence 1: { 2, 1, 1, 1, 1, 0, 0, 0 }
<br/>
Similarly, the features for Sentence 2 are: { 3, 1, 0, 0, 1, 1, 1, 1}
<br/>
*Example and explanation taken from <a href='https://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-1-for-beginners-bag-of-words'>Kaggle</a>
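The two-sentence example above can be reproduced in a few lines of plain Python (a from-scratch sketch; the scikit-learn CountVectorizer used below does the same job far more efficiently with a sparse matrix):

```python
sentences = ["The cat sat on the hat",
             "The dog ate the cat and the hat"]

# learn the vocabulary in first-seen order
vocab = []
for sentence in sentences:
    for word in sentence.lower().split():
        if word not in vocab:
            vocab.append(word)

# count each vocabulary word per sentence
vectors = [[sentence.lower().split().count(word) for word in vocab]
           for sentence in sentences]

print(vocab)    # ['the', 'cat', 'sat', 'on', 'hat', 'dog', 'ate', 'and']
print(vectors)  # [[2, 1, 1, 1, 1, 0, 0, 0], [3, 1, 0, 0, 1, 1, 1, 1]]
```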
Let us fire up the <b><i> sklearn.feature_extraction.text</i></b> and start building the bag of words representation for the training data.
<br/> While constructing the bag of words representation, we come across words such as "the", "a", "am".. which do not add any meaning. So these words are filtered out before processing the input text. This is primarily done to reduce the dimensionality of the data. We begin by importing the stopwords.txt file from the reuters corpus into a set and passing that as a argument to CountVectorizer.
End of explanation
"""
print count_vect.vocabulary_.get(u'oil')
"""
Explanation: What <b><i>CountVectorizer</i></b> does is create a sparse matrix where every word in the input corpus is mapped to a unique integer. The index value of a word in the vocabulary is linked to its frequency in the whole training corpus. For instance the word oil is assigned a unique integer of 16654
End of explanation
"""
from sklearn.feature_extraction.text import TfidfTransformer
print '[process started: ' + str(datetime.datetime.now()) + ']'
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_count)
print '[process ended: ' + str(datetime.datetime.now()) + ']'
print X_train_tfidf.shape
"""
Explanation: From Occurences to Frequencies
Occurrence count is a good start but there is an issue: longer documents will have higher average count values than shorter documents, even though they might talk about the same topics.
To avoid these potential discrepancies it suffices to divide the number of occurrences of each word in a document by the total number of words in the document: these new features are called tf for Term Frequencies.
Another refinement on top of tf is to downscale weights for words that occur in many documents in the corpus and are therefore less informative than those that occur only in a smaller portion of the corpus.
This downscaling is called tf–idf for “Term Frequency times Inverse Document Frequency”.
<br/> Refer to the following <a href='http://nlp.stanford.edu/IR-book/html/htmledition/term-frequency-and-weighting-1.html'> Link </a> for a detailed explanation on TFIDF
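The textbook formula, term frequency times the log of inverse document frequency, can be sketched as follows. Note that scikit-learn's TfidfTransformer additionally smooths the idf and L2-normalizes each row, so its exact numbers differ; the toy documents here are made up for illustration:

```python
import math

docs = [["oil", "price", "rise"],
        ["oil", "output", "cut"],
        ["trade", "deficit", "widens"]]

def tf_idf(term, doc, docs):
    tf = doc.count(term) / len(doc)          # term frequency within this doc
    df = sum(1 for d in docs if term in d)   # number of docs containing the term
    return tf * math.log(len(docs) / df)     # downweight terms common to many docs

print(tf_idf("oil", docs[0], docs))    # common term -> lower weight
print(tf_idf("trade", docs[2], docs))  # rarer term -> higher weight
```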
End of explanation
"""
from sklearn.naive_bayes import MultinomialNB
print ' [Classification Started: ' + str(datetime.datetime.now()) + ']'
# you fit the NB model
clf = MultinomialNB().fit(X_train_tfidf, training_data.target)
print ' [Classification ended: ' + str(datetime.datetime.now()) + ']'
"""
Explanation: Naive Bayes Classifier
<img src="http://cdn.meme.am/instances2/500x/1291441.jpg" title="made at imgflip.com" height=400px, width=400px/>
<br/>
I am not going to bore you with the details here. Let us just say that Naive Bayes is a very solid yet simple algorithm when it comes to classifying text, and it should be the very first algorithm that you try. Scikit has a very good implementation of Naive Bayes and we will be using it to classify our text. For the curious souls, here is more about the <a href='https://en.wikipedia.org/wiki/Naive_Bayes_classifier'>NB Classifier</a>.
End of explanation
"""
from __future__ import division
import numpy as np
test_data= load_files('/media/arvindramesh/CCF02769F02758CA/TextClassification-NewsGroup/test/',shuffle=True, encoding='ISO-8859-2')
count=0
print ' [Classification Started: ' + str(datetime.datetime.now()) + ']'
for i in range(0,len(test_data.filenames)):
docs_test = [test_data.data[i]]
# Apply the count vectorizer we used to fit the training data on the test data
doc_test_counts = count_vect.transform(docs_test)
# apply the tfidf transformation
doc_test_tfidf = tfidf_transformer.transform(doc_test_counts)
# predict
predicted = clf.predict(doc_test_tfidf)
# Predicted label based on the classifier prediction above
predicted_label=training_data.target_names[predicted]
# True label of test document
true_label=test_data.target_names[test_data.target[i]]
# calculate the accuracy
if predicted_label==true_label:
count+=1
print ' [Classification Ended: ' + str(datetime.datetime.now()) + ']'
print "ACCURACY : " + str(count/len(test_data.filenames))
"""
Explanation: Testing the Performance of the Classifier
We now test the performance of the classifier in terms of accuracy. We import the test documents using <b><i>load_files</i></b> the same way we did for the training documents, then apply the <b>CountVectorizer</b> and <b>TFIDF</b> transformations to them. The predict function outputs the predicted label for each test document.
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
print count_vect
"""
Explanation: Did you just say 64.392182842 % Accuracy?
<br/>
The good news is we could do much better than this!!!!
<br/>
<img src="https://i.imgflip.com/p5w40.jpg" title="made at imgflip.com" width=300px height=300px/>
<br/>
The bad performance of Naive Bayes is a clear indication that the feature selection method we chose can be improved. Hence, in this section we will spend some time tweaking the features to see if we can improve the accuracy of the classifier.
Fortunately the <b><i>CountVectorizer</i></b> provides arguments that we can tweak. Let us dive right in.
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer
import datetime,re
print ' [process started: ' + str(datetime.datetime.now()) + ']'
# Initialize the "CountVectorizer" object, which is scikit-learn's bag of words tool.
count_vect = CountVectorizer(analyzer = "word", stop_words= set(stopwords_list), min_df=3, max_df=0.5, lowercase=True,
ngram_range=(1,3))
# fit_transform() does two functions: First, it fits the model
# and learns the vocabulary; second, it transforms our training data
# into feature vectors. The input to fit_transform should be a list of
# strings.
X_train_count= count_vect.fit_transform(training_data.data)
print ' [process ended: ' + str(datetime.datetime.now()) + ']'
print "Created a Sparse Matrix with " + str(X_train_count.shape[0]) + " Documents and "+ str(X_train_count.shape[1]) + " Features"
"""
Explanation: We can see the default arguments of <b><i>CountVectorizer</i></b> by printing the CountVectorizer instance as shown above. The first and foremost parameter that we must pay attention to is <b>lowercase</b>
<br/>
This is a very important argument for <a href='https://en.wikipedia.org/wiki/Dimensionality_reduction'>dimensionality reduction</a>. Setting lowercase=False will treat the words <b><u>Cat</u></b> and <b><u>cat</u></b> differently even though they mean the same thing. Thus it is advisable to either lowercase the input data or set <u>lowercase=True</u> in CountVectorizer. CountVectorizer sets this parameter to True by default, so we do not have to set it explicitly while initializing.
<br/> The <b><i>max_df</i></b> and <b><i>min_df</i></b> arguments help filter out the <u>most common</u> and <u>rarest</u> words respectively. We avoid using words that are present in almost every document, as they may not be very indicative of the class a text belongs to. The same argument applies to words that appear very rarely in the input corpus. Filtering them out leads to a better set of features that can help us categorize a text more accurately.
<br/> <b><i>min_df</i></b> can take integer or float values. Setting min_df=3 tells the CountVectorizer to ignore words that appear in fewer than 3 documents. If float, the parameter represents a proportion of documents; if integer, absolute counts.
<br/> <b><i>max_df</i></b> ignores terms that have a document frequency strictly higher than the given threshold (corpus-specific stop words). If float, the parameter represents a proportion of documents; if integer, absolute counts.
<br/> An <b><i>n-gram</i></b> is a contiguous sequence of n items from a given text. An n-gram representation can make more sense when analyzing large chunks of text, as it captures <a href='https://en.wikipedia.org/wiki/Collocation'>collocations</a>. Bigram or trigram features may give us features such as <b>"United States"</b> and <b>"US Vice President"</b>, which could improve classifier performance significantly compared to unigram tokens such as <b>["United", "States", "US", "Vice", "President"]</b>. We will use unigram, bigram and trigram tokens by setting the <b>ngram_range</b> parameter to the tuple (1,3).
End of explanation
"""
from sklearn.feature_extraction.text import TfidfTransformer
print '[process started: ' + str(datetime.datetime.now()) + ']'
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_count)
print '[process ended: ' + str(datetime.datetime.now()) + ']'
print X_train_tfidf.shape
from sklearn.naive_bayes import MultinomialNB
print ' [Classification Started: ' + str(datetime.datetime.now()) + ']'
# you fit the NB model
clf = MultinomialNB().fit(X_train_tfidf, training_data.target)
print ' [Classification ended: ' + str(datetime.datetime.now()) + ']'
from __future__ import division
#load test data
test_data= load_files('/media/arvindramesh/CCF02769F02758CA/TextClassification-NewsGroup/test/',shuffle=True, encoding='ISO-8859-2')
# variable to track the accuracy
count=0
print ' [Classification Started: ' + str(datetime.datetime.now()) + ']'
# Iterate over the test document file by file
for i in range(0,len(test_data.filenames)):
docs_test = [test_data.data[i]]
# Apply the count vectorizer we used to fit the training data on the test data
doc_test_counts = count_vect.transform(docs_test)
# apply the tfidf transformation
doc_test_tfidf = tfidf_transformer.transform(doc_test_counts)
predicted = clf.predict(doc_test_tfidf)
# Predicted Target label
predicted_label=training_data.target_names[predicted]
# True Target label of test document
true_label=test_data.target_names[test_data.target[i]]
# calculate the accuracy
if predicted_label==true_label:
count+=1
print ' [Classification Ended: ' + str(datetime.datetime.now()) + ']'
print "ACCURACY : " + str(count/len(test_data.filenames))
"""
Explanation: Notice how the number of features has gone up from <b>25834</b> in the previous case to <b>55469</b>. This is because of using ngram features. We now use TFIDF on the word counts.
End of explanation
"""
from __future__ import division
#load test data
test_data= load_files('/media/arvindramesh/CCF02769F02758CA/TextClassification-NewsGroup/test/',shuffle=True, encoding='ISO-8859-2')
# variable to track the accuracy
count=0
print ' [Classification Started: ' + str(datetime.datetime.now()) + ']'
# Let's create a dictionary that stores the top 5 predictions for every test document
# Hence the key will be the document number and the value will be a list containing top 5 predictions
NBPredictions={}
# Iterate over the test document file by file
for i in range(0,len(test_data.filenames)):
docs_test = [test_data.data[i]]
# Apply the count vectorizer we used to fit the training data on the test data
doc_test_counts = count_vect.transform(docs_test)
# apply the tfidf transformation
doc_test_tfidf = tfidf_transformer.transform(doc_test_counts)
probability = clf.predict_proba(doc_test_tfidf).flatten()
#print probability
# sort according to the probability value in descending order
a= (-probability).argsort()[:5]
#create a list of predicted labels
predicted=[]
for target in a:
predicted.append(training_data.target_names[target])
true_label = test_data.target_names[test_data.target[i]]
NBPredictions[i]=predicted
if true_label in predicted:
count+=1
print ' [Classification Ended: ' + str(datetime.datetime.now()) + ']'
print "ACCURACY : " + str(count/len(test_data.filenames))
"""
Explanation: From 64 to 67 % Accuracy
<img src='http://cdn.meme.am/images/300x/12515644.jpg'/>
<br/>
Now there aren't many further tweaks you can make to improve the accuracy of Naive Bayes. Though alternate models such as Logistic Regression and Stochastic Gradient Descent may give better results, I want to see if I can improve the accuracy of the existing model using some simple techniques.
<br/> Instead of predicting one target label, we can tweak the program to maybe output the top 5 (say) predictions for every test document based on the probability value. This is quite easy to implement on the existing code and requires us to use the <b><i>predict_proba</i></b> function instead of the <b><i>predict</i></b> function in MultinomialNB. I'll show you in a second how to do this.
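The top-k trick itself is just an argsort on the probability vector; here is a minimal sketch with a made-up probability array:

```python
import numpy as np

# hypothetical class probabilities from predict_proba for one document
probability = np.array([0.05, 0.40, 0.10, 0.25, 0.20])
# negate so that argsort (ascending) yields indices in descending probability
top3 = (-probability).argsort()[:3]
print(top3)  # [1 3 4]
```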
End of explanation
"""
# Print the top 5 predictions made for the first document in the test set by MultinomialNB model
NBPredictions[0]
"""
Explanation: WOAH!
We see here that the accuracy of the classifier is now 82.67% for the top 5 predictions. However, in a real text classification system you would need human intervention to choose one target label out of the top 5 predictions, which may be cumbersome. Our idea here is to automate the decision making, so printing the top 5 predictions may not be a good idea.
<br/> What we will do now is apply the <b>Latent Semantic Indexing (LSI)</b> similarity measure between the test document and the training documents that belong to the classes predicted by the Naive Bayes model (i.e. the top 5 predictions that we made earlier).
LSI is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between those terms that occur in similar contexts. It is called Latent Semantic Indexing because of its ability to correlate semantically related terms that are latent in a collection of text. The method, also called latent semantic analysis (LSA), uncovers the underlying latent semantic structure in the usage of words in a body of text and how it can be used to extract the meaning of the text in response to user queries, commonly referred to as concept searches. Queries, or concept searches, against a set of documents that have undergone LSI will return results that are conceptually similar in meaning to the search criteria even if the results don’t share a specific word or words with the search criteria. *<a href='https://en.wikipedia.org/wiki/Latent_semantic_indexing'>Source</a>
<br/>
Fortunately <a href='https://radimrehurek.com/gensim/'><b><i>Gensim</i></b></a> comes to our rescue here. Gensim is a python package by Radim Rehurek which is mainly used for statistical semantic analysis and topic modelling. Gensim provides us with a nice interface to carry out LSI similarity measure on the input corpus.
End of explanation
"""
cosine_data = load_files('/media/arvindramesh/CCF02769F02758CA/TextClassification-NewsGroup/training/',
categories=NBPredictions[0])
print "Loaded " + str(len(cosine_data.filenames)) + " Files belonging to " + str(len(cosine_data.target_names)) +" Classes "
print "Classes are " + str(list(NBPredictions[0]))
"""
Explanation: While doing the LSI similarity lookup, we will load only those categories from the training set that appear in the top 5 predictions, i.e. for Test Document 0, since we know that the top 5 predictions are ['acq', 'earn', 'crude', 'trade', 'interest'], we'll load only training data that belongs to these categories. The <b><i>sklearn.datasets.load_files</i></b> function provides a <b>categories</b> parameter to do this. Categories takes in a list of classes and loads training data only from those classes.
End of explanation
"""
# to print the gensim logging in ipython notebook
import logging
logging.basicConfig(format='%(levelname)s : %(message)s', level=logging.INFO)
logging.root.level = logging.INFO # ipython sometimes messes up the logging setup; restore
import gensim
import os
from os.path import join
from gensim import utils
import datetime, re, sys
from nltk.tokenize import RegexpTokenizer
from gensim import similarities,models, matutils
from gensim.similarities import MatrixSimilarity, SparseMatrixSimilarity, Similarity
from gensim import corpora
tokenizer = RegexpTokenizer(r'[a-zA-Z0-9]+')
# Tokenizer
def text_tokenize(text):
tokens = tokenizer.tokenize(text.lower())
# return the transformed corpus
out=[]
for token in tokens:
# we filter out tokens that exist in stopwords list
if token not in stopwords_list:
out.append(token)
return out # returns a list
class MyCorpus(gensim.corpora.TextCorpus):
def get_texts(self):
for filename in self.input:
yield text_tokenize(open(filename).read())
"""
Explanation: Now let us put it all together.
Gensim
Note that the corpus that we loaded using <b><i>load_files</i></b> above resides fully in memory, as a plain Python Bunch. When the input corpus becomes very big, with millions of documents, storing all of them in RAM may be a bad design decision and could crash the system. Gensim overcomes this by using a technique called streaming. With streaming, documents are loaded and processed in memory one at a time. Although the output is the same as for the plain Python Bunch, the corpus is now much more memory friendly, because at most one vector resides in RAM at a time. The implementation of a gensim corpus class is shown below:
End of explanation
"""
import sys
accCount=0 # Keep track of the accuracy
# Iterate over the test documents
# to iterate over the entire test_data replace range(0,50) with range(0,len(test_data.filenames))
for i in range(0,50):
print "CLASSIFICATION " + str(i)
# Root path where training data exists
root_path='/media/arvindramesh/CCF02769F02758CA/TextClassification-NewsGroup/training/'
filename=[] # list containing absolute path name of all the filenames
targetvar=[] # list that contains target label
for j in NBPredictions[i]: # pull up the top5 predictions
container_path=join(root_path,j) # container path is root_path/class_name
for root, dirs, files in os.walk(container_path, topdown=True):
for name in files:
targetvar.append(root[root.rfind('/')+1:])
filename.append(join(root,name))
# import gensim here to carry out LSI
# serialize the corpus and store it as a .mm file
mycorpus= MyCorpus(filename)
corpora.MmCorpus.serialize('/home/arvindramesh/Desktop/Internship/Experimental/corpus.mm', mycorpus)
corpus= corpora.MmCorpus('/home/arvindramesh/Desktop/Internship/Experimental/corpus.mm')
dictionary = mycorpus.dictionary
# apply tfidf to the corpus
tfidf = models.TfidfModel(corpus,id2word=dictionary)
# Apply LSI to tfidf of corpus
lsi = models.LsiModel(tfidf[corpus],id2word=dictionary)
# create a index for fast lookup
index = similarities.SparseMatrixSimilarity(lsi[tfidf[corpus]],corpus.num_terms)
# Testing
tokens= tokenizer.tokenize(test_data.data[i])# tokenize the data
    vec_bow = dictionary.doc2bow(tokens) # bag-of-words representation of the data
sims = index[lsi[tfidf[vec_bow]]] # compute the lsi similarity measure
a= (sorted(enumerate(sims),key=lambda item: -item[1])) # sort the similarity measure
# a now contains a list of tuples that contains the (document id,similarity score)
predicted = targetvar[a[0][0]]
true = test_data.target_names[test_data.target[i]]
print "Predicted : " + predicted
print " True: " + true
sys.stdout.flush()
if true==predicted:
accCount+=1
print "ACCURACY : " + str(accCount/50)
"""
Explanation: To stream the input corpus we store the absolute paths of the files in a list and pass that list to MyCorpus; its <b><i>get_texts</i></b> method implements the streaming interface.
NOTE
I am going to run it only for the first 50 files in the test_data to show you the output. Iterating through the entire test set may take quite some time.
End of explanation
"""
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import SGDClassifier
print '[process started: ' + str(datetime.datetime.now()) + ']'
count_vect= CountVectorizer(analyzer = "word", stop_words= set(stopwords_list), tokenizer=text_tokenize, min_df=3,
max_df=0.5,ngram_range=(1,3),lowercase=True)
X_train_count = count_vect.fit_transform(training_data.data)
print '[process ended: ' + str(datetime.datetime.now()) + ']'
print "Transformed " + str(X_train_count.shape[0]) + " documents with " + str(X_train_count.shape[1]) + " Features"
from sklearn.feature_extraction.text import TfidfTransformer
print '[process started: ' + str(datetime.datetime.now()) + ']'
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_count)
print '[process ended: ' + str(datetime.datetime.now()) + ']'
print X_train_tfidf.shape
"""
Explanation: Observations made from LSI
LSI is a very interesting technique to use while classifying text. LSI has not given us enhanced performance compared to MultinomialNB, but it has helped establish some interesting similarity relationships between classes. <br/>
For a test document that belongs to the <b>meal-feed</b> class, the LSI model predicts <b>corn</b>. On closer inspection we find that the predicted class makes intuitive sense, and in fact we are not completely wrong to classify it as meal-feed, as corn is in fact a primary meal source in some countries such as China, Brazil and Mexico. Similarly <b>Bop (balance of payments)</b> is very close to <b>trade</b>, and the same goes for <b>Crude</b> and <b>nat-gas</b> (natural gas). Since these classes are semantically very close to each other, any misclassification arising out of such a scenario would not be very expensive. Though the <b>accuracy</b> of the LSI model is <b>72%</b>, its predictions are very intuitive and can make good business sense when deployed in real-time systems.
Wait but why go through all this trouble?
Well, I hear you! There are some interesting models such as <b><i> Logistic Regression </i></b> and <b><i> Stochastic Gradient Descent </i></b> at your disposal. These methods are very popular in the literature. However, I wanted to stick with Naive Bayes as it is very simple and elegant.
<br/> In my pursuit to make the NB model work, I explored some very interesting NLP concepts such as Stemmers and Lemmatizers, Collocations, Document Summarizers (which I will try to cover in a separate tutorial soon) and the Stanford NER (Named Entity Recognizer). NER helps identify what's "important" in a text document. Also called entity extraction, this process involves automatically extracting the names of persons, places, organizations, and potentially other entity types out of unstructured text. Building an NER classifier requires lots of annotated training data and some fancy machine learning algorithms. These helped me build a news article summarizer that I am currently working on. The summarizer was made with the intention of summarizing large chunks of text in order to reduce the dimensionality of datasets for the classifier models. I hope to add other capabilities and make a nifty text summarizer to summarize those boring lecture notes ;)
<br/>
I also experimented with the <b>gensim</b> package which helped me try out the LSI model on the input with promising results. Deep Learning techniques such as <a href='https://code.google.com/p/word2vec/'><b>Google's Word2Vec</b></a> are also interesting alternatives for the bow of word representation that you can use on the existing model.
Beyond Naive Bayes- Logistic Regression and Stochastic Gradient Descent
Logistic regression is one of my personal favorite algorithms when it comes to multi-class classification. This model is simple and yet gives amazing results out of the box. The best starting point would be this <a href='https://en.wikipedia.org/wiki/Logistic_regression'> Wikipedia </a> entry. Fortunately for us, we have the Logistic Regression implementation in scikit-learn, which we will code up in just a minute. You can check out this <a href='http://scikit-learn.org/stable/modules/sgd.html'>page</a> on scikit-learn that talks about using Stochastic Gradient Descent as a classifier. So let's try both of them out!
End of explanation
"""
print ' [Classification Started: ' + str(datetime.datetime.now()) + ']'
# Fit the logistic regression and SGD
# For the logistic regression we will use the one vs rest classifier
clf1 = LogisticRegression(class_weight='auto',solver='newton-cg',multi_class='ovr').fit(X_train_tfidf, training_data.target)
clf2 = SGDClassifier(loss='log',class_weight='auto').fit(X_train_tfidf,training_data.target)
print ' [Classification ended: ' + str(datetime.datetime.now()) + ']'
"""
Explanation: Fit SGDClassifier and LogisticRegression
End of explanation
"""
from __future__ import division
#load test data
test_data= load_files('/media/arvindramesh/CCF02769F02758CA/TextClassification-NewsGroup/test/',shuffle=True, encoding='ISO-8859-2')
# variable to track the accuracy
Logisticount=0
SGDcount=0
print ' [Classification Started: ' + str(datetime.datetime.now()) + ']'
# Iterate over the test document file by file
for i in range(0,len(test_data.filenames)):
docs_test = [test_data.data[i]]
# Apply the count vectorizer we used to fit the training data on the test data
doc_test_counts = count_vect.transform(docs_test)
# apply the tfidf transformation
doc_test_tfidf = tfidf_transformer.transform(doc_test_counts)
predicted1 = clf1.predict(doc_test_tfidf)
predicted2 = clf2.predict(doc_test_tfidf)
# Predicted Target label
    predicted_label1=training_data.target_names[predicted1[0]]
    predicted_label2 = training_data.target_names[predicted2[0]]
# True Target label of test document
true_label=test_data.target_names[test_data.target[i]]
# calculate the accuracy
if predicted_label1==true_label:
Logisticount+=1
if predicted_label2==true_label:
SGDcount+=1
print ' [Classification Ended: ' + str(datetime.datetime.now()) + ']'
print "ACCURACY of LogisticRegression : " + str(Logisticount/len(test_data.filenames))
print "ACCURACY of SGDClassifier : " + str(SGDcount/len(test_data.filenames))
"""
Explanation: Test the classifier
End of explanation
"""
from __future__ import division
#load test data
test_data= load_files('/media/arvindramesh/CCF02769F02758CA/TextClassification-NewsGroup/test/',shuffle=True, encoding='ISO-8859-2')
# variable to track the accuracy
Logisticount=0
SGDcount=0
truelabel=[]# list to hold true label
Logistic=[] # list to hold predicted label of LogisticRegression Classifier
Sgd=[]# list to hold predicted label of SGDClassifier
print ' [Classification Started: ' + str(datetime.datetime.now()) + ']'
# Iterate over the test document file by file
for i in range(0,len(test_data.filenames)):
docs_test = [test_data.data[i]]
# Apply the count vectorizer we used to fit the training data on the test data
doc_test_counts = count_vect.transform(docs_test)
# apply the tfidf transformation
doc_test_tfidf = tfidf_transformer.transform(doc_test_counts)
predicted1 = clf1.predict(doc_test_tfidf)
predicted2 = clf2.predict(doc_test_tfidf)
# Predicted Target label
    predicted_label1=training_data.target_names[predicted1[0]]
    Logistic.append(predicted_label1)
    predicted_label2 = training_data.target_names[predicted2[0]]
Sgd.append(predicted_label2)
# True Target label of test document
true_label=test_data.target_names[test_data.target[i]]
truelabel.append(true_label)
# calculate the accuracy
if predicted_label1==true_label:
Logisticount+=1
if predicted_label2==true_label:
SGDcount+=1
print ' [Classification Ended: ' + str(datetime.datetime.now()) + ']'
# create a dictionary that maps class label with interger
# this will help us while calculating f1 scores
dictmap={}
for i in range(0,len(training_data.target_names)):
dictmap[str(training_data.target_names[i])]=i
# Create a vector of target for true and predicted label
y_true = [dictmap[str(x)] if dictmap.has_key(x) else 84 for x in truelabel]
y_pred_logistic = [dictmap[str(x)] for x in Logistic]
y_pred_SGD = [dictmap[str(x)] for x in Sgd]
print "Created Predicted Target labels"
"""
Explanation: Wohoooo!! We have a winner!!
<img src="http://cdn.meme.am/instances2/500x/1291494.jpg"/>
Other metrics for classifier performance
Accuracy is a good measure, but there are a few other metrics, such as the <b>F1 score</b>, that help us visualize the performance of a classifier better.
<br/> F1 Score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst score at 0. The relative contribution of precision and recall to the F1 score is equal.
<br/><b> Precision </b> measures whether predictions are correct, while <b> Recall </b> measures whether everything that should be predicted is predicted. Precision and recall tend to trade off against each other, and the F1 score balances the two. Scikit-learn has an implementation to compute the F1 score, where the formula is given by
<br/><code>F1 = 2*(precision * recall) / (precision + recall)</code>
End of explanation
"""
from sklearn.metrics import classification_report
print(classification_report(y_true, y_pred_logistic, target_names=training_data.target_names))
"""
Explanation: Classification Report for LogisticRegression
End of explanation
"""
from sklearn.metrics import classification_report
print(classification_report(y_true, y_pred_SGD, target_names=training_data.target_names))
"""
Explanation: Classification Report for SGDClassifier
End of explanation
"""
|
Danghor/Algorithms | Python/Chapter-09/Kruskal.ipynb | gpl-2.0 | %run Union-Find-OO.ipynb
"""
Explanation: Kruskal's Algorithm for Computing the Minimum Spanning Tree
In our implementation of Kruskal's algorithm for finding the
minimum spanning tree we use the union-find data structure that we have defined previously.
End of explanation
"""
import heapq as hq
"""
Explanation: Furthermore, we need a priority queue. The module heapq implements a priority queue on top of a plain Python list. The part
of the API from this module that we utilize is the following:
- hq.heappush(H, x) pushes x onto the heap H,
- hq.heappop(H) removes the smallest element from the heap H and returns this element,
- H = [] creates an empty heap.
End of explanation
"""
def mst(V, E):
UF = UnionFind(V)
MST = set() # minimum spanning tree, represented as set of weighted edges
H = [] # empty priority queue for weighted edges
for edge in E:
hq.heappush(H, edge)
while True:
w, (x, y) = hq.heappop(H)
root_x = UF.find(x)
root_y = UF.find(y)
if root_x != root_y:
MST.add((w, (x, y)))
UF.union(x, y)
if len(MST) == len(V) - 1:
return MST
"""
Explanation: The function $\texttt{mst}(V, E)$ takes a set of nodes $V$ and a set of weighted edges $E$ to compute a minimum spanning tree. It is assumed that the pair $(V, E)$ represents a weighted graph $G$ that is connected. The weighted edges in the set $E$ have the form
$$ \bigl\langle w, \langle x, y\rangle\bigr\rangle. $$
Here, $x$ and $y$ are nodes from the set $V$, while $w$ is the cost of the edge $\{x,y\}$.
The function call mst(V, E) returns a set of weighted edges that define a minimum spanning tree
of the weighted graph $G$. The function mst does not check whether $G$ is connected.
End of explanation
"""
def mst(V, E):
UF = UnionFind(V)
MST = set() # minimum spanning tree, represented as set of weighted edges
H = [] # empty priority queue for weighted edges
for edge in E:
hq.heappush(H, edge)
while True:
w, (x, y) = hq.heappop(H)
print(f'testing {x} - {y}, weight {w}')
root_x = UF.find(x)
root_y = UF.find(y)
if root_x != root_y:
print(f'connect {x} - {y}')
MST.add((w, (x, y)))
UF.union(x, y)
display(toDot(E, MST))
print('_' * 120)
if len(MST) == len(V) - 1:
return MST
import graphviz as gv
"""
Explanation: The implementation of mst that is given below traces its computation via graphviz.
End of explanation
"""
def toDot(E, H):
V = set()
for (_, (x, y)) in E:
V.add(x)
V.add(y)
dot = gv.Graph()
dot.attr(rankdir='LR')
for x in V:
dot.node(str(x))
for (w, (x, y)) in E:
if (w, (x, y)) in H:
dot.edge(str(x), str(y), label=str(w), color='blue', penwidth='2')
else:
dot.edge(str(x), str(y), label=str(w), style='dashed')
return dot
"""
Explanation: Given a set $E$ of weighted edges, the function $\texttt{toDot}$ transforms this set into a dot structure that can be displayed as a graph. The edges that are present in the set $H$ are assumed to be the edges that are part of the minimum spanning tree and therefore are highlighted.
End of explanation
"""
with open('tiny.txt', 'r') as f:
s = f.read()
print(s)
"""
Explanation: The file tiny.txt contains the description of a weighted graph. Every line in this file has the form:
x y w
Here x and y are numbers specifying nodes, while w is the weight of the edge {x, y}. The code given below displays the file tiny.txt.
End of explanation
"""
def demoFile(fn):
with open(fn, 'r') as file:
data = file.readlines()
Edges = set()
Nodes = set()
for line in data:
x, y, weight = line.split()
x, y, weight = int(x), int(y), int(weight)
Edges.add((weight, (x, y)))
Nodes.add(x)
Nodes.add(y)
MST = mst(Nodes, Edges);
print(MST)
return toDot(Edges, MST)
MST = demoFile('tiny.txt')
MST
"""
Explanation: The function demoFile(fn) takes a filename fn as its argument. The corresponding file is expected to hold the description of
an undirected weighted graph. The function computes the minimum spanning tree for this graph.
End of explanation
"""
|
kit-cel/lecture-examples | mloc/ch4_Deep_Learning/pytorch/Deep_NN_Detection_QAM.ipynb | gpl-2.0 | import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from ipywidgets import interactive
import ipywidgets as widgets
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print("We are using the following device for learning:",device)
"""
Explanation: QAM Demodulation in Nonlinear Channels with Deep Neural Networks
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates:
* demodulation of QAM symbols in highly nonlinear channels using an artificial neural network
* utilization of softmax layer
* variable batch size to improve learning towards lower error rates
End of explanation
"""
# Length of transmission (in km)
L = 5000
# fiber nonlinearity coefficient
gamma = 1.27
Pn = -21.3 # noise power (in dBm)
Kstep = 50 # number of steps used in the channel model
# noise variance per step
sigma_n = np.sqrt((10**((Pn-30)/10)) / Kstep / 2)
constellations = {'16-QAM': np.array([-3,-3,-3,-3,-1,-1,-1,-1,1,1,1,1,3,3,3,3]) + 1j*np.array([-3,-1,1,3,-3,-1,1,3,-3,-1,1,3,-3,-1,1,3]), \
'16-APSK': np.array([1,-1,0,0,1.4,1.4,-1.4,-1.4,3,-3,0,0,5,-5,0,0]) + 1j*np.array([0,0,1,-1,1.4,-1.4,1.4,-1.4,0,0,4,-4,0,0,6,-6]), \
'4-test' : np.array([-1,2,0,4]) + 1j*np.array([0,0,3,0])}
# permute constellations so that it is visually more appealing with the chosen colormap
for cname in constellations.keys():
constellations[cname] = constellations[cname][np.random.permutation(len(constellations[cname]))]
def simulate_channel(x, Pin, constellation):
    # map the symbols onto the constellation and scale to the input power
input_power_linear = 10**((Pin-30)/10)
norm_factor = 1 / np.sqrt(np.mean(np.abs(constellation)**2)/input_power_linear)
modulated = constellation[x] * norm_factor
temp = np.array(modulated, copy=True)
for i in range(Kstep):
power = np.absolute(temp)**2
rotcoff = (L / Kstep) * gamma * power
temp = temp * np.exp(1j*rotcoff) + sigma_n*(np.random.randn(len(x)) + 1j*np.random.randn(len(x)))
return temp
"""
Explanation: Specify the parameters of the transmission as the fiber length $L$ (in km), the fiber nonlinearity coefficient $\gamma$ (given in 1/W/km) and the total noise power $P_n$ (given in dBm; the noise is due to amplified spontaneous emission in amplifiers along the link). We assume a model of a dispersion-less fiber affected by nonlinearity. The model, which is described for instance in [1], is given by an iterative application of the equation
$$
x_{k+1} = x_k\exp\left(\jmath\frac{L}{K}\gamma|x_k|^2\right) + n_{k+1},\qquad 0 \leq k < K
$$
where $x_0$ is the channel input (the modulated, complex symbols) and $x_K$ is the channel output. $K$ denotes the number of steps taken to simulate the channel. Usually $K=50$ gives a good approximation.
[1] S. Li, C. Häger, N. Garcia, and H. Wymeersch, "Achievable Information Rates for Nonlinear Fiber Communication via End-to-end Autoencoder Learning," Proc. ECOC, Rome, Sep. 2018
End of explanation
"""
length_plot = 4000
def plot_constellation(Pin, constellation_name):
constellation = constellations[constellation_name]
t = np.random.randint(len(constellation),size=length_plot)
r = simulate_channel(t, Pin, constellation)
plt.figure(figsize=(12,6))
font = {'size' : 14}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
plt.subplot(1,2,1)
r_tx = constellation[range(len(constellation))]
plt.scatter(np.real(r_tx), np.imag(r_tx), c=range(len(constellation)), marker='o', s=200, cmap='tab20')
plt.xticks(())
plt.yticks(())
plt.axis('equal')
plt.xlabel(r'$\Re\{r\}$',fontsize=14)
plt.ylabel(r'$\Im\{r\}$',fontsize=14)
plt.title('Transmitted constellation')
plt.subplot(1,2,2)
plt.scatter(np.real(r), np.imag(r), c=t, cmap='tab20',s=4)
plt.xlabel(r'$\Re\{r\}$',fontsize=14)
plt.ylabel(r'$\Im\{r\}$',fontsize=14)
plt.axis('equal')
plt.title('Received constellation ($L = %d$\,km, $P_{in} = %1.2f$\,dBm)' % (L, Pin))
#plt.savefig('%s_received_zd_%1.2f.pdf' % (constellation_name.replace('-','_'),Pin),bbox_inches='tight')
interactive_update = interactive(plot_constellation, \
Pin = widgets.FloatSlider(min=-10.0,max=10.0,step=0.1,value=1, continuous_update=False, description='Input Power Pin (dBm)', style={'description_width': 'initial'}, layout=widgets.Layout(width='50%')), \
constellation_name = widgets.RadioButtons(options=['16-QAM','16-APSK','4-test'], value='16-QAM',continuous_update=False,description='Constellation'))
output = interactive_update.children[-1]
output.layout.height = '400px'
interactive_update
"""
Explanation: We consider QAM transmission (e.g., 16-QAM or 16-APSK) over this channel.
Show constellation as a function of the fiber input power. When the input power is small, the effect of the nonlinearity is small (as $\jmath\frac{L}{K}\gamma|x_k|^2 \approx 0$) and the transmission is dominated by the additive noise. If the input power becomes larger, the effect of the noise (the noise power is constant) becomes less pronounced, but the constellation rotates due to the larger input power and hence effect of the nonlinearity.
End of explanation
"""
# helper function to compute the symbol error rate
def SER(predictions, labels):
return (np.sum(np.argmax(predictions, 1) != labels) / predictions.shape[0])
"""
Explanation: Helper function to compute Bit Error Rate (BER)
End of explanation
"""
# set input power
Pin = -5
#define constellation
constellation = constellations['16-APSK']
input_power_linear = 10**((Pin-30)/10)
norm_factor = 1 / np.sqrt(np.mean(np.abs(constellation)**2)/input_power_linear)
sigma = np.sqrt((10**((Pn-30)/10)) / Kstep / 2)
constellation_mat = np.stack([constellation.real * norm_factor, constellation.imag * norm_factor],axis=1)
# validation set. Training examples are generated on the fly
N_valid = 100000
# number of neurons in hidden layers
hidden_neurons_1 = 50
hidden_neurons_2 = 50
hidden_neurons_3 = 50
hidden_neurons_4 = 50
y_valid = np.random.randint(len(constellation),size=N_valid)
r = simulate_channel(y_valid, Pin, constellation)
# find extension of data (for normalization and plotting)
ext_x = max(abs(np.real(r)))
ext_y = max(abs(np.imag(r)))
ext_max = max(ext_x,ext_y)*1.2
# scale data to be between 0 and 1
X_valid = torch.from_numpy(np.column_stack((np.real(r), np.imag(r))) / ext_max).float().to(device)
# meshgrid for plotting
mgx,mgy = np.meshgrid(np.linspace(-ext_max,ext_max,200), np.linspace(-ext_max,ext_max,200))
meshgrid = torch.from_numpy(np.column_stack((np.reshape(mgx,(-1,1)),np.reshape(mgy,(-1,1)))) / ext_max).float().to(device)
"""
Explanation: Here, we define the parameters of the neural network and the training, generate the validation set, and build a mesh grid used to visualize the decision regions
End of explanation
"""
class Receiver_Network(nn.Module):
def __init__(self, hidden_neurons_1, hidden_neurons_2, hidden_neurons_3, hidden_neurons_4):
super(Receiver_Network, self).__init__()
# Linear function, 2 input neurons (real and imaginary part)
self.fc1 = nn.Linear(2, hidden_neurons_1)
# Non-linearity
self.activation_function = nn.ELU()
# Linear function (hidden layer)
self.fc2 = nn.Linear(hidden_neurons_1, hidden_neurons_2)
# Another hidden layer
self.fc3 = nn.Linear(hidden_neurons_2, hidden_neurons_3)
# Another hidden layer
self.fc4 = nn.Linear(hidden_neurons_3, hidden_neurons_4)
# Output layer
self.fc5 = nn.Linear(hidden_neurons_4, len(constellation))
def forward(self, x):
# Linear function, first layer
out = self.fc1(x)
# Non-linearity, first layer
out = self.activation_function(out)
# Linear function, second layer
out = self.fc2(out)
# Non-linearity, second layer
out = self.activation_function(out)
# Linear function, third layer
out = self.fc3(out)
# Non-linearity, third layer
out = self.activation_function(out)
# Linear function, fourth layer
out = self.fc4(out)
# Non-linearity, fourth layer
out = self.activation_function(out)
# Linear function, output layer
out = self.fc5(out)
# Do *not* apply softmax, as it is already included in the CrossEntropyLoss
return out
"""
Explanation: This is the main neural network with 4 hidden layers, each using an ELU activation function. Note that the final layer does not apply a softmax function, as it is already included in the CrossEntropyLoss.
End of explanation
"""
model = Receiver_Network(hidden_neurons_1, hidden_neurons_2, hidden_neurons_3, hidden_neurons_4)
model.to(device)
# Cross Entropy loss accepting logits at input
loss_fn = nn.CrossEntropyLoss()
# Adam Optimizer
optimizer = optim.Adam(model.parameters())
# Softmax function
softmax = nn.Softmax(dim=1)
num_epochs = 100
batches_per_epoch = 500
# increase batch size while learning from 100 up to 10000
batch_size_per_epoch = np.linspace(100,10000,num=num_epochs)
validation_SERs = np.zeros(num_epochs)
decision_region_evolution = []
constellation_tensor = torch.from_numpy(constellation_mat).float().to(device)
for epoch in range(num_epochs):
batch_labels = torch.empty(int(batch_size_per_epoch[epoch]), device=device)
noise = torch.empty((int(batch_size_per_epoch[epoch]),2), device=device, requires_grad=False)
for step in range(batches_per_epoch):
# sample new mini-batch directory on the GPU (if available)
batch_labels.random_(len(constellation))
temp_onehot = torch.zeros(int(batch_size_per_epoch[epoch]), len(constellation), device=device)
temp_onehot[range(temp_onehot.shape[0]), batch_labels.long()]=1
# channel simulation directly on the GPU
qam = (temp_onehot @ constellation_tensor).to(device)
for i in range(Kstep):
power = torch.norm(qam, dim=1) ** 2
rotcoff = (L / Kstep) * gamma * power
noise.normal_(mean=0, std=sigma) # sample noise
# phase rotation due to nonlinearity
temp1 = qam[:,0] * torch.cos(rotcoff) - qam[:,1] * torch.sin(rotcoff)
temp2 = qam[:,0] * torch.sin(rotcoff) + qam[:,1] * torch.cos(rotcoff)
qam = torch.stack([temp1, temp2], dim=1) + noise
qam = qam / ext_max
outputs = model(qam)
# compute loss
loss = loss_fn(outputs.squeeze(), batch_labels.long())
# compute gradients
loss.backward()
optimizer.step()
# reset gradients
optimizer.zero_grad()
# compute validation SER
out_valid = softmax(model(X_valid))
validation_SERs[epoch] = SER(out_valid.detach().cpu().numpy(), y_valid)
print('Validation SER after epoch %d: %f (loss %1.8f)' % (epoch, validation_SERs[epoch], loss.detach().cpu().numpy()))
# store decision region for generating the animation
mesh_prediction = softmax(model(meshgrid))
decision_region_evolution.append(0.195*mesh_prediction.detach().cpu().numpy() + 0.4)
"""
Explanation: This is the main learning function, generate the data directly on the GPU (if available) and the run the neural network. We use a variable batch size that varies during training. In the first iterations, we start with a small batch size to rapidly get to a working solution. The closer we come towards the end of the training we increase the batch size. If keeping the batch size small, it may happen that there are no misclassifications in a small batch and there is no incentive of the training to improve. A larger batch size will most likely contain errors in the batch and hence there will be incentive to keep on training and improving.
Here, the data is generated on the fly inside the graph, by using PyTorchs random number generation. As PyTorch does not natively support complex numbers (at least in early versions), we decided to replace the complex number operations in the channel by an equivalent simple rotation matrix and treating real and imaginary parts separately.
We employ the Adam optimization algorithm. Here, the epochs are not defined in the classical way, as we do not have a training set per se. We generate new data on the fly and never reuse data.
End of explanation
"""
cmap = matplotlib.cm.tab20
base = plt.cm.get_cmap(cmap)
color_list = base.colors
new_color_list = [[t/2 + 0.5 for t in color_list[k]] for k in range(len(color_list))]
# find minimum SER from validation set
min_SER_iter = np.argmin(validation_SERs)
plt.figure(figsize=(16,8))
plt.subplot(121)
#plt.contourf(mgx,mgy,decision_region_evolution[-1].reshape(mgy.shape).T,cmap='coolwarm',vmin=0.3,vmax=0.7)
plt.scatter(X_valid.cpu()[:,0]*ext_max, X_valid.cpu()[:,1]*ext_max, c=y_valid, cmap='tab20',s=4)
plt.axis('scaled')
plt.xlabel(r'$\Re\{r\}$',fontsize=16)
plt.ylabel(r'$\Im\{r\}$',fontsize=16)
plt.xlim((-ext_max,ext_max))
plt.ylim((-ext_max,ext_max))
plt.title('Received constellation',fontsize=16)
#light_tab20 = cmap_map(lambda x: x/2 + 0.5, matplotlib.cm.tab20)
plt.subplot(122)
decision_scatter = np.argmax(decision_region_evolution[min_SER_iter], 1)
plt.scatter(meshgrid.cpu()[:,0] * ext_max, meshgrid.cpu()[:,1] * ext_max, c=decision_scatter, cmap=matplotlib.colors.ListedColormap(colors=new_color_list),s=4)
plt.scatter(X_valid.cpu()[0:4000,0]*ext_max, X_valid.cpu()[0:4000,1]*ext_max, c=y_valid[0:4000], cmap='tab20',s=4)
plt.axis('scaled')
plt.xlim((-ext_max,ext_max))
plt.ylim((-ext_max,ext_max))
plt.xlabel(r'$\Re\{r\}$',fontsize=16)
plt.ylabel(r'$\Im\{r\}$',fontsize=16)
plt.title('Decision region after learning',fontsize=16)
#plt.savefig('decision_region_16APSK_Pin%d.pdf' % Pin,bbox_inches='tight')
"""
Explanation: Plot the decision region and a scatter plot of the validation set. Note that the validation set is only used for computing SERs and plotting; there is no feedback into the training!
End of explanation
"""
%matplotlib notebook
# Generate animation
from matplotlib import animation, rc
from matplotlib.animation import PillowWriter # Disable if you don't want to save any GIFs.
font = {'size' : 18}
plt.rc('font', **font)
fig, ax = plt.subplots(1, figsize=(8,8))
ax.axis('scaled')
written = False
def animate(i):
ax.clear()
decision_scatter = np.argmax(decision_region_evolution[i], 1)
plt.scatter(meshgrid.cpu()[:,0] * ext_max, meshgrid.cpu()[:,1] * ext_max, c=decision_scatter, cmap=matplotlib.colors.ListedColormap(colors=new_color_list),s=4, marker='s')
plt.scatter(X_valid.cpu()[0:4000,0]*ext_max, X_valid.cpu()[0:4000,1]*ext_max, c=y_valid[0:4000], cmap='tab20',s=4)
ax.set_xlim(( -ext_max, ext_max))
ax.set_ylim(( -ext_max, ext_max))
ax.set_xlabel(r'$\Re\{r\}$',fontsize=18)
ax.set_ylabel(r'$\Im\{r\}$',fontsize=18)
anim = animation.FuncAnimation(fig, animate, frames=min_SER_iter+1, interval=200, blit=False)
fig.show()
#anim.save('learning_decision_16APSK_Pin%d_varbatch.gif' % Pin, writer=PillowWriter(fps=5))
"""
Explanation: Generate animation and save as a gif.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/cmcc/cmip6/models/cmcc-cm2-hr4/aerosol.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-hr4', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-CM2-HR4
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:49
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosol model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
adrn/tutorials | notebooks/synthetic-images/synthetic-images.ipynb | cc0-1.0 | from astropy.utils.data import download_file
from astropy.io import fits
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.wcs import WCS
from astropy.convolution import Gaussian2DKernel
from astropy.modeling.models import Lorentz1D
from astropy.convolution import convolve_fft
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
"""
Explanation: Synthetic Images from simulated data
Authors
Yi-Hao Chen, Sebastian Heinz, Kelle Cruz, Stephanie T. Douglas
Learning Goals
Assign WCS astrometry to an image using astropy.wcs
Construct a PSF using astropy.modeling.model
Convolve raw data with PSF using astropy.convolution
Calculate polarization fraction and angle from Stokes I, Q, U data
Overplot quivers on the image
Keywords
modeling, convolution, coordinates, WCS, FITS, radio astronomy, matplotlib, colorbar
Summary
In this tutorial, we will:
1. Load and examine the FITS file
2. Set up astrometry coordinates
3. Prepare a Point Spread Function (PSF)
3.a How to do this without astropy kernels
4. Convolve image with PSF
5. Convolve Stokes Q and U images
6. Calculate polarization angle and fraction for quiver plot
End of explanation
"""
file_i = download_file(
'http://data.astropy.org/tutorials/synthetic-images/synchrotron_i_lobe_0700_150MHz_sm.fits',
cache=True)
hdulist = fits.open(file_i)
hdulist.info()
hdu = hdulist['NN_EMISSIVITY_I_LOBE_150.0MHZ']
hdu.header
"""
Explanation: 1. Load and examine the FITS file
Here we begin with a 2-dimensional data that were stored in FITS format from some simulations. We have Stokes I, Q, and U maps. We we'll first load a FITS file and examine the header.
End of explanation
"""
print(hdu.data.max())
print(hdu.data.min())
np.seterr(divide='ignore') #suppress the warnings raised by taking log10 of data with zeros
plt.hist(np.log10(hdu.data.flatten()), range=(-3, 2), bins=100);
"""
Explanation: We can see this FITS file, which was created in yt, has x and y coordinates in physical units (cm). We want to convert them into sky coordinates. Before we proceed, let's find out the range of the data and plot a histogram.
End of explanation
"""
fig = plt.figure(figsize=(6,12))
fig.add_subplot(111)
# We plot it in log-scale and add a small number to avoid nan values.
plt.imshow(np.log10(hdu.data+1E-3), vmin=-1, vmax=1, origin='lower')
"""
Explanation: Once we know the range of the data, we can do a visualization with the proper range (vmin and vmax).
End of explanation
"""
# distance to the object
dist_obj = 200*u.Mpc
# We have the RA in hh:mm:ss and DEC in dd:mm:ss format.
# We will use Skycoord to convert them into degrees later.
ra_obj = '19h59m28.3566s'
dec_obj = '+40d44m02.096s'
"""
Explanation: 2. Set up astrometry coordinates
From the header, we know that the x and y axes are in centimeter. However, in an observation we usually have RA and Dec. To convert physical units to sky coordinates, we will need to make some assumptions about where the object is located, i.e. the distance to the object and the central RA and Dec.
End of explanation
"""
cdelt1 = ((hdu.header['CDELT1']*u.cm/dist_obj.to('cm'))*u.rad).to('deg')
cdelt2 = ((hdu.header['CDELT2']*u.cm/dist_obj.to('cm'))*u.rad).to('deg')
print(cdelt1, cdelt2)
"""
Explanation: Here we convert the pixel scale from cm to degree by dividing the distance to the object.
End of explanation
"""
w = WCS(naxis=2)
# reference pixel coordinate
w.wcs.crpix = [hdu.data.shape[0]/2,hdu.data.shape[1]/2]
# sizes of the pixel in degrees
w.wcs.cdelt = [-cdelt1.base, cdelt2.base]
# converting ra and dec into degrees
c = SkyCoord(ra_obj, dec_obj)
w.wcs.crval = [c.ra.deg, c.dec.deg]
# the units of the axes are in degrees
w.wcs.cunit = ['deg', 'deg']
"""
Explanation: Use astropy.wcs.WCS to prepare a FITS header.
End of explanation
"""
wcs_header = w.to_header()
hdu.header.update(wcs_header)
"""
Explanation: Now we can convert the WCS coordinate into header and update the hdu.
End of explanation
"""
hdu.header
wcs = WCS(hdu.header)
fig = plt.figure(figsize=(6,12))
fig.add_subplot(111, projection=wcs)
plt.imshow(np.log10(hdu.data+1e-3), vmin=-1, vmax=1, origin='lower')
plt.xlabel('RA')
plt.ylabel('Dec')
"""
Explanation: Let's take a look at the header. CDELT1, CDELT2, CUNIT1, CUNIT2, CRVAL1, and CRVAL2 are in sky coordinates now.
End of explanation
"""
# assume our telescope has 1 arcsecond resolution
telescope_resolution = 1*u.arcsecond
# calculate the sigma in pixels.
# since cdelt is in degrees, we use _.to('deg')
sigma = telescope_resolution.to('deg')/cdelt2
# By default, the Gaussian kernel will go to 4 sigma
# in each direction
psf = Gaussian2DKernel(sigma)
# let's take a look:
plt.imshow(psf.array.value)
"""
Explanation: Now we have the sky coordinate for the image!
3. Prepare a Point Spread Function (PSF)
Simple PSFs are included in astropy.convolution.kernel. We'll use astropy.convolution.Gaussian2DKernel here.
First we need to set the telescope resolution. For a 2D Gaussian, we can calculate sigma in pixels by using our pixel scale keyword cdelt2 from above.
End of explanation
"""
# set FWHM and psf grid
telescope_resolution = 1*u.arcsecond
gamma = telescope_resolution.to('deg')/cdelt2
x_grid = np.outer(np.linspace(-gamma*4,gamma*4,int(8*gamma)),np.ones(int(8*gamma)))
r_grid = np.sqrt(x_grid**2 + np.transpose(x_grid**2))
lorentzian = Lorentz1D(fwhm=2*gamma)
# extrude a 2D azimuthally symmetric PSF
lorentzian_psf = lorentzian(r_grid)
# normalization
lorentzian_psf /= np.sum(lorentzian_psf)
# let's take a look again:
plt.imshow(lorentzian_psf.value, interpolation='none')
"""
Explanation: 3.a How to do this without astropy kernels
Maybe your PSF is more complicated. Here's an alternative way to do this, using a 2D Lorentzian
End of explanation
"""
convolved_image = convolve_fft(hdu.data, psf, boundary='wrap')
# Put a psf at the corner of the image
delta_x_psf=100 # number of pixels from the edges
xmin, xmax = -psf.shape[1]-delta_x_psf, -delta_x_psf
ymin, ymax = delta_x_psf, delta_x_psf+psf.shape[0]
convolved_image[xmin:xmax, ymin:ymax] = psf.array/psf.array.max()*10
"""
Explanation: 4. Convolve image with PSF
Here we use astropy.convolution.convolve_fft to convolve the image. This routine uses a Fourier transform for faster calculation, and is particularly fast here since our data is $2^n$ sized. Using an FFT, however, causes boundary effects, so we need to specify how we want to handle the boundary. Here we choose to "wrap" the data, which means making the data periodic.
End of explanation
"""
wcs = WCS(hdu.header)
fig = plt.figure(figsize=(8,12))
i_plot = fig.add_subplot(111, projection=wcs)
plt.imshow(np.log10(convolved_image+1e-3), vmin=-1, vmax=1.0, origin='lower')#, cmap=plt.cm.viridis)
plt.xlabel('RA')
plt.ylabel('Dec')
plt.colorbar()
"""
Explanation: Now let's take a look at the convolved image.
End of explanation
"""
hdulist.info()
file_q = download_file(
'http://data.astropy.org/tutorials/synthetic-images/synchrotron_q_lobe_0700_150MHz_sm.fits',
cache=True)
hdulist = fits.open(file_q)
hdu_q = hdulist['NN_EMISSIVITY_Q_LOBE_150.0MHZ']
file_u = download_file(
'http://data.astropy.org/tutorials/synthetic-images/synchrotron_u_lobe_0700_150MHz_sm.fits',
cache=True)
hdulist = fits.open(file_u)
hdu_u = hdulist['NN_EMISSIVITY_U_LOBE_150.0MHZ']
# Update the header with the wcs_header we created earlier
hdu_q.header.update(wcs_header)
hdu_u.header.update(wcs_header)
# Convolve the images with the the psf
convolved_image_q = convolve_fft(hdu_q.data, psf, boundary='wrap')
convolved_image_u = convolve_fft(hdu_u.data, psf, boundary='wrap')
"""
Explanation: 5. Convolve Stokes Q and U images
End of explanation
"""
wcs = WCS(hdu.header)
fig = plt.figure(figsize=(16,12))
fig.add_subplot(121, projection=wcs)
plt.imshow(convolved_image_q, cmap='seismic', vmin=-0.5, vmax=0.5, origin='lower')#, cmap=plt.cm.viridis)
plt.xlabel('RA')
plt.ylabel('Dec')
plt.colorbar()
fig.add_subplot(122, projection=wcs)
plt.imshow(convolved_image_u, cmap='seismic', vmin=-0.5, vmax=0.5, origin='lower')#, cmap=plt.cm.viridis)
plt.xlabel('RA')
plt.ylabel('Dec')
plt.colorbar()
"""
Explanation: Let's plot the Q and U images.
End of explanation
"""
# First, we plot the background image
fig = plt.figure(figsize=(8,16))
i_plot = fig.add_subplot(111, projection=wcs)
i_plot.imshow(np.log10(convolved_image+1e-3), vmin=-1, vmax=1, origin='lower')
# ranges of the axis
xx0, xx1 = i_plot.get_xlim()
yy0, yy1 = i_plot.get_ylim()
# binning factor
factor = [64, 66]
# re-binned number of points in each axis
nx_new = convolved_image.shape[1] // factor[0]
ny_new = convolved_image.shape[0] // factor[1]
# These are the positions of the quivers
X,Y = np.meshgrid(np.linspace(xx0,xx1,nx_new,endpoint=True),
np.linspace(yy0,yy1,ny_new,endpoint=True))
# bin the data
I_bin = convolved_image.reshape(nx_new, factor[0], ny_new, factor[1]).sum(3).sum(1)
Q_bin = convolved_image_q.reshape(nx_new, factor[0], ny_new, factor[1]).sum(3).sum(1)
U_bin = convolved_image_u.reshape(nx_new, factor[0], ny_new, factor[1]).sum(3).sum(1)
# polarization angle
psi = 0.5*np.arctan2(U_bin, Q_bin)
# polarization fraction
frac = np.sqrt(Q_bin**2+U_bin**2)/I_bin
# mask for low signal area
mask = I_bin < 0.1
frac[mask] = 0
psi[mask] = 0
pixX = frac*np.cos(psi) # X-vector
pixY = frac*np.sin(psi) # Y-vector
# keyword arguments for quiverplots
quiveropts = dict(headlength=0, headwidth=1, pivot='middle')
i_plot.quiver(X, Y, pixX, pixY, scale=8, **quiveropts)
"""
Explanation: 6. Calculate polarization angle and fraction for quiver plot
Note that rotating Stokes Q and U maps requires changing the signs of both. Here we assume that the Stokes q and u maps were calculated defining the y/declination axis as vertical, such that Q is positive for polarization vectors along the x/right-ascension axis.
End of explanation
"""
|
nikitaswinnen/model-for-predicting-rapid-response-team-events | Data Science Notebooks/Notebooks/EDA/rrt_reasons[EDA].ipynb | apache-2.0 | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import datetime as datetime
from impala.util import as_pandas
from collections import defaultdict
from operator import itemgetter
import cPickle as pickle
%matplotlib notebook
plt.style.use('ggplot')
from impala.dbapi import connect
conn = connect(host="mycluster.domain.com", port=my_impala_port_number)
cur = conn.cursor()
cur.execute("use my_db")
"""
Explanation: Explore the reasons RRT events are called
We have Jan 2015 - Aug 2016 data loaded in the cluster.
End of explanation
"""
def find_rrt_freq(reasons):
'''
reasons is a python list of RRT reasons -- 1 line for each RRT event.
Each entry is a different RRT event; there may be multiple reasons per event
output: a pandas dataframe with the counts for each reason.
'''
rrt_reasons = defaultdict(int)
for reason in reasons:
otherreason = ''
if reason.lower().startswith("other:"):
# if the line starts with "other" --> the only reason is the otherreason(s)
otherreason = reason.lower().split('other:')[1]
rrts = []
else:
# if the line contains "other:" or not
splitreason = reason.lower().split('other:')
if len(splitreason) > 1:
# if an "other" reason exists, process it differently
otherreason = splitreason[1] # text of the line after 'other:'
primaryreason = splitreason[0].strip().strip('"')
rrts = primaryreason.split(',')
for rrt in rrts:
rrt = rrt.strip()
# loop through list of rrt reasons for patient & add to count tracker
if len(rrt) > 0:
# included len check b/c splitting on "other" above caused trailing comma
if rrt not in rrt_reasons.keys():
rrt_reasons[rrt] = 1
else:
rrt_reasons[rrt] += 1
if len(otherreason) > 0:
# handle the "other" reason(s)
otherreason = "other: " + otherreason.strip().strip('"')
if otherreason not in rrt_reasons.keys():
rrt_reasons[otherreason] = 1
else:
rrt_reasons[otherreason] += 1
return pd.DataFrame(rrt_reasons, index=['count']).transpose().sort_values('count', ascending=False).reset_index()
def count_others(reasons):
'''
Count how many "Other" reasons there are, both occurring alone & with other reasons.
"other_counts" is a dict which contains "only_other" & "other_withothers" as keys
'''
other_counts = defaultdict(int)
other_counts['only_other'] = 0
other_counts['other_withothers'] = 0
for reason in reasons:
if 'other' in reason.lower():
if reason.lower().startswith('other'):
other_counts['only_other'] += 1
else:
other_counts['other_withothers'] +=1
    print(other_counts)
def count_staffconcern(reasons):
'''
input: list of reasons
Counts how many time staff concern line happens, both by itself and with other reasons.
'''
staff_counts = defaultdict(int)
staff_counts['by_itself'] = 0
staff_counts['with_other_reasons'] = 0
for reason in reasons:
if 'staff concern' in reason.lower():
if 'patient,' in reason.lower():
staff_counts['with_other_reasons'] += 1
else:
staff_counts['by_itself'] += 1
    print(staff_counts)
def avg_num_reasons(reasons):
'''
input: list of reasons; each entry is an RRT reason
ouput: average number of reasons per RRT call.
'''
reasoncount = 0.0
rrtcount = 0.0
for entry in reasons:
reasoncount += len(entry.split(','))
rrtcount +=1
return reasoncount/rrtcount
"""
Explanation: function definitions
End of explanation
"""
query = '''
SELECT ce.event_tag
FROM encounter enc
INNER JOIN clinical_event ce
ON enc.encntr_id = ce.encntr_id
WHERE enc.loc_facility_cd='633867'
AND enc.encntr_complete_dt_tm < 4e12
AND ce.event_cd='54408578'
AND ce.result_status_cd NOT IN ('31', '36')
AND ce.valid_until_dt_tm > 4e12
AND ce.event_class_cd not in ('654645')
AND enc.admit_type_cd !='0'
AND enc.encntr_type_class_cd='391';
'''
cur.execute(query)
reasons = cur.fetchall() # would read result into a list of tuples, e.g. [('Other: iv start',),(...)...]
# look at the result
reasons
# Make the tuple within a list a simple list
reasons = [reason[0] for reason in reasons]
reasons
"""
Explanation: Query impala for rrt reasons
only looking at valid events (ce.valid_until_dt_tm > 4e12) & complete encounters for inpatients (enc.encntr_type_class_cd='391') at Main Hospital (enc.loc_facility_cd='633867')
End of explanation
"""
len(reasons)
"""
Explanation: Number of reasons given (note, not all rrt events have reasons given; we have 2048 rrt events)
End of explanation
"""
count_others(reasons)
"""
Explanation: Number of reasons with "Other: ..." reason provided (user filled)
End of explanation
"""
count_staffconcern(reasons)
"""
Explanation: This means num_only_other/num_rrt_events => % of reasons have only a personnel-specified reason listed
Number of reasons including "Staff Concern" as a reason, by itself & with others
End of explanation
"""
avg_num_reasons(reasons)
"""
Explanation: This means num_by_itself/num_rrt_events => % of all RRTs have no specific reason provided
Average number of reasons per RRT call:
End of explanation
"""
df_reasons = find_rrt_freq(reasons)
df_reasons
"""
Explanation: Count the occurrences -- what are the most frequent reasons?
End of explanation
"""
plt.figure(figsize=(12,8))
plt.tight_layout
val = df_reasons['count'][0:15]
pos = np.arange(15)+0.5 #bar centers on the y axis
plt.barh(-pos, val, align='center')
plt.yticks(-pos, df_reasons['index'][0:15])
plt.tick_params(direction='in', labelsize='16', pad=1)
plt.xlabel('Frequency of Reason', fontsize='16')
plt.title('Top Reasons for RRT Event, Jan 2015 - Aug 2016', fontsize = '16')
plt.tight_layout()
# Run the line below to save an image of the chart
plt.savefig('RRT_top15reasons.png')
"""
Explanation: Staff concern for patient is the top reason for RRT.
Visualize the top 15 reasons
End of explanation
"""
|
kiseyno92/SNU_ML | Practice6/3_char_rnn_inference.ipynb | mit | # Important RNN parameters
rnn_size = 128
num_layers = 2
batch_size = 1 # <= In the training phase, these were both 50
seq_length = 1
def unit_cell():
return tf.contrib.rnn.BasicLSTMCell(rnn_size,state_is_tuple=True,reuse=tf.get_variable_scope().reuse)
cell = tf.contrib.rnn.MultiRNNCell([unit_cell() for _ in range(num_layers)])
input_data = tf.placeholder(tf.int32, [batch_size, seq_length])
targets = tf.placeholder(tf.int32, [batch_size, seq_length])
istate = cell.zero_state(batch_size, tf.float32)
# Weights
with tf.variable_scope('rnnlm'):
softmax_w = tf.get_variable("softmax_w", [rnn_size, vocab_size])
softmax_b = tf.get_variable("softmax_b", [vocab_size])
with tf.device("/cpu:0"):
embedding = tf.get_variable("embedding", [vocab_size, rnn_size])
inputs = tf.split(tf.nn.embedding_lookup(embedding, input_data), seq_length, 1)
inputs = [tf.squeeze(_input, [1]) for _input in inputs]
# Output
def loop(prev, _):
prev = tf.nn.xw_plus_b(prev, softmax_w, softmax_b)
prev_symbol = tf.stop_gradient(tf.argmax(prev, 1))
return tf.nn.embedding_lookup(embedding, prev_symbol)
"""
loop_function: If not None, this function will be applied to the i-th output
in order to generate the i+1-st input, and decoder_inputs will be ignored,
except for the first element ("GO" symbol).
"""
outputs, last_state = tf.contrib.rnn.static_rnn(cell, inputs, istate
, scope='rnnlm')
output = tf.reshape(tf.concat(outputs, 1), [-1, rnn_size])
logits = tf.nn.xw_plus_b(output, softmax_w, softmax_b)
probs = tf.nn.softmax(logits)
print ("Network Ready")
# Restore RNN
sess = tf.Session()
sess.run(tf.initialize_all_variables())
saver = tf.train.Saver(tf.all_variables())
ckpt = tf.train.get_checkpoint_state(load_dir)
print (ckpt.model_checkpoint_path)
saver.restore(sess, ckpt.model_checkpoint_path)
"""
Explanation: Now, we are ready to make our RNN model with seq2seq
This network is for sampling, so we don't need batches of sequences or optimizers
End of explanation
"""
# Sampling function
def weighted_pick(weights):
t = np.cumsum(weights)
s = np.sum(weights)
return(int(np.searchsorted(t, np.random.rand(1)*s)))
# Sample using RNN and prime characters
prime = "/* "
state = sess.run(cell.zero_state(1, tf.float32))
for char in prime[:-1]:
x = np.zeros((1, 1))
x[0, 0] = vocab[char]
state = sess.run(last_state, feed_dict={input_data: x, istate:state})
# Sample 'num' characters
ret = prime
char = prime[-1] # <= This goes IN!
num = 1000
for n in range(num):
x = np.zeros((1, 1))
x[0, 0] = vocab[char]
[probsval, state] = sess.run([probs, last_state]
, feed_dict={input_data: x, istate:state})
p = probsval[0]
sample = weighted_pick(p)
# sample = np.argmax(p)
pred = chars[sample]
ret = ret + pred
char = pred
print ("Sampling Done. \n___________________________________________\n")
print (ret)
"""
Explanation: Finally, show what the RNN has generated!
End of explanation
"""
|
kfollette/ASTR200-Spring2017 | Labs/Lab7/.ipynb_checkpoints/Lab7-checkpoint.ipynb | mit | from astropy.table import Table
from numpy import *
import matplotlib
matplotlib.use('nbagg') # required for interactive plotting
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: <small><i>This notebook is based on the 2016 AAS Python Workshop tutorial on tables, available on GitHub, though it has been modified. Some of the pandas stuff was borrowed from a notebook put together by Jake Vanderplas and has been modified to suit the purposes of this course, including expansion/modification of explanations and additional exercises. Source and license info for the original is on GitHub</i></small>
Names: [Insert Your Names Here]
Lab 7 - Python Tables
The astropy Table class provides an extension of NumPy structured arrays for storing and manipulating heterogeneous tables of data. A few notable features of this package are:
Initialize a table from a wide variety of input data structures and types.
Modify a table by adding or removing columns, changing column names, or adding new rows of data.
Handle tables containing missing values.
Include table and column metadata as flexible data structures.
Specify a description, units and output formatting for columns.
Perform operations like database joins, concatenation, and grouping.
Manipulate multidimensional columns.
Methods for Reading and writing Table objects to files
Integration with Astropy Units and Quantities
Tables vs. Pandas DataFrames
The Pandas package provides a powerful, high-performance table object via the DataFrame class. Pandas has a few downsides, including its lack of support for multidimensional table columns, but Pandas is the generally-used Python tables package, and so we will use it here as well. Pandas DataFrame functionality is very complementary to astropy Tables so astropy 1.1 and later provides interfaces for converting between astropy Tables and DataFrames. If you wish to learn more about Pandas, there are many resources available on-line. A good starting point is the main tutorials site at http://pandas.pydata.org/pandas-docs/stable/tutorials.html.
Documentation
For more information about the features presented below, you can read the
astropy.table docs.
Tutorial
End of explanation
"""
t = Table()
t['name'] = ['larry', 'curly', 'moe', 'shemp']
t['flux'] = [1.2, 2.2, 3.1, 4.3]
"""
Explanation: Astropy tables
There is great deal of flexibility in the way that a table can be initially constructed:
Read an existing table from a file or web URL
Add columns of data one by one
Add rows of data one by one
From an existing data structure in memory:
List of data columns
Dict of data columns
List of row dicts
Numpy homgeneous array or structured array
List of row records
See the documentation section on Constructing a table for the gory details and plenty of examples.
End of explanation
"""
t
"""
Explanation: Looking at your table
In IPython notebook, showing a table will produce a nice HTML representation of the table:
End of explanation
"""
print(t)
##similar, but nicer when there are lots and lots of rows/columns
t.pprint()
"""
Explanation: If you did the same in a terminal session you get a different view that isn't as pretty but does give a bit more information about the table:
>>> t
<Table rows=4 names=('name','flux')>
array([('source 1', 1.2), ('source 2', 2.2), ('source 3', 3.1),
('source 4', 4.3)],
dtype=[('name', 'S8'), ('flux', '<f8')])
To get a plain view which is the same in notebook and terminal use print():
End of explanation
"""
t.colnames
t.dtype
"""
Explanation: To get the table column names and data types using the colnames and dtype properties:
End of explanation
"""
t.show_in_notebook()
"""
Explanation: Astropy 1.1 and later provides a show_in_notebook() method that allows more interactive exploration of tables. It can be especially handy for large tables.
End of explanation
"""
t['flux'] # Flux column (notice meta attributes)
t['flux'][1] # Row 1 of flux column
t[1]['flux'] # equivalent!
t[1][1] # also equivalent. Which is the column index? Play with this to find out.
t[1] # one index = row number
t[1:3] # 2nd and 3rd rows in a new table (remember that the a:b indexing is not inclusive of b)
t[1:3]['flux']
t[[1, 3]] # the second and fourth rows of t in a new table
"""
Explanation: Accessing parts of the table
We can access the columns and rows in a way similar to accessing discionary entries (with dict[key]), but here the syntax is table[column]. Table objects can also be indexed by row or column, and the column index can be swapped with column name.
End of explanation
"""
mask = t['flux'] > 3.0 # Define boolean (True/False) mask for all flux values > 3
mask
t[mask] # Create a new table with only the "True" rows
"""
Explanation: One of the most powerful concepts is using boolean selection masks to filter tables
End of explanation
"""
t.add_row(('joe', 10.1)) # Add a new source at the end
t['logflux'] = log10(t['flux']) # Compute the log10 of the flux
t
"""
Explanation: Modifying the table
Once the table exists with defined columns there are a number of ways to modify the table in place. These are fully documented in the section Modifying a Table.
To give a couple of simple examples, you can add rows with the add_row() method or add new columns using dict-style assignment:
End of explanation
"""
t['flux'].format = '%.2f'
t['logflux'].format = '%.2f'
t
print('%11.2f'% 100000)
print('%8.2f'% 100000)
t['flux'].format = '%5.2e'
t['logflux'].format = '%.2E'
t
print('%5.2e'% 0.0005862341)
print('%4.2E'% 246001)
"""
Explanation: Notice that the logflux column really has too many output digits given the precision of the input values. We can fix this by setting the format using normal Python formatting syntax:
The format operator in python acts on an object and reformats it according to your specifications. The syntax is always object.format = '%format_string', where format_string tells it how to format the output. For now let's just deal with two of the more useful types:
Float Formatting
Floats are denoted with '%A.Bf', where A is the number of total characters you want, including the decimal point, and B is the number of characters that you want after the decimal. The f tells it that you would like the output as a float. If you don't specify A, python will keep as many characters as are currently to the left of the decimal point. If you specify more characters to the left of the decimal than are there, python will usually print the extra space as blank characters. If you want it to print leading zeroes instead, use the format '%0A.Bf'. This is not the case in tables though, where white space and leading zeroes will be ignored.
Scientific Notation Formatting
Sometimes in tables, we will be dealing with very large numbers. Exponential formatting is similar to float formatting in that you are formatting the float that comes before the "e" (meaning 10 to some power). Numbers in scientific notation print as X.YeNN where NN is the power of the exponent. The formatting string for floating point exponentials looks like "%A.Be" or "%A.BE", where e and E print lowercase and capital es, respectively.
Should you need it in the future, here is a more detailed reference regarding string formatting.
Also useful is printing numbers in a given format, for which you use the syntax print('%format code'% object), as demonstrated below. Play around with the cells below to make sure you understand the subtleties here before moving on.
End of explanation
"""
array(t)
array(t['flux'])
"""
Explanation: Converting the table to numpy
Sometimes you may not want or be able to use a Table object and prefer to work with a plain numpy array (like if you read in data and then want to manipulate it. This is easily done by passing the table to the np.array() constructor.
This makes a copy of the data. If you have a huge table and don't want to waste memory, supply copy=False to the constructor, but be warned that changing the output numpy array will change the original table.
End of explanation
"""
t2 = Table([['x', 'y', 'z'],
[1.1, 2.2, 3.3]],
names=['name', 'value'],
masked=True)
t2
t2['value'].mask = [False, True, False]
print(t2)
t2['value'].fill_value = -99
print(t2.filled())
"""
Explanation: Masked tables
End of explanation
"""
from astropy.table import join
"""
Explanation: High-level table operations
So far we've just worked with one table at a time and viewed that table as a monolithic entity. Astropy also supports high-level Table operations that manipulate multiple tables or view one table as a collection of sub-tables (groups).
Documentation | Description
---------------------------------------------------------------------------------------- |-----------------------------------------
Grouped operations | Group tables and columns by keys
Stack vertically | Concatenate input tables along rows
Stack horizontally | Concatenate input tables along columns
Join | Database-style join of two tables
Here we'll just introduce the join operation but go into more detail on the others in the exercises.
End of explanation
"""
t
"""
Explanation: Now recall our original table t:
End of explanation
"""
t2 = Table()
t2['name'] = ['larry', 'moe', 'groucho']
t2['flux2'] = [1.4, 3.5, 8.6]
"""
Explanation: Now say that we now got some additional flux values from a different reference for a different, but overlapping sample of sources:
End of explanation
"""
t3 = join(t, t2, keys=['name'], join_type='outer')
print(t3)
mean(t3['flux2'])
"""
Explanation: Now we can get a master table of flux measurements which are joined by matching the values in the name column. This includes every row from each of the two tables, which is known as an outer join.
End of explanation
"""
join(t, t2, keys=['name'], join_type='inner')
"""
Explanation: Alternately we could choose to keep only rows where both tables had a valid measurement using an inner join:
End of explanation
"""
t3.write('test.fits', overwrite=True)
t3.write('test.vot', format='votable', overwrite=True)
"""
Explanation: Writing data
End of explanation
"""
t4 = Table.read('test.fits')
t4
"""
Explanation: Reading data
You can read data using the Table.read() method:
End of explanation
"""
Table.read?
t_2mass = Table.read("data/2mass.tbl", format="ascii.ipac")
t_2mass.show_in_notebook()
"""
Explanation: Some formats, such as FITS and HDF5, are automatically identified by file extension while most others will require format to be explicitly provided. A number of common ascii formats are supported such as IPAC, sextractor, daophot, and CSV. Refer to the documentation for a full listing.
End of explanation
"""
import pandas as pd
"""
Explanation: Pandas
Although astropy Tables has some nice functionality that Pandas doesn't and is also a simpler, easier to use package, Pandas is the more versatile and commonly used table manipulator for Python, so I recommend you use it wherever possible.
Astropy 1.1 includes new to_pandas() and from_pandas() methods that facilitate conversion to/from pandas DataFrame objects. There are a few caveats in making these conversions:
- Tables with multi-dimensional columns cannot be converted.
- Masked values are converted to numpy.nan. Numerical columns, int or float, are thus converted to numpy.float while string columns with missing values are converted to object columns with numpy.nan values to indicate missing or masked data. Therefore, one cannot always round-trip between Table and DataFrame.
End of explanation
"""
df = pd.DataFrame({'a': [10,20,30],
'b': [40,50,60]})
df
"""
Explanation: Data frames are defined like dictionaries with a column header/label (similar to a key) and a list of entries.
End of explanation
"""
df.columns
df.index
#hit shift + tab tab in the cell below to read more about dataframe objects and operations
df.
"""
Explanation: Think of DataFrames as numpy arrays plus extra pointers to their column labels and row indices, which is what makes them well suited to representing tables
End of explanation
"""
pd.r
"""
Explanation: pandas has built-in functions for reading all kinds of types of data. In the cell below, hit tab once after the r to see all of the read functions. In our case here, read_table will work fine
End of explanation
"""
pd_2mass = t_2mass.to_pandas()
pd_2mass
"""
Explanation: we can also convert the table that we already made with Astropy Tables to pandas dataframe format
End of explanation
"""
t_pd = Table.from_pandas(pd_2mass)
t_pd.show_in_notebook()
"""
Explanation: And the opposite operation (conversion from pandas dataframe to astropy table) works as well
End of explanation
"""
asteroids = pd.read_excel("data/asteroids5000.xlsx")
asteroids
#excel_data = Table.from_pandas(pd.read_excel("2mass.xls"))
#excel_data.show_in_notebook()
"""
Explanation: Unlike astropy Tables, pandas can also read excel spreadsheets
End of explanation
"""
asteroids.ra
"""
Explanation: pandas dataframe columns can be accessed as pandas Series using the syntax dataframe.columnlabel, as below, which is why it usually makes sense to define a column name/label that is short and has no spaces
End of explanation
"""
#this one counts how many occurrences there are in the table for each unique value
asteroids.ph_qual.value_counts()
"""
Explanation: this calling method allows you to use some useful built-in functions as well
End of explanation
"""
asteroids.loc[4,"ra"]
asteroids.iloc[4,0] #same because column 0 is "ra"
"""
Explanation: To pull up individual rows or entries, the fact that pandas dataframes always print the indices of rows off of their lefthand side helps. You index dataframes with .loc (label-based, using row/column names) or .iloc (integer-position-based), as below
End of explanation
"""
asteroids.columns[0]
"""
Explanation: you can always check that the column you're indexing is the one you want as below
End of explanation
"""
# make the row names more interesting than numbers starting from zero
asteroids.index = ['Asteroid %d'%(i+1) for i in asteroids.index]
#and you can index multiple columns/rows in the usual way
asteroids.iloc[:10,:2]
"""
Explanation: Although indices are nice for reference, sometimes you might want the row labels to be more descriptive. What is the line below doing?
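The same relabeling pattern on a toy frame (the distances are made up): the list comprehension on the right-hand side is built from the old 0..N-1 index before the new labels are assigned.

```python
import pandas as pd

toy = pd.DataFrame({'dist': [120, 450, 800]})
# replace the default 0..N-1 integer index with descriptive row labels
toy.index = ['Asteroid %d' % (i + 1) for i in toy.index]
```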
End of explanation
"""
asteroids.columns
ast_new = asteroids[asteroids.dist < 500]
ast_new
"""
Explanation: You can do lots more with this as well, including logical (boolean) operations to filter the table
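For example, a boolean comparison produces a mask that keeps only the matching rows (toy data):

```python
import pandas as pd

objs = pd.DataFrame({'dist': [120, 450, 800]})
near = objs[objs.dist < 500]  # boolean mask keeps only the matching rows
```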
End of explanation
"""
## code to read in source data here
"""
Explanation: Exercises
Read the data
To start with, read in the two data files representing the master source list and observations source list. The fields for the two tables are respectively documented in:
master_sources
obs_sources
You may use either pandas or astropy tables.
End of explanation
"""
|
dtamayo/reboundx | ipython_examples/Custom_Effects.ipynb | gpl-3.0 | import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(m=1e-6,a=1.)
sim.move_to_com()
"""
Explanation: Custom Effects
This notebook walks you through how to simply add your own custom forces and operators through REBOUNDx.
The first thing you need to decide is whether you want to write a force or an operator. A force function would appropriately update particle accelerations, and REBOUND will call it every timestep in addition to its built-in standard gravitational acceleration function.
By contrast an operator does not calculate accelerations for REBOUND to integrate numerically. Instead, it gets called before and/or after each REBOUND timestep, and directly updates particle parameters or particle states (positions, velocities, masses).
This notebook shows how to do both.
Adding a Custom Force
This example parallels REBOUND's Forces.ipynb example, which implements a Python function for calculating the simple Stark force, showing how to add it to REBOUNDx.
The reason you would want to do it this way is if you wanted to use other built-in REBOUNDx effects at the same time (see details at the end of this notebook, and the original Forces.ipynb notebook for context of the force itself--this just shows the method).
End of explanation
"""
def starkForce(reb_sim, rebx_force, particles, N):
# Our function will be passed ctypes pointers. To get Python objects we can access we use:
sim = reb_sim.contents
starkforce = rebx_force.contents
particles[1].ax += starkforce.params["c"]
# make sure you UPDATE (+=, not =) the accelerations
# and update the passed particles array, NOT sim.particles
"""
Explanation: We now define our function that updates our particle accelerations. Note that this looks different from an additional force in REBOUND. In addition to accepting the simulation, it should also take a rebx_force Structure and a particles array, together with the number of particles N.
The force structure allows us to store and access parameters. Here we expect the user to set the 'c' parameter determining the perturbation strength.
End of explanation
"""
import reboundx
rebx = reboundx.Extras(sim)
myforce = rebx.create_force("stark")
"""
Explanation: Now we add REBOUNDx, and instead of loading a predefined force from REBOUNDx, we create a new one, which we call 'stark':
End of explanation
"""
myforce.force_type = "vel"
myforce.update_accelerations = starkForce
rebx.add_force(myforce)
"""
Explanation: First we need to set the force type. This will ensure the various REBOUND integrators treat it correctly.
- "pos": The particle acceleration updates only depend on particle positions (not velocities)
- "vel": Accelerations depend on velocities (can also depend on positions)
Then we set the update_accelerations to the function we wrote above, and add our custom force to REBOUNDx:
End of explanation
"""
myforce.params["c"] = 0.01
"""
Explanation: We wrote our function to read a parameter c from our effect, so we need to set it before integrating. The object we get back from create_force is a pointer, so it doesn't matter if we set parameters before or after calling rebx.add_force.
End of explanation
"""
import numpy as np
Nout = 1000
es = np.zeros(Nout)
times = np.linspace(0.,100.*2.*np.pi,Nout)
for i, time in enumerate(times):
sim.integrate(time)
es[i] = sim.particles[1].e
"""
Explanation: Now we can just integrate as usual.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(111)
plt.xlabel("time")
plt.plot(times, es);
"""
Explanation: Comparing this output with Forces.ipynb in the REBOUND examples, the results are the same.
End of explanation
"""
import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(m=1e-6,a=1.)
sim.move_to_com() # Moves to the center of momentum frame
"""
Explanation: Adding a Custom Operator
This looks very similar to what we did above, but now this function is executed between integrator timesteps, rather than including a custom force in addition to point source gravity as above.
End of explanation
"""
def add_mass(reb_sim, rebx_operator, dt):
sim = reb_sim.contents
Mdot = 1.e-6 # hard-coded. Could read a user-set parameter as done above
sim.particles[1].m += Mdot*dt
"""
Explanation: Now we define a simple function that adds mass to particles[1] between timesteps. Note that if this is all you want to do, you probably should use the modify_mass effect that's already in REBOUNDx, since that will be substantially faster (since it's doing everything in C and not switching back and forth to call your Python function). If you haven't, read the description of the custom force above, since all the same points apply here.
With an operator instead of updating the particle accelerations, we are providing the solutions for the positions, velocities and/or masses for our effects. For much more on why you'd want to do that, see Tamayo et al. 2019.
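As a sanity check, here is a standalone sketch (no REBOUND needed) of the state update this operator performs: repeatedly applying m += Mdot*dt between timesteps gives linear mass growth, m(t) ≈ m0 + Mdot*t.

```python
m0, Mdot, dt = 1e-6, 1e-6, 0.01   # same values as the operator above
m = m0
for _ in range(1000):             # 1000 steps of length dt -> total time 10
    m += Mdot * dt                # the same update add_mass applies each step
# m is now ~ m0 + Mdot * 10 = 1.1e-5
```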
End of explanation
"""
import reboundx
rebx = reboundx.Extras(sim)
myoperator = rebx.create_operator("massgrowth")
"""
Explanation: We now add REBOUNDx as normal, and create an operator
End of explanation
"""
myoperator.operator_type = "updater"
myoperator.step_function = add_mass
rebx.add_operator(myoperator)
"""
Explanation: This time we have to set the operator type:
"updater": An operator that changes particles states (positions, velocities, masses)
"recorder": A passive operator that records the state or updates parameters that do not feed back on the dynamics
We also set the step_function function pointer to point to our new function, and add the operator to REBOUNDx:
End of explanation
"""
import numpy as np
Nout = 1000
ms = np.zeros(Nout)
times = np.linspace(0.,10.*2.*np.pi,Nout)
for i, time in enumerate(times):
sim.integrate(time)
ms[i] = sim.particles[1].m
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(111)
ax.set_xlabel("Time", fontsize=24)
ax.set_ylabel("Mass", fontsize=24)
ax.plot(times, ms);
"""
Explanation: Now if we integrate and plot the mass of particles[1], we see that our function is getting called, since the mass grows linearly with time.
End of explanation
"""
|
kabrapratik28/Stanford_courses | cs231n/assignment2/FullyConnectedNets.ipynb | apache-2.0 | # As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in list(data.items()):
print(('%s: ' % k, v.shape))
"""
Explanation: Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:
```python
def layer_forward(x, w):
""" Receive inputs x and weights w """
# Do some computations ...
z = # ... some intermediate value
# Do some more computations ...
out = # the output
cache = (x, w, z, out) # Values we need to compute gradients
return out, cache
```
The backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:
```python
def layer_backward(dout, cache):
"""
Receive derivative of loss with respect to outputs and cache,
and compute derivative with respect to inputs.
"""
# Unpack cache values
x, w, z, out = cache
# Use values in cache to compute derivatives
dx = # Derivative of loss with respect to x
dw = # Derivative of loss with respect to w
return dx, dw
```
After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.
In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.
End of explanation
"""
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print('Testing affine_forward function:')
print('difference: ', rel_error(out, correct_out))
"""
Explanation: Affine layer: forward
Open the file cs231n/layers.py and implement the affine_forward function.
Once you are done you can test your implementation by running the following:
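For reference, here is a hedged sketch of one way the forward pass could look (the helper name is ours, not the official solution): flatten each input to a row, then a single matrix multiply.

```python
import numpy as np

def affine_forward_sketch(x, w, b):
    # reshape (N, d_1, ..., d_k) inputs into (N, D) rows, then out = x.w + b
    out = x.reshape(x.shape[0], -1).dot(w) + b
    return out, (x, w, b)

x = np.array([[1., 2.], [3., 4.]])
w = np.array([[1.], [2.]])
b = np.array([0.5])
out, _ = affine_forward_sketch(x, w, b)  # [[5.5], [11.5]]
```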
End of explanation
"""
# Test the affine_backward function
np.random.seed(231)
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print('Testing affine_backward function:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
"""
Explanation: Affine layer: backward
Now implement the affine_backward function and test your implementation using numeric gradient checking.
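A hedged sketch of the corresponding backward pass (again our own helper, not the official solution):

```python
import numpy as np

def affine_backward_sketch(dout, cache):
    x, w, b = cache
    N = x.shape[0]
    dx = dout.dot(w.T).reshape(x.shape)   # gradient w.r.t. inputs
    dw = x.reshape(N, -1).T.dot(dout)     # gradient w.r.t. weights
    db = dout.sum(axis=0)                 # bias collects the summed gradient
    return dx, dw, db

x = np.array([[1., 2.], [3., 4.]])
w = np.array([[1.], [2.]])
b = np.array([0.5])
dout = np.array([[1.], [1.]])
dx, dw, db = affine_backward_sketch(dout, (x, w, b))
```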
End of explanation
"""
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 5e-8
print('Testing relu_forward function:')
print('difference: ', rel_error(out, correct_out))
"""
Explanation: ReLU layer: forward
Implement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following:
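One possible sketch of the forward pass (our own helper name, not the official solution):

```python
import numpy as np

def relu_forward_sketch(x):
    out = np.maximum(0, x)   # elementwise threshold at zero
    return out, x            # cache the input for the backward pass

out, _ = relu_forward_sketch(np.array([-1., 0.5]))
```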
End of explanation
"""
np.random.seed(231)
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 3e-12
print('Testing relu_backward function:')
print('dx error: ', rel_error(dx_num, dx))
"""
Explanation: ReLU layer: backward
Now implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:
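A matching sketch of the backward pass (our own helper name): the gradient flows through only where the forward input was positive.

```python
import numpy as np

def relu_backward_sketch(dout, x):
    # gradient passes through only where the forward input was positive
    return dout * (x > 0)

dx = relu_backward_sketch(np.array([5., 7.]), np.array([-1., 2.]))
```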
End of explanation
"""
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print('Testing affine_relu_backward:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
"""
Explanation: "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass:
End of explanation
"""
np.random.seed(231)
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print('Testing svm_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print('\nTesting softmax_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
"""
Explanation: Loss layers: Softmax and SVM
You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.
You can make sure that the implementations are correct by running the following:
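For intuition, a minimal numpy sketch of the softmax loss and its gradient (our own helper, not the file's implementation). With all-zero scores every class is equally likely, so the loss is exactly log(10):

```python
import numpy as np

def softmax_loss_sketch(x, y):
    shifted = x - x.max(axis=1, keepdims=True)     # stabilize the exponentials
    probs = np.exp(shifted)
    probs /= probs.sum(axis=1, keepdims=True)
    N = x.shape[0]
    loss = -np.log(probs[np.arange(N), y]).mean()  # mean cross-entropy
    dx = probs.copy()
    dx[np.arange(N), y] -= 1                       # d(loss)/d(scores)
    dx /= N
    return loss, dx

loss, dx = softmax_loss_sketch(np.zeros((4, 10)), np.array([0, 1, 2, 3]))
```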
End of explanation
"""
np.random.seed(231)
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-3
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)
print('Testing initialization ... ')
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print('Testing test-time forward pass ... ')
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print('Testing training loss (no regularization)')
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
for reg in [0.0, 0.7]:
print('Running numeric gradient check with reg = ', reg)
model.reg = reg
loss, grads = model.loss(X, y)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
"""
Explanation: Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
End of explanation
"""
model = TwoLayerNet()
solver = None
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set. #
##############################################################################
solver = Solver(model, data,
update_rule='sgd',
optim_config={
'learning_rate': 1e-3,
},
lr_decay=0.95,
num_epochs=9, batch_size=100,
print_every=100)
solver.train()
##############################################################################
# END OF YOUR CODE #
##############################################################################
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
"""
Explanation: Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.
End of explanation
"""
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
"""
Explanation: Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
End of explanation
"""
# TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 1e-2
learning_rate = 1e-2
model = FullyConnectedNet([100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
"""
Explanation: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
End of explanation
"""
# TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
learning_rate = 1e-3
weight_scale = 1e-1
model = FullyConnectedNet([100, 100, 100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
"""
Explanation: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
End of explanation
"""
from cs231n.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print('next_w error: ', rel_error(next_w, expected_next_w))
print('velocity error: ', rel_error(expected_velocity, config['velocity']))
"""
Explanation: Inline question:
Did you notice anything about the comparative difficulty of training the three-layer net vs training the five layer net?
Answer:
[FILL THIS IN]
Update rules
So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.
SGD+Momentum
Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent.
Open the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8.
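For intuition, a hedged sketch of the classic momentum rule; the default momentum coefficient of 0.9 is an assumption inferred from the expected values in the check below.

```python
import numpy as np

def sgd_momentum_sketch(w, dw, config):
    mu = config.get('momentum', 0.9)            # assumed default
    v = config['velocity']
    v = mu * v - config['learning_rate'] * dw   # decaying running sum of gradients
    config['velocity'] = v
    return w + v, config                        # step along the velocity

N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
next_w, cfg = sgd_momentum_sketch(w, dw, {'learning_rate': 1e-3, 'velocity': v})
```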
End of explanation
"""
num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 1e-2,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
End of explanation
"""
# Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('cache error: ', rel_error(expected_cache, config['cache']))
# Test Adam implementation; you should see errors around 1e-7 or less
from cs231n.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('v error: ', rel_error(expected_v, config['v']))
print('m error: ', rel_error(expected_m, config['m']))
"""
Explanation: RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012).
[2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015.
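For intuition, hedged sketches of both rules; the hyperparameter defaults (decay_rate=0.99, beta1=0.9, beta2=0.999, epsilon=1e-8) and the convention of incrementing t before the bias correction are assumptions inferred from the expected values in the checks below.

```python
import numpy as np

def rmsprop_sketch(w, dw, config):
    dr, eps = config.get('decay_rate', 0.99), config.get('epsilon', 1e-8)
    cache = dr * config['cache'] + (1 - dr) * dw**2      # running avg of squared grads
    config['cache'] = cache
    return w - config['learning_rate'] * dw / (np.sqrt(cache) + eps), config

def adam_sketch(w, dw, config):
    b1, b2 = config.get('beta1', 0.9), config.get('beta2', 0.999)
    eps = config.get('epsilon', 1e-8)
    config['t'] += 1                                     # step count, incremented first
    m = b1 * config['m'] + (1 - b1) * dw                 # first moment estimate
    v = b2 * config['v'] + (1 - b2) * dw**2              # second moment estimate
    mb = m / (1 - b1**config['t'])                       # bias-corrected moments
    vb = v / (1 - b2**config['t'])
    config['m'], config['v'] = m, v
    return w - config['learning_rate'] * mb / (np.sqrt(vb) + eps), config

N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
rms_w, rms_cfg = rmsprop_sketch(w, dw, {'learning_rate': 1e-2,
                                        'cache': np.linspace(0.6, 0.9, num=N*D).reshape(N, D)})
adam_w, adam_cfg = adam_sketch(w, dw, {'learning_rate': 1e-2, 't': 5,
                                       'm': np.linspace(0.6, 0.9, num=N*D).reshape(N, D),
                                       'v': np.linspace(0.7, 0.5, num=N*D).reshape(N, D)})
```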
End of explanation
"""
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:
End of explanation
"""
best_model = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #
# batch normalization and dropout useful. Store your best model in the #
# best_model variable. #
################################################################################
learning_rates['sgd_momentum']=1e-2
best_model_score=0.0
for learning_rate in [1e-2,5e-3,1e-3]:
for weight_scale in [5e-2,5e-1]:
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=weight_scale)
solver = Solver(model, data,
num_epochs=8, batch_size=500,
update_rule='adam',
optim_config={
'learning_rate': learning_rate
},
verbose=True)
solver.train()
print(".")
if best_model_score < solver.val_acc_history[-1]:
best_model = model
best_model_score = solver.val_acc_history[-1]
print ("score is "+str(best_model_score))
################################################################################
# END OF YOUR CODE #
################################################################################
"""
Explanation: Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.
End of explanation
"""
y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)
y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)
print('Validation set accuracy: ', (y_val_pred == data['y_val']).mean())
print('Test set accuracy: ', (y_test_pred == data['y_test']).mean())
"""
Explanation: Test your model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.
End of explanation
"""
|
SteveDiamond/cvxpy | examples/notebooks/WWW/robust_kalman.ipynb | gpl-3.0 | import matplotlib
import matplotlib.pyplot as plt
import numpy as np
def plot_state(t,actual, estimated=None):
'''
plot position, speed, and acceleration in the x and y coordinates for
the actual data, and optionally for the estimated data
'''
trajectories = [actual]
if estimated is not None:
trajectories.append(estimated)
fig, ax = plt.subplots(3, 2, sharex='col', sharey='row', figsize=(8,8))
for x, w in trajectories:
ax[0,0].plot(t,x[0,:-1])
ax[0,1].plot(t,x[1,:-1])
ax[1,0].plot(t,x[2,:-1])
ax[1,1].plot(t,x[3,:-1])
ax[2,0].plot(t,w[0,:])
ax[2,1].plot(t,w[1,:])
ax[0,0].set_ylabel('x position')
ax[1,0].set_ylabel('x velocity')
ax[2,0].set_ylabel('x input')
ax[0,1].set_ylabel('y position')
ax[1,1].set_ylabel('y velocity')
ax[2,1].set_ylabel('y input')
ax[0,1].yaxis.tick_right()
ax[1,1].yaxis.tick_right()
ax[2,1].yaxis.tick_right()
ax[0,1].yaxis.set_label_position("right")
ax[1,1].yaxis.set_label_position("right")
ax[2,1].yaxis.set_label_position("right")
ax[2,0].set_xlabel('time')
ax[2,1].set_xlabel('time')
def plot_positions(traj, labels, axis=None,filename=None):
'''
show point clouds for true, observed, and recovered positions
'''
matplotlib.rcParams.update({'font.size': 14})
n = len(traj)
fig, ax = plt.subplots(1, n, sharex=True, sharey=True,figsize=(12, 5))
if n == 1:
ax = [ax]
for i,x in enumerate(traj):
ax[i].plot(x[0,:], x[1,:], 'ro', alpha=.1)
ax[i].set_title(labels[i])
if axis:
ax[i].axis(axis)
if filename:
fig.savefig(filename, bbox_inches='tight')
"""
Explanation: Robust Kalman filtering for vehicle tracking
We will try to pinpoint the location of a moving vehicle with high accuracy from noisy sensor data. We'll do this by modeling the vehicle state as a discrete-time linear dynamical system. Standard Kalman filtering can be used to approach this problem when the sensor noise is assumed to be Gaussian. We'll use robust Kalman filtering to get a more accurate estimate of the vehicle state for a non-Gaussian case with outliers.
Problem statement
A discrete-time linear dynamical system consists of a sequence of state vectors $x_t \in \mathbf{R}^n$, indexed by time $t \in \lbrace 0, \ldots, N-1 \rbrace$ and dynamics equations
\begin{align}
x_{t+1} &= Ax_t + Bw_t\
y_t &=Cx_t + v_t,
\end{align}
where $w_t \in \mathbf{R}^m$ is an input to the dynamical system (say, a drive force on the vehicle), $y_t \in \mathbf{R}^r$ is a state measurement, $v_t \in \mathbf{R}^r$ is noise, $A$ is the drift matrix, $B$ is the input matrix, and $C$ is the observation matrix.
Given $A$, $B$, $C$, and $y_t$ for $t = 0, \ldots, N-1$, the goal is to estimate $x_t$ for $t = 0, \ldots, N-1$.
Kalman filtering
A Kalman filter estimates $x_t$ by solving the optimization problem
\begin{array}{ll}
\mbox{minimize} & \sum_{t=0}^{N-1} \left(
\|w_t\|_2^2 + \tau \|v_t\|_2^2\right)\
\mbox{subject to} & x_{t+1} = Ax_t + Bw_t,\quad t=0,\ldots, N-1\
& y_t = Cx_t+v_t,\quad t = 0, \ldots, N-1,
\end{array}
where $\tau$ is a tuning parameter. This problem is actually a least squares problem, and can be solved via linear algebra, without the need for more general convex optimization. Note that since we have no observation $y_{N}$, $x_N$ is only constrained via $x_{N} = Ax_{N-1} + Bw_{N-1}$, which is trivially resolved when $w_{N-1} = 0$ and $x_{N} = Ax_{N-1}$. We maintain this vestigial constraint only because it offers a concise problem statement.
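To make "solved via linear algebra" concrete, an ordinary least-squares problem $\min_z \|Gz - h\|_2^2$ has a direct solve. This is a generic toy example only — the matrices here are illustrative and unrelated to the specific $A$, $B$, $C$ of the filter:

```python
import numpy as np

# Toy least-squares solve: minimize ||G z - h||_2^2 over z.
# G and h are made-up data lying exactly on the line y = 1 + x.
G = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
h = np.array([1.0, 2.0, 3.0])
z, residuals, rank, sv = np.linalg.lstsq(G, h, rcond=None)
print(z)  # intercept and slope, approximately [1, 1]
```

The same idea, with much larger block matrices built from $A$, $B$, $C$, underlies the closed-form Kalman smoother.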
This model performs well when $w_t$ and $v_t$ are Gaussian. However, the quadratic objective can be influenced by large outliers, which degrades the accuracy of the recovery. To improve estimation in the presence of outliers, we can use robust Kalman filtering.
Robust Kalman filtering
To handle outliers in $v_t$, robust Kalman filtering replaces the quadratic cost with a Huber cost, which results in the convex optimization problem
\begin{array}{ll}
\mbox{minimize} & \sum_{t=0}^{N-1} \left( \|w_t\|^2_2 + \tau \phi_\rho(v_t) \right)\
\mbox{subject to} & x_{t+1} = Ax_t + Bw_t,\quad t=0,\ldots, N-1\
& y_t = Cx_t+v_t,\quad t=0,\ldots, N-1,
\end{array}
where $\phi_\rho$ is the Huber function
$$
\phi_\rho(a)= \left\{ \begin{array}{ll} \|a\|_2^2 & \|a\|_2\leq \rho\
2\rho \|a\|_2-\rho^2 & \|a\|_2>\rho.
\end{array}\right.
$$
The Huber penalty function penalizes estimation error linearly outside of a ball of radius $\rho$, whereas in standard Kalman filtering, all errors are penalized quadratically. Thus, large errors are penalized less harshly, making this model more robust to outliers.
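To see the two regimes numerically, here is a small NumPy sketch of $\phi_\rho$ (illustrative only — the CVXPY model below uses the built-in cp.huber atom):

```python
import numpy as np

def huber(a, rho=2.0):
    """Huber penalty of vector a with threshold rho (toy sketch)."""
    r = np.linalg.norm(a)
    return r ** 2 if r <= rho else 2 * rho * r - rho ** 2

print(huber(np.array([1.0, 0.0])))   # inside the ball: quadratic, 1.0
print(huber(np.array([10.0, 0.0])))  # outside: linear growth, 36.0
```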
Vehicle tracking example
We'll apply standard and robust Kalman filtering to a vehicle tracking problem with state $x_t \in \mathbf{R}^4$, where
$(x_{t,0}, x_{t,1})$ is the position of the vehicle in two dimensions, and $(x_{t,2}, x_{t,3})$ is the vehicle velocity.
The vehicle has unknown drive force $w_t$, and we observe noisy measurements of the vehicle's position, $y_t \in \mathbf{R}^2$.
The matrices for the dynamics are
$$
A = \begin{bmatrix}
1 & 0 & \left(1-\frac{\gamma}{2}\Delta t\right) \Delta t & 0 \
0 & 1 & 0 & \left(1-\frac{\gamma}{2} \Delta t\right) \Delta t\
0 & 0 & 1-\gamma \Delta t & 0 \
0 & 0 & 0 & 1-\gamma \Delta t
\end{bmatrix},
$$
$$
B = \begin{bmatrix}
\frac{1}{2}\Delta t^2 & 0 \
0 & \frac{1}{2}\Delta t^2 \
\Delta t & 0 \
0 & \Delta t \
\end{bmatrix},
$$
$$
C = \begin{bmatrix}
1 & 0 & 0 & 0 \
0 & 1 & 0 & 0
\end{bmatrix},
$$
where $\gamma$ is a velocity damping parameter.
1D Model
The recurrence is derived from the following relations in a single dimension. For this subsection, let $x_t, v_t, w_t$ be the vehicle position, velocity, and input drive force. The resulting acceleration of the vehicle is $w_t - \gamma v_t$, where $-\gamma v_t$ is a damping term that depends on the velocity through the parameter $\gamma$.
The discretized dynamics are obtained from numerically integrating:
$$
\begin{align}
x_{t+1} &= x_t + \left(1-\frac{\gamma \Delta t}{2}\right)v_t \Delta t + \frac{1}{2}w_{t} \Delta t^2\
v_{t+1} &= \left(1-\gamma \Delta t\right)v_t + w_t \Delta t.
\end{align}
$$
Extending these relations to two dimensions gives us the dynamics matrices $A$ and $B$.
Helper Functions
End of explanation
"""
n = 1000 # number of timesteps
T = 50 # time will vary from 0 to T with step delt
ts, delt = np.linspace(0,T,n,endpoint=True, retstep=True)
gamma = .05 # damping, 0 is no damping
A = np.zeros((4,4))
B = np.zeros((4,2))
C = np.zeros((2,4))
A[0,0] = 1
A[1,1] = 1
A[0,2] = (1-gamma*delt/2)*delt
A[1,3] = (1-gamma*delt/2)*delt
A[2,2] = 1 - gamma*delt
A[3,3] = 1 - gamma*delt
B[0,0] = delt**2/2
B[1,1] = delt**2/2
B[2,0] = delt
B[3,1] = delt
C[0,0] = 1
C[1,1] = 1
"""
Explanation: Problem Data
We generate the data for the vehicle tracking problem. We'll have $N=1000$, $w_t$ a standard Gaussian, and $v_t$ a standard Gaussian, except $20\%$ of the points will be outliers with $\sigma = 20$.
Below, we set the problem parameters and define the matrices $A$, $B$, and $C$.
End of explanation
"""
sigma = 20
p = .20
np.random.seed(6)
x = np.zeros((4,n+1))
x[:,0] = [0,0,0,0]
y = np.zeros((2,n))
# generate random input and noise vectors
w = np.random.randn(2,n)
v = np.random.randn(2,n)
# add outliers to v
np.random.seed(0)
inds = np.random.rand(n) <= p
v[:,inds] = sigma*np.random.randn(2,n)[:,inds]
# simulate the system forward in time
for t in range(n):
y[:,t] = C.dot(x[:,t]) + v[:,t]
x[:,t+1] = A.dot(x[:,t]) + B.dot(w[:,t])
x_true = x.copy()
w_true = w.copy()
plot_state(ts,(x_true,w_true))
plot_positions([x_true,y], ['True', 'Observed'],[-4,14,-5,20],'rkf1.pdf')
"""
Explanation: Simulation
We seed $x_0 = 0$ (starting at the origin with zero velocity) and simulate the system forward in time. The results are the true vehicle positions x_true (which we will use to judge our recovery) and the observed positions y.
We plot the position, velocity, and system input $w$ in both dimensions as a function of time.
We also plot the sets of true and observed vehicle positions.
End of explanation
"""
%%time
import cvxpy as cp
x = cp.Variable(shape=(4, n+1))
w = cp.Variable(shape=(2, n))
v = cp.Variable(shape=(2, n))
tau = .08
obj = cp.sum_squares(w) + tau*cp.sum_squares(v)
obj = cp.Minimize(obj)
constr = []
for t in range(n):
    constr += [ x[:,t+1] == A @ x[:,t] + B @ w[:,t] ,
                y[:,t] == C @ x[:,t] + v[:,t] ]
cp.Problem(obj, constr).solve(verbose=True)
x = np.array(x.value)
w = np.array(w.value)
plot_state(ts,(x_true,w_true),(x,w))
plot_positions([x_true,y], ['True', 'Noisy'], [-4,14,-5,20])
plot_positions([x_true,x], ['True', 'KF recovery'], [-4,14,-5,20], 'rkf2.pdf')
print("optimal objective value: {}".format(obj.value))
"""
Explanation: Kalman filtering recovery
The code below solves the standard Kalman filtering problem using CVXPY. We plot and compare the true and recovered vehicle states. Note that the recovery is distorted by outliers in the measurements.
End of explanation
"""
%%time
import cvxpy as cp
x = cp.Variable(shape=(4, n+1))
w = cp.Variable(shape=(2, n))
v = cp.Variable(shape=(2, n))
tau = 2
rho = 2
obj = cp.sum_squares(w)
obj += cp.sum([tau*cp.huber(cp.norm(v[:,t]),rho) for t in range(n)])
obj = cp.Minimize(obj)
constr = []
for t in range(n):
    constr += [ x[:,t+1] == A @ x[:,t] + B @ w[:,t] ,
                y[:,t] == C @ x[:,t] + v[:,t] ]
cp.Problem(obj, constr).solve(verbose=True)
x = np.array(x.value)
w = np.array(w.value)
plot_state(ts,(x_true,w_true),(x,w))
plot_positions([x_true,y], ['True', 'Noisy'], [-4,14,-5,20])
plot_positions([x_true,x], ['True', 'Robust KF recovery'], [-4,14,-5,20],'rkf3.pdf')
print("optimal objective value: {}".format(obj.value))
"""
Explanation: Robust Kalman filtering recovery
Here we implement robust Kalman filtering with CVXPY. We get a better recovery than the standard Kalman filtering, which can be seen in the plots below.
End of explanation
"""
|
aleph314/K2 | Foundations/Python CS/Activity 12.ipynb | gpl-3.0 | class MyVector:
def __init__(self, x):
self.x = x
# Return length of vector
def size(self):
return len(self.x)
# This allows access by index, e.g. y[2]
def __getitem__(self, index):
return self.x[index]
# Return norm of vector
def norm(self):
squared_norm = 0
for el in self.x:
squared_norm += el*el
return np.sqrt(squared_norm)
# Return dot product of vector with another vector
def dot(self, other):
# Check if vector have the same length
if self.size() == other.size():
# Initialize product
prod = 0
# Add the product element by element to the total
for i in range(self.size()):
prod += self[i]*other[i]
else:
            raise ValueError('Vectors must be of the same length')
return prod
# Test cases
x = MyVector([3, 4, 0])
y = MyVector([1, 2, 6])
z = MyVector([1, 2])
print(x.size()) # 3
print(y.size()) # 3
print(x.norm()) # 5
print(y.norm()) # square root of 41
print(x.dot(y)) # 11
print(x.dot(z)) # Error
"""
Explanation: Exercise 12.1
Create a class to represent vectors of arbitrary length and which is initialised with a list of values, e.g.:
python
x = MyVector([0, 2, 4])
Equip the class with methods that:
Return the length of the vector
Compute the norm of the vector $\sqrt{x \cdot x}$
Compute the dot product of the vector with another vector
Test your implementation using two vectors of length 3. To help you get started, a skeleton of the class is provided below. Don't forget to use self where necessary.
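A useful sanity check for any implementation is the identity norm$(x) = \sqrt{x \cdot x}$. Here is a minimal plain-function sketch of the same operations, independent of the class skeleton below:

```python
import math

def dot(a, b):
    # element-wise product summed; lengths must match
    if len(a) != len(b):
        raise ValueError('Vectors must be of the same length')
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

x = [3, 4, 0]
assert math.isclose(norm(x), math.sqrt(dot(x, x)))
print(norm(x))  # 5.0
```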
End of explanation
"""
import datetime
year = datetime.date.today().year
print(year)
class Students:
# We have 6 attributes, CRSid is optional (default value set to None)
def __init__(self, surname, forename, birthYear, triposYear, college, CRSid = None):
self.surname = surname
self.forename = forename
self.birthYear = birthYear
self.triposYear = triposYear
self.college = college
self.CRSid = CRSid
def __repr__(self):
"Print the student in the format Surname: surname, Forename: forename, College: college"
return 'Surname: {}, Forename: {}, College: {}'.format(self.surname, self.forename, self.college)
def age(self):
"Return the age of the student subtracting the year of birth from the current year"
return datetime.date.today().year - self.birthYear
def __lt__(self, other):
"""Return true if a student's surname is before (in alphabetical order) another student's surname;
if the surnames are equal do the same with the forenames"""
if self.surname == other.surname:
return self.forename < other.forename
else:
return self.surname < other.surname
# Test cases
stud0 = Students('Bella' ,'Zio', 1987, 2011, 'Università degli Studi di Torino')
stud1 = Students('Nano' ,'Gongolo', 1985, 2008, 'Università degli Studi di Torino')
stud2 = Students('Nano' ,'Dotto', 1984, 2008, 'Università degli Studi di Torino')
stud3 = Students('Casto' ,'Immanuel', 1988, 2011, 'Università degli Studi di Torino')
stud4 = Students('Ciavarro' ,'Massimo', 1983, 2006, 'Università degli Studi di Torino')
print(stud0)
print(stud0.age()) # 30
print(stud0 < stud1) # True
print(stud1 < stud0) # False
print(stud2 < stud1) # True
students_list = [stud0, stud1, stud2, stud3, stud4]
print('List:')
print(students_list)
print('Sorted list:')
students_list.sort()
print(students_list)
"""
Explanation: Exercise 12.2
Create a class for holding a student record entry. It should have the following attributes:
Surname
Forename
Birth year
Tripos year
College
CRSid (optional field)
Equip your class with the method 'age' that returns the age of the student
Equip your class with the method '__repr__' so that using print on a student record displays it in the format
Surname: Bloggs, Forename: Andrea, College: Churchill
Equip your class with the method __lt__(self, other) so that a list of record entries can be sorted by
(surname, forename). Create a list of entries and test the sorting. Make sure you have two entries with the same
surname.
Hint: To get the current year:
End of explanation
"""
|
junhwanjang/DataSchool | Lecture/18. 문서 전처리/4) 문서 전처리.ipynb | mit | from sklearn.feature_extraction.text import CountVectorizer
corpus = [
'This is the first document.',
'This is the second second document.',
'And the third one.',
'Is this the first document?',
'The last document?',
]
vect = CountVectorizer()
vect.fit(corpus)
vect.vocabulary_
vect.transform(['This is the second document.']).toarray()
vect.transform(['Something completely new.']).toarray()
vect.transform(corpus).toarray()
"""
Explanation: Document Preprocessing
Every data analysis model takes fixed-dimensional numeric vectors as its independent variables, so analyzing documents also requires a step that extracts a numeric feature vector from each document. This process is called document preprocessing.
BOW (Bag of Words)
The most basic way to convert a document into a numeric vector is BOW (Bag of Words). In the BOW method, we build a fixed vocabulary $\{W_1, W_2, \ldots, W_m\}$ covering the whole document set $\{D_1, D_2, \ldots, D_n\}$, and for each individual document $D_i$ we mark whether each word in the vocabulary is contained in it.
$$ \text{ if word $W_j$ in document $D_i$ }, \;\; \rightarrow x_{ij} = 1 $$
Document preprocessing in Scikit-Learn
The feature_extraction.text subpackage of Scikit-Learn provides the following document preprocessing classes.
CountVectorizer:
counts the words in a document collection and builds a count matrix.
TfidfVectorizer:
counts the words in a document collection and builds a count matrix whose word weights are adjusted with the TF-IDF scheme.
HashingVectorizer:
uses the hashing trick to build a count matrix quickly.
End of explanation
"""
vect = CountVectorizer(stop_words=["and", "is", "the", "this"]).fit(corpus)
vect.vocabulary_
vect = CountVectorizer(stop_words="english").fit(corpus)
vect.vocabulary_
"""
Explanation: Processing options
CountVectorizer takes a variety of arguments. The important ones are:
stop_words : string {'english'}, list, or None (default)
the stop-word list; 'english' uses the built-in English stop words.
analyzer : string {'word', 'char', 'char_wb'} or callable
word n-grams, character n-grams, or character n-grams within word boundaries.
tokenizer : callable or None (default)
the token generation function.
token_pattern : string
a regular expression that defines a token.
ngram_range : (min_n, max_n) tuple
the range of n-gram sizes.
max_df : integer or float in [0.0, 1.0], default 1.0
the maximum document frequency for a word to be included in the vocabulary.
min_df : integer or float in [0.0, 1.0], default 1
the minimum document frequency for a word to be included in the vocabulary.
vocabulary : dict or list
the vocabulary itself.
Stop Words
Stop words are words that can be ignored when building the vocabulary from the documents — typically articles and conjunctions in English, or particles in Korean. They are controlled with the stop_words argument.
Stop Words 는 문서에서 단어장을 생성할 때 무시할 수 있는 단어를 말한다. 보통 영어의 관사나 접속사, 한국어의 조사 등이 여기에 해당한다. stop_words 인수로 조절할 수 있다.
End of explanation
"""
vect = CountVectorizer(analyzer="char").fit(corpus)
vect.vocabulary_
import nltk
nltk.download("punkt")
vect = CountVectorizer(tokenizer=nltk.word_tokenize).fit(corpus)
vect.vocabulary_
vect = CountVectorizer(token_pattern=r"t\w+").fit(corpus)
vect.vocabulary_
"""
Explanation: Tokens
A token is the unit that becomes a single word when building the vocabulary from documents. It can be controlled with the analyzer, tokenizer, and token_pattern arguments.
End of explanation
"""
vect = CountVectorizer(ngram_range=(2,2)).fit(corpus)
vect.vocabulary_
vect = CountVectorizer(ngram_range=(1,2), token_pattern=r"t\w+").fit(corpus)
vect.vocabulary_
"""
Explanation: n-grams
The n-gram setting determines the size of the token span used to build vocabulary entries. A 1-gram uses a single token as a word, while a 2-gram uses two consecutive tokens as one word.
End of explanation
"""
vect = CountVectorizer(max_df=4, min_df=2).fit(corpus)
vect.vocabulary_, vect.stop_words_
vect.transform(corpus).toarray().sum(axis=0)
"""
Explanation: Frequency
The max_df and min_df arguments restrict the vocabulary based on how often a token appears across the documents. Tokens whose frequency exceeds max_df or falls below min_df are ignored. Integer values are interpreted as counts, and floating-point values as proportions.
End of explanation
"""
from sklearn.feature_extraction.text import TfidfVectorizer
tfidv = TfidfVectorizer().fit(corpus)
tfidv.transform(corpus).toarray()
"""
Explanation: TF-IDF
TF-IDF (Term Frequency – Inverse Document Frequency) encoding does not use raw word counts; instead, words that appear in virtually all documents are taken to have little power to discriminate between documents, so their weights are reduced.
Concretely, for a document $d$ and a word $t$ it is computed as
$$ \text{tf-idf}(d, t) = \text{tf}(d, t) \cdot \text{idf}(d, t) $$
where
$\text{tf}(d, t)$: the frequency of the word
$\text{idf}(d, t)$ : the inverse document frequency
$$ \text{idf}(d, t) = \log \dfrac{n_d}{1 + \text{df}(t)} $$
$n_d$ : the total number of documents
$\text{df}(t)$: the number of documents that contain the word $t$
"""
from sklearn.datasets import fetch_20newsgroups
twenty = fetch_20newsgroups()
len(twenty.data)
%time CountVectorizer().fit(twenty.data).transform(twenty.data)
from sklearn.feature_extraction.text import HashingVectorizer
hv = HashingVectorizer(n_features=10)
%time hv.transform(twenty.data)
"""
Explanation: Hashing Trick
CountVectorizer does all of its work in memory, so as the amount of data grows it slows down or becomes infeasible to run. In that case,
HashingVectorizer can use the hashing trick to reduce both memory usage and running time.
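The core idea can be sketched in a few lines: instead of storing a vocabulary, each token is hashed directly to a column index. This toy illustration uses a deterministic CRC32 hash, not HashingVectorizer's actual hash function:

```python
import zlib

def hash_column(token, n_features=10):
    # map a token straight to one of n_features columns, no vocabulary needed
    return zlib.crc32(token.encode()) % n_features

tokens = "this is the first document".split()
cols = [hash_column(t) for t in tokens]
print(cols)  # the same token always lands in the same column
```

The trade-off is that different tokens can collide in the same column, which is usually acceptable when n_features is large.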
End of explanation
"""
corpus = ["imaging", "image", "imagination", "imagine", "buys", "buying", "bought"]
vect = CountVectorizer().fit(corpus)
vect.vocabulary_
from sklearn.datasets import fetch_20newsgroups
twenty = fetch_20newsgroups()
docs = twenty.data[:100]
vect = CountVectorizer(stop_words="english", token_pattern=r"wri\w+").fit(docs)
vect.vocabulary_
from nltk.stem import SnowballStemmer
class StemTokenizer(object):
def __init__(self):
self.s = SnowballStemmer('english')
        self.t = CountVectorizer(stop_words="english", token_pattern=r"wri\w+").build_tokenizer()
def __call__(self, doc):
return [self.s.stem(t) for t in self.t(doc)]
vect = CountVectorizer(tokenizer=StemTokenizer()).fit(docs)
vect.vocabulary_
"""
Explanation: Using a stemmer
End of explanation
"""
import urllib.request
import json
import string
from konlpy.utils import pprint
from konlpy.tag import Hannanum
hannanum = Hannanum()
url = "https://www.datascienceschool.net/download-notebook/708e711429a646818b9dcbb581e0c10a/"
with urllib.request.urlopen(url) as f:
    notebook = json.loads(f.read())  # keep the parsed notebook separate from the json module
cell = ["\n".join(c["source"]) for c in notebook["cells"] if c["cell_type"] == "markdown"]
docs = [w for w in hannanum.nouns(" ".join(cell)) if ((not w[0].isnumeric()) and (w[0] not in string.punctuation))]
vect = CountVectorizer().fit(docs)
count = vect.transform(docs).toarray().sum(axis=0)
plt.bar(range(len(count)), count)
plt.show()
pprint(zip(vect.get_feature_names(), count))
"""
Explanation: Example
End of explanation
"""
|
tjwei/HackNTU_Data_2017 | Week08/06-text_generation2.ipynb | mit | import os
os.environ['KERAS_BACKEND']='theano'
#os.environ['THEANO_FLAGS']="floatX=float64,device=cpu"
os.environ['THEANO_FLAGS']="floatX=float32,device=cuda"
from keras.models import Sequential
from keras.layers import Dense, Activation, Embedding
from keras.layers import LSTM
from keras.optimizers import RMSprop, Adam
import keras.backend as K
import numpy as np
import random
import sys
text = open("sdyxz_all.txt").read().lower().replace("\n\n", "\n").replace("\n\n", "\n")
print('corpus length:', len(text))
chars = sorted(list(set(text)))
print('total chars:', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
maxlen = 30
step = 1
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i + maxlen])
next_chars.append(text[i + maxlen])
print('nb sequences:', len(sentences))
"""
Explanation: ref: https://github.com/fchollet/keras/blob/master/examples/lstm_text_generation.py
End of explanation
"""
X = np.zeros((len(sentences), maxlen), dtype=np.int32)
y = np.zeros((len(sentences),), dtype=np.int32)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
X[i, t] = char_indices[char]
y[i] = char_indices[next_chars[i]]
"""
Explanation: Vectorization
End of explanation
"""
def sparse_top_k_categorical_accuracy(y_true, y_pred, k=5):
return K.mean(K.in_top_k(y_pred, K.max(y_true, axis=-1), k))
model = Sequential()
model.add(Embedding(len(chars), 128))
model.add(LSTM(512))  # input shape is inferred from the Embedding layer's output
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
optimizer = RMSprop(lr=0.001)
model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer,
metrics=['accuracy', sparse_top_k_categorical_accuracy])
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
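To see what the diversity (temperature) parameter does, here is a standalone sketch of the same reweighting transform used inside sample: low temperatures sharpen the distribution toward the most likely character, high temperatures flatten it toward uniform.

```python
import numpy as np

def reweight(preds, temperature=1.0):
    # same transform as in sample(), returned as a distribution
    preds = np.asarray(preds, dtype='float64')
    preds = np.exp(np.log(preds) / temperature)
    return preds / preds.sum()

p = [0.5, 0.3, 0.2]
print(reweight(p, 0.2))  # sharply peaked on the most likely character
print(reweight(p, 1.2))  # closer to uniform
```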
for iteration in range(1, 60):
print()
print('-' * 50)
print('Iteration', iteration)
model.fit(X, y,
batch_size=128,
epochs=1)
start_index = random.randint(0, len(text) - maxlen - 1)
for diversity in [0.2, 0.5, 1.0, 1.2]:
print()
print('----- diversity:', diversity)
generated = ''
sentence = text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
for i in range(400):
x = np.zeros((1, maxlen))
for t, char in enumerate(sentence):
x[0, t] = char_indices[char]
preds = model.predict(x, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
print(generated)
"""
Explanation: The model
End of explanation
"""
|
hetaodie/hetaodie.github.io | assets/media/uda-ml/fjd/ica/独立成分分析/Independent Component Analysis Lab-zh.ipynb | mit | import numpy as np
import wave
# Read the wave file
mix_1_wave = wave.open('ICA_mix_1.wav','r')
"""
Explanation: Independent Component Analysis Lab
In this notebook we will use independent component analysis to extract the original signals from three observations, each of which contains a different mix of the original signals. This is the same problem explained in the ICA video.
Dataset
Let's first look at the dataset at hand. We have three WAVE files, and as mentioned before, each one is a mix. If you haven't worked with audio files in python before, don't worry — they are really just lists of floating-point numbers.
Start by loading the first audio file, ICA_mix_1.wav [click to listen to the file]:
End of explanation
"""
mix_1_wave.getparams()
"""
Explanation: Let's look at the parameters of this wave file to learn more about it
End of explanation
"""
264515/44100
"""
Explanation: The file has a single channel (so it is mono). The frame rate is 44100, which means each second of sound consists of 44100 integers (integers because the file is in the common PCM 16-bit format). The file has 264515 integers/frames in total, so its length is:
End of explanation
"""
# Extract Raw Audio from Wav File
signal_1_raw = mix_1_wave.readframes(-1)
signal_1 = np.frombuffer(signal_1_raw, dtype=np.int16)
"""
Explanation: Let's extract the frames from the wave file; they will form the dataset we run ICA on:
End of explanation
"""
'length: ', len(signal_1) , 'first 100 elements: ',signal_1[:100]
"""
Explanation: signal_1 is now a list of integers representing the sound contained in the first file.
End of explanation
"""
import matplotlib.pyplot as plt
fs = mix_1_wave.getframerate()
timing = np.linspace(0, len(signal_1)/fs, num=len(signal_1))
plt.figure(figsize=(12,2))
plt.title('Recording 1')
plt.plot(timing,signal_1, c="#3ABFE7")
plt.ylim(-35000, 35000)
plt.show()
"""
Explanation: If we plot this array as a line graph, we get the familiar waveform:
End of explanation
"""
mix_2_wave = wave.open('ICA_mix_2.wav','r')
#Extract Raw Audio from Wav File
signal_raw_2 = mix_2_wave.readframes(-1)
signal_2 = np.frombuffer(signal_raw_2, dtype=np.int16)
mix_3_wave = wave.open('ICA_mix_3.wav','r')
#Extract Raw Audio from Wav File
signal_raw_3 = mix_3_wave.readframes(-1)
signal_3 = np.frombuffer(signal_raw_3, dtype=np.int16)
plt.figure(figsize=(12,2))
plt.title('Recording 2')
plt.plot(timing,signal_2, c="#3ABFE7")
plt.ylim(-35000, 35000)
plt.show()
plt.figure(figsize=(12,2))
plt.title('Recording 3')
plt.plot(timing,signal_3, c="#3ABFE7")
plt.ylim(-35000, 35000)
plt.show()
"""
Explanation: Now we can load the other two wave files, ICA_mix_2.wav and ICA_mix_3.wav, in the same way
End of explanation
"""
X = list(zip(signal_1, signal_2, signal_3))
# Let's peak at what X looks like
X[:10]
"""
Explanation: Now that all three files have been read, we can create the dataset with a zip operation.
Create dataset X by combining signal_1, signal_2, and signal_3 into a single list
End of explanation
"""
# TODO: Import FastICA
from sklearn.decomposition import FastICA
# TODO: Initialize FastICA with n_components=3
ica = FastICA(n_components=3)
# TODO: Run the FastICA algorithm using fit_transform on dataset X
ica_result = ica.fit_transform(X)
ica_result.shape
"""
Explanation: Now we are ready to run ICA to try to retrieve the original signals.
Import sklearn's FastICA module
Initialize FastICA, looking for three components
Run the FastICA algorithm on dataset X using fit_transform
End of explanation
"""
result_signal_1 = ica_result[:,0]
result_signal_2 = ica_result[:,1]
result_signal_3 = ica_result[:,2]
"""
Explanation: Let's split the result into separate signals and look at them
End of explanation
"""
# Plot Independent Component #1
plt.figure(figsize=(12,2))
plt.title('Independent Component #1')
plt.plot(result_signal_1, c="#df8efd")
plt.ylim(-0.010, 0.010)
plt.show()
# Plot Independent Component #2
plt.figure(figsize=(12,2))
plt.title('Independent Component #2')
plt.plot(result_signal_2, c="#87de72")
plt.ylim(-0.010, 0.010)
plt.show()
# Plot Independent Component #3
plt.figure(figsize=(12,2))
plt.title('Independent Component #3')
plt.plot(result_signal_3, c="#f65e97")
plt.ylim(-0.010, 0.010)
plt.show()
"""
Explanation: Let's plot the signals to look at the shapes of the waveforms
End of explanation
"""
from scipy.io import wavfile
# Convert to int, map the appropriate range, and increase the volume a little bit
result_signal_1_int = np.int16(result_signal_1*32767*100)
result_signal_2_int = np.int16(result_signal_2*32767*100)
result_signal_3_int = np.int16(result_signal_3*32767*100)
# Write wave files
wavfile.write("result_signal_1.wav", fs, result_signal_1_int)
wavfile.write("result_signal_2.wav", fs, result_signal_2_int)
wavfile.write("result_signal_3.wav", fs, result_signal_3_int)
"""
Explanation: Do some of these waveforms look like musical ones?
The best way to confirm the result is to listen to the resulting files. Let's save them as wave files and verify. Before doing that, we need to:
* convert them to integers (so they can be saved as PCM 16-bit wave files), otherwise only some media players will be able to play them
* map the values to the appropriate range for int16 audio, which is -32768 to +32767; a basic way to do the mapping is to multiply by 32767
* the volume is a bit low, so we can multiply by some value (e.g. 100) to boost it
End of explanation
"""
|
ledeprogram/algorithms | class7/donow/radhikapc_Class7_DoNow.ipynb | gpl-3.0 | import pandas as pd
%matplotlib inline
import numpy as np
from sklearn.linear_model import LogisticRegression
"""
Explanation: Apply logistic regression to categorize whether a county had high mortality rate due to contamination
1. Import the necessary packages to read in the data, plot, and create a logistic regression model
End of explanation
"""
df = pd.read_csv("hanford.csv")
df.head()
"""
Explanation: 2. Read in the hanford.csv file in the data/ folder
End of explanation
"""
df.describe()
df.median()
rang= df['Mortality'].max() - df['Mortality'].min()
rang
iqr_m = df['Mortality'].quantile(q=0.75)- df['Mortality'].quantile(q=0.25)
iqr_m
iqr_e = df['Exposure'].quantile(q=0.75)- df['Exposure'].quantile(q=0.25)
iqr_e
UAL_m= (iqr_m*1.5) + df['Mortality'].quantile(q=0.75)
UAL_m
UAL_e= (iqr_e*1.5) + df['Exposure'].quantile(q=0.75)
UAL_e
LAL_m= df['Mortality'].quantile(q=0.25) - (iqr_m*1.5)
LAL_m
LAL_e= df['Exposure'].quantile(q=0.25) - (iqr_e*1.5)
LAL_e
len(df[df['Mortality']> UAL_m])
len(df[df['Exposure']> UAL_e])
len(df[df['Mortality']< LAL_m])
"""
Explanation: <img src="../../images/hanford_variables.png"></img>
3. Calculate the basic descriptive statistics on the data
End of explanation
"""
lm = LogisticRegression()
data = np.asarray(df[['Mortality','Exposure']])
x = data[:,1:]
# Recode mortality into a binary label (1 = high, 0 = not high), using the median as the threshold
y = (data[:,0] > df['Mortality'].median()).astype(int)
data
x
y
lm.fit(x,y)
lm.coef_
lm.score(x,y)
slope = lm.coef_[0]
intercept = lm.intercept_
"""
Explanation: 4. Find a reasonable threshold to say exposure is high and recode the data
5. Create a logistic regression model
End of explanation
"""
lm.predict([[50]])
"""
Explanation: 6. Predict whether the mortality rate (Cancer per 100,000 man years) will be high at an exposure level of 50
End of explanation
"""
|
bp-kelley/rdkit | Docs/Notebooks/RGroupDecomposition-example-lactam.ipynb | bsd-3-clause | from rdkit import Chem
from rdkit.Chem.Draw import IPythonConsole
IPythonConsole.ipython_useSVG=True
from rdkit.Chem.rdRGroupDecomposition import RGroupDecomposition
import pandas as pd
from rdkit.Chem import PandasTools
from IPython.display import HTML
from rdkit import rdBase
rdBase.DisableLog("rdApp.debug")
core = Chem.MolFromSmiles('O=C1C([*])C2N1C(C(O)=O)=C([*])CS2')
core
"""
Explanation: Example lactam analysis using a single core on a very large dataset.
End of explanation
"""
rg = RGroupDecomposition(core)
mols = []
count = 0
for line in open("compounds.txt"):
sm = line.split()[-1]
m = Chem.MolFromSmiles(sm)
if m:
count += 1
idx = rg.Add(m)
rg.Process()
print ("Added %s to RGroup Decomposition out of %s"%(idx, count))
"""
Explanation: To use RGroupDecomposition:
construct the class on the core rg = RGroupDecomposition(core)
Call rg.Add( mol ) on the molecules. If this returns -1, the molecule is not
compatible with the core
After all molecules are added, call rg.Process() to complete the rgroup
decomposition.
End of explanation
"""
from rdkit import RDLogger
RDLogger.DisableLog("rdApp.*")
"""
Explanation: It is useful to disable logging here. When making RGroup renderings there
are a lot of sanitization warnings.
End of explanation
"""
frame = pd.DataFrame(rg.GetRGroupsAsColumns())
PandasTools.ChangeMoleculeRendering(frame)
"""
Explanation: The RGroupDecomposition code is quite compatible with the python pandas integration.
Calling rg.GetRGroupsAsColumns() can be sent directly into a pandas table.
n.b. You need to call PandasTools.ChangeMoleculeRendering(frame) to allow the molecules
to be rendered properly.
End of explanation
"""
f2 = pd.DataFrame(frame.head())
PandasTools.ChangeMoleculeRendering(f2)
HTML(f2.to_html())
"""
Explanation: Just show the first few (for speed and to keep the notebook small)
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/00ac060e49528fd74fda09b97366af98/3d_to_2d.ipynb | bsd-3-clause | # Authors: Christopher Holdgraf <choldgraf@berkeley.edu>
# Alex Rockhill <aprockhill@mailbox.org>
#
# License: BSD-3-Clause
from mne.io.fiff.raw import read_raw_fif
import numpy as np
from matplotlib import pyplot as plt
from os import path as op
import mne
from mne.viz import ClickableImage # noqa: F401
from mne.viz import (plot_alignment, snapshot_brain_montage, set_3d_view)
misc_path = mne.datasets.misc.data_path()
ecog_data_fname = op.join(misc_path, 'ecog', 'sample_ecog_ieeg.fif')
subjects_dir = op.join(misc_path, 'ecog')
# We've already clicked and exported
layout_path = op.join(op.dirname(mne.__file__), 'data', 'image')
layout_name = 'custom_layout.lout'
"""
Explanation: How to convert 3D electrode positions to a 2D image.
Sometimes we want to convert a 3D representation of electrodes into a 2D
image. For example, if we are using electrocorticography it is common to
create scatterplots on top of a brain, with each point representing an
electrode.
In this example, we'll show two ways of doing this in MNE-Python. First,
if we have the 3D locations of each electrode then we can use Mayavi to
take a snapshot of a view of the brain. If we do not have these 3D locations,
and only have a 2D image of the electrodes on the brain, we can use the
:class:mne.viz.ClickableImage class to choose our own electrode positions
on the image.
End of explanation
"""
raw = read_raw_fif(ecog_data_fname)
raw.pick_channels([f'G{i}' for i in range(1, 257)]) # pick just one grid
# Since we loaded in the ecog data from FIF, the coordinates
# are in 'head' space, but we actually want them in 'mri' space.
# So we will apply the head->mri transform that was used when
# generating the dataset (the estimated head->mri transform).
montage = raw.get_montage()
trans = mne.coreg.estimate_head_mri_t('sample_ecog', subjects_dir)
montage.apply_trans(trans)
"""
Explanation: Load data
First we will load a sample ECoG dataset which we'll use for generating
a 2D snapshot.
End of explanation
"""
fig = plot_alignment(raw.info, trans=trans, subject='sample_ecog',
subjects_dir=subjects_dir, surfaces=dict(pial=0.9))
set_3d_view(figure=fig, azimuth=20, elevation=80)
xy, im = snapshot_brain_montage(fig, raw.info)
# Convert from a dictionary to array to plot
xy_pts = np.vstack([xy[ch] for ch in raw.ch_names])
# Compute beta power to visualize
raw.load_data()
beta_power = raw.filter(20, 30).apply_hilbert(envelope=True).get_data()
beta_power = beta_power.max(axis=1) # take maximum over time
# This allows us to use matplotlib to create arbitrary 2d scatterplots
fig2, ax = plt.subplots(figsize=(10, 10))
ax.imshow(im)
cmap = ax.scatter(*xy_pts.T, c=beta_power, s=100, cmap='coolwarm')
cbar = fig2.colorbar(cmap)
cbar.ax.set_ylabel('Beta Power')
ax.set_axis_off()
# fig2.savefig('./brain.png', bbox_inches='tight') # For ClickableImage
"""
Explanation: Project 3D electrodes to a 2D snapshot
Because we have the 3D location of each electrode, we can use the
:func:mne.viz.snapshot_brain_montage function to return a 2D image along
with the electrode positions on that image. We use this in conjunction with
:func:mne.viz.plot_alignment, which visualizes electrode positions.
End of explanation
"""
# This code opens the image so you can click on it. Commented out
# because we've stored the clicks as a layout file already.
# # The click coordinates are stored as a list of tuples
# im = plt.imread('./brain.png')
# click = ClickableImage(im)
# click.plot_clicks()
# # Generate a layout from our clicks and normalize by the image
# print('Generating and saving layout...')
# lt = click.to_layout()
# lt.save(op.join(layout_path, layout_name)) # save if we want
# # We've already got the layout, load it
lt = mne.channels.read_layout(layout_name, path=layout_path, scale=False)
x = lt.pos[:, 0] * float(im.shape[1])
y = (1 - lt.pos[:, 1]) * float(im.shape[0]) # Flip the y-position
fig, ax = plt.subplots()
ax.imshow(im)
ax.scatter(x, y, s=80, color='r')
fig.tight_layout()
ax.set_axis_off()
"""
Explanation: Manually creating 2D electrode positions
If we don't have the 3D electrode positions then we can still create a
2D representation of the electrodes. Assuming that you can see the electrodes
on the 2D image, we can use :class:mne.viz.ClickableImage to open the image
interactively. You can click points on the image and the x/y coordinate will
be stored.
We'll open an image file, then use ClickableImage to
return 2D locations of mouse clicks (or load a file already created).
Then, we'll return these xy positions as a layout for use with plotting topo
maps.
End of explanation
"""
|
janten/lcfam | lcfam.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
from skimage import io, color
"""
Explanation: Lightness Correction for Color-Coded FA Maps
To install the necessary prerequisites for this tool:
pip install ipython[all]
pip install scikit-image
pip install seaborn
Import the required packages
End of explanation
"""
fam = io.imread("fam.png")
fam_norm = fam.astype(float) / 255
cfam = io.imread("cfam.png")[:, :, 0:3]
cfam_lab = color.rgb2lab(cfam)
cfam_lightness = cfam_lab[:, :, 0].astype(float) / 100
"""
Explanation: Read the FAM and CFAM from PNG images, remove alpha channel.
End of explanation
"""
lcfam_lab = cfam_lab
lcfam_lightness = fam_norm * 100
lcfam_lab[:, :, 0] = lcfam_lightness
lcfam = color.lab2rgb(lcfam_lab)
"""
Explanation: Inject the FA values from the FAM into the L* channel of the CIE L*a*b* representation of the CFAM to get the LCFAM.
End of explanation
"""
warnings.simplefilter("ignore")
io.imsave("lcfam.png", lcfam)
"""
Explanation: Save the resulting lightness corrected FA map to lcfam.png. Note that some loss of precision is to be expected here since all conversions were done on 64 bit floating point numbers but PNG only stores 8 bits of information in each color channel.
End of explanation
"""
lcfam_lightness = color.rgb2lab(io.imread("lcfam.png")[:, :, 0:3])[:, :, 0]
"""
Explanation: Load the LCFAM's L* channel from the resulting PNG file to visualise the difference between LCFAM L* and then input FAM. This will be slightly different from the original lcfam_lightness computed above due to the precision loss in saving mentioned above.
End of explanation
"""
fig, axes = plt.subplots(3, 2, figsize=(20, 20))
axes[0, 0].imshow(cfam)
axes[0, 0].axis("off")
axes[0, 0].set_title("FreeSurfer FA Map (CFAM)", fontsize=18)
axes[0, 1].imshow(lcfam)
axes[0, 1].axis("off")
axes[0, 1].set_title("Lightness Corrected FA Map (LCFAM)", fontsize=18)
axes[1, 0].imshow(cfam_lightness, cmap="gray")
axes[1, 0].axis("off")
axes[1, 0].set_title("Lightness of CFAM (CFAM L*)", fontsize=18)
axes[1, 1].imshow(lcfam_lightness, cmap="gray")
axes[1, 1].axis("off")
axes[1, 1].set_title("Lightness of LCFAM (LCFAM L*)", fontsize=18)
axes[2, 0].imshow(fam_norm - cfam_lightness, cmap="RdBu_r")
axes[2, 0].axis("off")
axes[2, 0].set_title("Difference Between CFAM L* and FAM", fontsize=18)
axes[2, 1].imshow(fam_norm, cmap="gray")
axes[2, 1].axis("off")
axes[2, 1].set_title("Plain FA Map (FAM)", fontsize=18)
fig.savefig("figure.pdf")
"""
Explanation: Visualize the results in a single figure environment
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | blogs/housing_prices/cloud-ml-housing-prices-hp-tuning.ipynb | apache-2.0 | %%bash
mkdir trainer
touch trainer/__init__.py
%%writefile trainer/task.py
import argparse
import pandas as pd
import tensorflow as tf
import os #NEW
import json #NEW
from tensorflow.contrib.learn.python.learn import learn_runner
from tensorflow.contrib.learn.python.learn.utils import saved_model_export_utils
print(tf.__version__)
tf.logging.set_verbosity(tf.logging.ERROR)
data_train = pd.read_csv(
filepath_or_buffer='https://storage.googleapis.com/spls/gsp418/housing_train.csv',
names=["CRIM","ZN","INDUS","CHAS","NOX","RM","AGE","DIS","RAD","TAX","PTRATIO","MEDV"])
data_test = pd.read_csv(
filepath_or_buffer='https://storage.googleapis.com/spls/gsp418/housing_test.csv',
names=["CRIM","ZN","INDUS","CHAS","NOX","RM","AGE","DIS","RAD","TAX","PTRATIO","MEDV"])
FEATURES = ["CRIM", "ZN", "INDUS", "NOX", "RM",
"AGE", "DIS", "TAX", "PTRATIO"]
LABEL = "MEDV"
feature_cols = [tf.feature_column.numeric_column(k)
for k in FEATURES] #list of Feature Columns
def generate_estimator(output_dir):
return tf.estimator.DNNRegressor(feature_columns=feature_cols,
hidden_units=[args.hidden_units_1, args.hidden_units_2], #NEW (use command line parameters for hidden units)
model_dir=output_dir)
def generate_input_fn(data_set):
def input_fn():
features = {k: tf.constant(data_set[k].values) for k in FEATURES}
labels = tf.constant(data_set[LABEL].values)
return features, labels
return input_fn
def serving_input_fn():
#feature_placeholders are what the caller of the predict() method will have to provide
feature_placeholders = {
column.name: tf.placeholder(column.dtype, [None])
for column in feature_cols
}
#features are what we actually pass to the estimator
features = {
# Inputs are rank 1 so that we can provide scalars to the server
# but Estimator expects rank 2, so we expand dimension
key: tf.expand_dims(tensor, -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(
features, feature_placeholders
)
train_spec = tf.estimator.TrainSpec(
input_fn=generate_input_fn(data_train),
max_steps=3000)
exporter = tf.estimator.LatestExporter('Servo', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn=generate_input_fn(data_test),
steps=1,
exporters=exporter)
######START CLOUD ML ENGINE BOILERPLATE######
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# Input Arguments
parser.add_argument(
'--output_dir',
help='GCS location to write checkpoints and export models',
required=True
)
parser.add_argument(
'--job-dir',
help='this model ignores this field, but it is required by gcloud',
default='junk'
)
parser.add_argument(
'--hidden_units_1', #NEW (expose hyperparameter to command line)
help='number of neurons in first hidden layer',
type = int,
default=10
)
parser.add_argument(
'--hidden_units_2', #NEW (expose hyperparameter to command line)
help='number of neurons in second hidden layer',
type = int,
default=10
)
args = parser.parse_args()
arguments = args.__dict__
output_dir = arguments.pop('output_dir')
output_dir = os.path.join(#NEW (give each trial its own output_dir)
output_dir,
json.loads(
os.environ.get('TF_CONFIG', '{}')
).get('task', {}).get('trial', '')
)
######END CLOUD ML ENGINE BOILERPLATE######
#initiate training job
tf.estimator.train_and_evaluate(generate_estimator(output_dir), train_spec, eval_spec)
"""
Explanation: Automatic Hyperparameter tuning
This notebook will show you how to extend the code in the cloud-ml-housing-prices notebook to take advantage of Cloud ML Engine's automatic hyperparameter tuning.
We will use it to determine the ideal number of hidden units to use in our neural network.
Cloud ML Engine uses bayesian optimization to find the hyperparameter settings for you. You can read the details of how it works here.
1) Modify Tensorflow Code
We need to make code changes to:
1. Expose any hyperparameter we wish to tune as a command line argument (this is how CMLE passes new values)
2. Modify the output_dir so each hyperparameter 'trial' gets written to a unique directory
These changes are illustrated below. Any change from the original code has a #NEW comment next to it for easy reference
End of explanation
"""
%%writefile config.yaml
trainingInput:
hyperparameters:
goal: MINIMIZE
hyperparameterMetricTag: average_loss
maxTrials: 5
maxParallelTrials: 1
params:
- parameterName: hidden_units_1
type: INTEGER
minValue: 1
maxValue: 100
scaleType: UNIT_LOG_SCALE
- parameterName: hidden_units_2
type: INTEGER
minValue: 1
maxValue: 100
scaleType: UNIT_LOG_SCALE
"""
Explanation: 2) Define Hyperparameter Configuration File
Here you specify:
Which hyperparameters to tune
The min and max range to search between
The metric to optimize
The number of trials to run
End of explanation
"""
GCS_BUCKET = 'gs://vijays-sandbox-ml' #CHANGE THIS TO YOUR BUCKET
PROJECT = 'vijays-sandbox' #CHANGE THIS TO YOUR PROJECT ID
REGION = 'us-central1' #OPTIONALLY CHANGE THIS
import os
os.environ['GCS_BUCKET'] = GCS_BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
"""
Explanation: 3) Train
End of explanation
"""
%%bash
gcloud ml-engine local train \
--module-name=trainer.task \
--package-path=trainer \
-- \
--output_dir='./output'
"""
Explanation: Run local
It's a best practice to first run locally to check for errors. Note you can ignore the warnings in this case, as long as there are no errors.
End of explanation
"""
%%bash
gcloud config set project $PROJECT
%%bash
JOBNAME=housing_$(date -u +%y%m%d_%H%M%S)
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=./trainer \
--job-dir=$GCS_BUCKET/$JOBNAME/ \
--runtime-version 1.4 \
--config config.yaml \
-- \
--output_dir=$GCS_BUCKET/$JOBNAME/output
"""
Explanation: Run on cloud (1 cloud ML unit)
End of explanation
"""
|
danlamanna/scratch | notebooks/experimental/Geoserver.ipynb | apache-2.0 | %matplotlib inline
from matplotlib import pylab as plt
"""
Explanation: Using Geoserver to load data on the map
In this notebook we'll take a look at using Geoserver to render raster data to the map. Geoserver is an open source server for sharing geospatial data. It includes a tiling server which the GeoJS map uses to render data efficiently for visualization. Geonotebook comes with a Vagrant virtual machine for hosting a local instance of Geoserver, which can be used for testing Geonotebook. To use it, install Vagrant with your system package manager, then, in a checked-out copy of the source code, go to the devops/geoserver/ folder and run vagrant up
End of explanation
"""
!cd ../devops/geoserver && vagrant status
"""
Explanation: Make sure you have the geoserver VM running
The following cell will check whether or not your have a running instance of the geoserver virtual machine available. The following cell should show text to the effect of:
```
Current machine states:
geoserver running (virtualbox)
The VM is running. To stop this VM, you can run vagrant halt to
shut it down forcefully, or you can run vagrant suspend to simply
suspend the virtual machine. In either case, to restart it again,
simply run vagrant up.
```
If it does not show the geoserver machine in a state of running You can load the machine by going to ../devops/geoserver/ and running vagrant up
End of explanation
"""
from IPython.core.display import display, HTML
from geonotebook.config import Config
geoserver = Config().vis_server
display(HTML(geoserver.c.get("/about/status").text))
"""
Explanation: Display geoserver status
This should ensure the client can successfully connect to your VM; if you do not see the Geoserver 'Status' page then something is wrong and the rest of the notebook may not function correctly.
End of explanation
"""
!curl -o /tmp/L57.Globe.month09.2010.hh09vv04.h6v1.doy247to273.NBAR.v3.0.tiff http://golden-tile-geotiffs.s3.amazonaws.com/L57.Globe.month09.2010.hh09vv04.h6v1.doy247to273.NBAR.v3.0.tiff
"""
Explanation: Get the data from S3
Next get some sample data from S3. This GeoTiff represents NBAR data for September 2010 covering a section of Washington state's Glacier National Park. It is approximately 200Mb and may take some time to download from Amazon's S3.
The tiff itself has been slightly transformed from its original HDF dataset. In particular it only has 4 bands (R,G,B & NDVI) and includes some geotiff tags with band statistics.
End of explanation
"""
# Set the center of the map to the location the data
M.set_center(-120.32, 47.84, 7)
from geonotebook.wrappers import RasterData
rd = RasterData('data/L57.Globe.month09.2010.hh09vv04.h6v1.doy247to273.NBAR.v3.0.tiff')
rd
"""
Explanation: Adding an RGB layer to the map
Here we add our first data layer to the map. To do this we use a RasterData object imported from the geonotebook.wrappers package. By default RasterData objects read tiffs using the rasterio library. RasterData objects are designed to provide a consistent API to raster data across a number of different readers and systems. We will use the add_layer function to add the RasterData object to the map.
End of explanation
"""
M.add_layer(rd[1, 2, 3], opacity=1.0)
M.layers.annotation.points[0].data.next()
from geonotebook.vis.ktile.utils import get_layer_vrt
print get_layer_vrt(M.layers[0])
"""
Explanation: To add the layer we call M.add_layer passing in a subset of the raster data set's bands. In this case we index into rd with the list [1, 2, 3]. This actually returns a new RasterData object with only three bands available (in this case bands 1, 2 and 3 correspond to Red, Green and Blue). When adding layers you can only add a layer with either 3 bands (R,G,B) or one band (we'll see a one band example in a moment).
End of explanation
"""
M.layers
"""
Explanation: This should have added an RGB dataset to the map for visualization. You can also see what layers are available via the M.layers attribute.
End of explanation
"""
print("Color Min Max")
print("Red: {}, {}".format(rd[1].min, rd[1].max))
print("Green: {}, {}".format(rd[2].min, rd[2].max))
print("Blue: {}, {}".format(rd[3].min, rd[3].max))
"""
Explanation: The dataset may appear alarmingly dark. This is because the data itself is not well formated. We can see this by looking at band min and max values:
End of explanation
"""
M.remove_layer(M.layers[0])
"""
Explanation: R,G,B values should be between 0 and 1. We can remedy this by changing some of the styling options that are available on the layers, including setting an interval for scaling our data and setting a gamma to brighten the image.
First we'll demonstrate removing the layer:
End of explanation
"""
M.add_layer(rd[1, 2, 3], interval=(0,1))
"""
Explanation: Then we can re-add the layer with a color interval of 0 to 1.
End of explanation
"""
M.add_layer(rd[1, 2, 3], interval=(0,1), gamma=0.5)
"""
Explanation: We can also brighten this up by changing the gamma.
Note: We don't have to remove the layer before updating its options. Calling M.add_layer(...) with the same rd object will simply replace any existing layer with the same name. By default the layer's name is inferred from the filename.
End of explanation
"""
M.add_layer(rd[1, 2, 3], interval=(0,1), gamma=0.5, opacity=0.75)
# Remove the layer before moving on to the next section
M.remove_layer(M.layers[0])
"""
Explanation: Finally, let's add a little opacity to the layer so we can see some of the underlying base map features.
End of explanation
"""
M.add_layer(rd[4])
"""
Explanation: Adding a single band Layer
Adding a single band layer uses the same M.add_layer(...) interface. Keep in mind that several of the styling options are slightly different. By default single band rasters are rendered with a default mapping of colors to band values.
End of explanation
"""
cmap = plt.get_cmap('winter', 10)
M.add_layer(rd[4], colormap=cmap, opacity=0.8)
"""
Explanation: You may find this colormap a little aggressive, in which case you can replace the colormap with any of the built in matplotlib colormaps:
End of explanation
"""
from matplotlib.colors import LinearSegmentedColormap
# Divergent Blue to Beige to Green colormap
cmap =LinearSegmentedColormap.from_list(
'ndvi', ['blue', 'beige', 'green'], 20)
# Add layer with custom colormap
M.add_layer(rd[4], colormap=cmap, opacity=0.8, min=-1.0, max=1.0)
"""
Explanation: Including custom color maps as in this example. Here we create a linear segmented colormap that transitions from Blue to Beige to Green. When mapped to our NDVI band data -1 will appear blue, 0 will appear beige and 1 will appear green.
End of explanation
"""
M.set_center(-119.25618502500376, 47.349300631765104, 11)
"""
Explanation: What can I do with this data?
We will address the use of annotations for analysis and data comparison in a separate notebook. For now, let's focus on a small agricultural area north of I-90:
End of explanation
"""
layer, data = next(M.layers.annotation.rectangles[0].data)
data
"""
Explanation: Go ahead and start a rectangular annotation (Second button to the right of the 'CellToolbar' button - with the square icon).
Please annotate a small region of the fields.
We can access this data from the annotation's data attribute. We'll cover exactly what is going on here in another notebook.
End of explanation
"""
import numpy as np
fig, ax = plt.subplots(figsize=(16, 16))
ax.imshow(data, interpolation='none', cmap=cmap, clim=(-1.0, 1.0))
"""
Explanation: As a sanity check we can prove the data is the region we've selected by plotting the data with matplotlib's imshow function:
Note: The scale of the matplotlib image may seem slightly different from the rectangle you've selected on the map. This is because the map is displaying in Web Mercator projection (EPSG:3857) while imshow is simply displaying the raw data, selected out of the geotiff (you can think of it as being in a 'row', 'column' projection).
End of explanation
"""
# Adapted from the scikit-image segmentation tutorial
# See: http://scikit-image.org/docs/dev/user_guide/tutorial_segmentation.html
import numpy as np
from skimage import measure
from skimage.filters import sobel
from skimage.morphology import watershed
from scipy import ndimage as ndi
THRESHOLD = 20
WATER_MIN = 0.2
WATER_MAX = 0.6
fig, ax = plt.subplots(figsize=(16, 16))
edges = sobel(data)
markers = np.zeros_like(data)
markers[data > WATER_MIN] = 2
markers[data > WATER_MAX] = 1
mask = (watershed(edges, markers) - 1).astype(bool)
seg = np.zeros_like(mask, dtype=int)
seg[~mask] = 1
# Fill holes
seg = ndi.binary_fill_holes(seg)
# Ignore entities smaller than THRESHOLD
label_objects, _ = ndi.label(seg)
sizes = np.bincount(label_objects.ravel())
mask_sizes = sizes > THRESHOLD
mask_sizes[0] = 0
clean_segs = mask_sizes[label_objects]
# Find contours of the segmented data
contours = measure.find_contours(clean_segs, 0)
ax.imshow(data, interpolation='none', cmap=cmap, clim=(-1.0, 1.0))
ax.axis('tight')
for n, contour in enumerate(contours):
ax.plot(contour[:, 1], contour[:, 0], linewidth=4)
"""
Explanation: NDVI Segmentation analysis
Once we have this data we can run arbitrary analyses on it. In the next cell we use a Sobel filter and a watershed transformation to generate a binary mask of the data. We then use an implementation of marching squares (skimage's find_contours) to vectorize the data, effectively segmenting green areas (e.g. fields) from surrounding areas.
This next cell requires both scipy and scikit-image. Check your operating system documentation for how best to install these packages.
End of explanation
"""
|
leewujung/ooi_sonar | notebooks/dB-diff_20150817-20151017.ipynb | apache-2.0 | import os, sys, glob, re
import datetime as dt
import numpy as np
from matplotlib.dates import date2num,num2date
import h5py
sys.path.insert(0,'..')
sys.path.insert(0,'../mi_instrument/')
import db_diff
import decomp_plot
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
%matplotlib inline
"""
Explanation: Conventional "dB-differencing" analysis
End of explanation
"""
# Set param
ping_time_param_names = ["hour_all","min_all","sec_all"]
ping_time_param_vals = (range(24),range(20),range(0,60,5))
ping_time_param = dict(zip(ping_time_param_names,ping_time_param_vals))
ping_per_day = len(ping_time_param['hour_all'])*len(ping_time_param['min_all'])*len(ping_time_param['sec_all'])
ping_bin_range = 40
depth_bin_range = 10
tvg_correction_factor = 2
ping_per_day_mvbs = ping_per_day/ping_bin_range
MVBS_path = '/media/wu-jung/wjlee_apl_2/ooi_zplsc_new/'
MVBS_fname = '20150817-20151017_MVBS.h5'
f = h5py.File(os.path.join(MVBS_path,MVBS_fname),"r")
MVBS = np.array(f['MVBS'])
depth_bin_size = np.array(f['depth_bin_size'])
ping_time = np.array(f['ping_time'])
f.close()
# db_diff.plot_echogram(MVBS,1,62,5,ping_per_day_mvbs,depth_bin_size,ping_time,(36,8),'magma')
db_diff.plot_echogram(MVBS,1,62,5,ping_per_day_mvbs,depth_bin_size,ping_time,(36,8),db_diff.e_cmap)
"""
Explanation: Set params and load clean MVBS data
End of explanation
"""
Sv_1 = MVBS[2,:,:]
Sv_2 = MVBS[0,:,:]
yes_1 = ~np.isnan(Sv_1)
yes_2 = ~np.isnan(Sv_2)
Sv_diff_12 = Sv_1 - Sv_2
Sv_diff_12[yes_1 & ~yes_2] = np.inf
Sv_diff_12[~yes_1 & yes_2] = -np.inf
idx_fish = (np.isneginf(Sv_diff_12) | (Sv_diff_12<=2)) & (Sv_diff_12>-16)
idx_zoop = np.isposinf(Sv_diff_12) | ((Sv_diff_12>2) & (Sv_diff_12<30))
idx_other = (Sv_diff_12<=-16) | (Sv_diff_12>=30)
MVBS_fish = np.ma.empty(MVBS.shape)
for ff in range(MVBS.shape[0]):
MVBS_fish[ff,:,:] = np.ma.masked_where(~idx_fish,MVBS[ff,:,:])
MVBS_zoop = np.ma.empty(MVBS.shape)
for ff in range(MVBS.shape[0]):
MVBS_zoop[ff,:,:] = np.ma.masked_where(~idx_zoop,MVBS[ff,:,:])
MVBS_others = np.ma.empty(MVBS.shape)
for ff in range(MVBS.shape[0]):
MVBS_others[ff,:,:] = np.ma.masked_where(~idx_other,MVBS[ff,:,:])
# db_diff.plot_echogram(MVBS_fish,1,62,5,ping_per_day_mvbs,depth_bin_size,ping_time,(36,8),'magma')
db_diff.plot_echogram(MVBS_fish,1,62,5,ping_per_day_mvbs,depth_bin_size,ping_time,(36,8),db_diff.e_cmap)
plt.gcf()
plt.savefig(os.path.join(MVBS_path,'echogram_day01-62_ek60_fish.png'),dpi=150)
# db_diff.plot_echogram(MVBS_zoop,1,62,5,ping_per_day_mvbs,depth_bin_size,ping_time,(36,8),'magma')
db_diff.plot_echogram(MVBS_zoop,1,62,5,ping_per_day_mvbs,depth_bin_size,ping_time,(36,8),db_diff.e_cmap)
plt.gcf()
plt.savefig(os.path.join(MVBS_path,'echogram_day01-62_ek60_zoop.png'),dpi=150)
# db_diff.plot_echogram(MVBS_others,1,62,5,ping_per_day_mvbs,depth_bin_size,ping_time,(36,8),'magma')
db_diff.plot_echogram(MVBS_others,1,62,5,ping_per_day_mvbs,depth_bin_size,ping_time,(36,8),db_diff.e_cmap)
plt.gcf()
plt.savefig(os.path.join(MVBS_path,'echogram_day01-62_ek60_others.png'),dpi=150)
"""
Explanation: dB-differencing operation
Here I used the criteria from Sato et al. 2015 for dB-differencing. The rationale is that this is the latest publication in a nearby region and the classification threshold was selected based on trawl-verified animal groups. The classification rules are:
- Fish: -16dB < Sv_200-Sv_38 <= 2dB
- Zooplankton: 2dB < Sv_200-Sv_38 < 30dB
- Others: 30dB < Sv_200-Sv_38 or Sv_200-Sv_38 <= -16dB
End of explanation
"""
|
JAmarel/Phys202 | ODEs/ODEsEx02.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
"""
Explanation: Ordinary Differential Equations Exercise 1
Imports
End of explanation
"""
def lorentz_derivs(yvec, t, sigma, rho, beta):
"""Compute the the derivatives for the Lorentz system at yvec(t)."""
x = yvec[0]
y = yvec[1]
z = yvec[2]
dx = sigma*(y-x)
dy = x*(rho-z)-y
dz = x*y - beta*z
return np.array([dx,dy,dz])
assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])
"""
Explanation: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read:
$$ \frac{dx}{dt} = \sigma(y-x) $$
$$ \frac{dy}{dt} = x(\rho-z) - y $$
$$ \frac{dz}{dt} = xy - \beta z $$
The solution vector is $[x(t),y(t),z(t)]$ and $\sigma$, $\rho$, and $\beta$ are parameters that govern the behavior of the solutions.
Write a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system.
End of explanation
"""
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
"""Solve the Lorenz system for a single initial condition.
Parameters
----------
ic : array, list, tuple
Initial conditions [x,y,z].
max_time: float
The max time to use. Integrate with 250 points per time unit.
sigma, rho, beta: float
Parameters of the differential equation.
"
Returns
-------
soln : np.ndarray
The array of the solution. Each row will be the solution vector at that time.
t : np.ndarray
The array of time points used.
"""
t = np.linspace(0,max_time,250*max_time)
soln = odeint(lorentz_derivs,ic,t,args=(sigma,rho,beta))
return t,soln
assert True # leave this to grade solve_lorenz
"""
Explanation: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
End of explanation
"""
colors = plt.cm.hot(1)
colors
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
# To use these colors with plt.plot, pass them as the color argument
print(colors[i])
def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
"""Plot [x(t),z(t)] for the Lorenz system.
Parameters
----------
N : int
Number of initial conditions and trajectories to plot.
max_time: float
Maximum time to use.
sigma, rho, beta: float
Parameters of the differential equation.
"""
colors = plt.cm.hot(np.linspace(0,1,N))
np.random.seed(1)
for i in range(N):
ic = (np.random.random(3)-.5)*30
t,soln = solve_lorentz(ic, max_time, sigma, rho, beta)
x = [e[0] for e in soln]
y = [e[1] for e in soln]
z = [e[2] for e in soln]
plt.plot(x,z,color=colors[i])
plt.title('Lorenz System for Multiple Trajectories')
plt.xlabel('X Position')
    plt.ylabel('Z Position')
plot_lorentz()
assert True # leave this to grade the plot_lorenz function
"""
Explanation: Write a function plot_lorentz that:
Solves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time.
Plot $[x(t),z(t)]$ using a line to show each trajectory.
Color each line using the hot colormap from Matplotlib.
Label your plot and choose an appropriate x and y limit.
The following cell shows how to generate colors that can be used for the lines:
End of explanation
"""
interact(plot_lorentz,max_time=(1,11,1),N=(1,51,1),sigma=(0.0,50.0),rho=(0.0,50.0),beta = fixed(8/3));
"""
Explanation: Use interact to explore your plot_lorenz function with:
max_time an integer slider over the interval $[1,10]$.
N an integer slider over the interval $[1,50]$.
sigma a float slider over the interval $[0.0,50.0]$.
rho a float slider over the interval $[0.0,50.0]$.
beta fixed at a value of $8/3$.
End of explanation
"""
|
Vvkmnn/books | AutomateTheBoringStuffWithPython/lesson25.ipynb | gpl-3.0 | import re
batRegex = re.compile(r'Bat(wo)?man') # The ()? says this group can appear 0 or 1 times to match; it is optional
mo = batRegex.search('The Adventures of Batman')
print(mo.group())
mo = batRegex.search('The Adventures of Batwoman')
print(mo.group())
"""
Explanation: Lesson 25:
RegEx groups and the Pipe Character
The | pipe character can match one of many groups, but you may want a certain number of repitions of a group.
The '?' Regex Operater
The ? RegEx operater allows for optional (0 or 1) matches:
End of explanation
"""
mo = batRegex.search('The Adventures of Batwowowowoman')
print(mo.group())
"""
Explanation: However, it cannot match multiple repetitions:
End of explanation
"""
phoneNumRegex = re.compile(r'\d\d\d\-\d\d\d-\d\d\d\d') # this requires an area code.
mo = phoneNumRegex.search('My number is 415-555-4242') # matches
print(mo.group())
mo2 = phoneNumRegex.search('My number is 555-4242') # will not match
print(mo2)
phoneNumRegex = re.compile(r'(\d\d\d\-)?\d\d\d-\d\d\d\d') # Make first three digits and dash optional
mo = phoneNumRegex.search('My number is 415-555-4242') # matches
print(mo.group())
mo2 = phoneNumRegex.search('My number is 555-4242') # matches
print(mo2.group())
"""
Explanation: We can use this to find strings that may or may not include elements, like phone numbers with and without area codes.
End of explanation
"""
import re
batRegex = re.compile(r'Bat(wo)*man') # The ()* says this group can appear 0 or n times to match
print(batRegex.search('The Adventures of Batwoman').group())
print(batRegex.search('The Adventures of Batwowowowoman').group())
"""
Explanation: The '*' Regex Operator
The * character can be used to match many (0 or n) times.
End of explanation
"""
import re
batRegex = re.compile(r'Bat(wo)+man') # The ()+ says this group can appear 1 or n times; it is NOT optional
print(batRegex.search('The Adventures of Batwoman').group())
print(batRegex.search('The Adventures of Batwowowowoman').group())
print(batRegex.search('The Adventures of Batman').group())
"""
Explanation: The '+' Regex Operator
The + character can match one or more (1 or n) times.
End of explanation
"""
import re
batRegex = re.compile(r'\+\*\?') # The +,*, and ? are escaped.
print(batRegex.search('I learned about +*? RegEx syntax').group())
"""
Explanation: All of these characters can be escaped for literal matches:
End of explanation
"""
haRegex = re.compile(r'(Ha){3}')
print(haRegex.search('HaHaHa').group())
print(haRegex.search('HaHaHaHa').group()) # Still matches exactly three times, so returns 'HaHaHa'
#print(haRegex.search('HaHa').group()) # No Match
phoneRegex = re.compile(r'(\d\d\d-){2}\d\d\d\d') # {} is useful to avoid repetition in patterns
print(phoneRegex.search('My number is 415-555-4242').group())
"""
Explanation: The '{}' Regex Operator
The {x} syntax matches exactly x times.
End of explanation
"""
haRegex = re.compile(r'(Ha){3,5}')
print(haRegex.search('HaHaHa').group())
print(haRegex.search('HaHaHaHa').group())
print(haRegex.search('HaHaHaHaHa').group())
print(haRegex.search('HaHaHaHaHaHaHaHa').group()) # Matches max of 5
haRegex = re.compile(r'(Ha){,5}') # Can drop one or the other for unbounded matches
print(haRegex.search('Ha').group())
print(haRegex.search('HaHa').group())
print(haRegex.search('HaHaHa').group())
print(haRegex.search('HaHaHaHa').group())
print(haRegex.search('HaHaHaHaHa').group())
print(haRegex.search('HaHaHaHaHaHaHaHa').group()) # Matches max of 5
"""
Explanation: This operator can also take the {x,y} argument to set a minimum and maximum number of repetitions.
End of explanation
"""
haRegex = re.compile(r'(Ha){1,6}') # at least 1, at most 6
print(haRegex.search('HaHaHaHaHaHaHaHa').group()) # Matches longest string; 6
"""
Explanation: RegEx does greedy matches, which means it will try to find the longest string that matches, not the shortest.
End of explanation
"""
haRegex = re.compile(r'(Ha){1,6}?') # The ? after {1,6} makes the match non-greedy (as few repetitions as possible)
print(haRegex.search('HaHaHaHaHaHaHaHa').group()) # Matches shortest string, 1
"""
Explanation: You can do a non-greedy match by adding a '?' after the curly braces.
End of explanation
"""
|
tensorflow/docs | site/en/tutorials/images/data_augmentation.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras import layers
"""
Explanation: Data augmentation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/images/data_augmentation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
This tutorial demonstrates data augmentation: a technique to increase the diversity of your training set by applying random (but realistic) transformations, such as image rotation.
You will learn how to apply data augmentation in two ways:
Use the Keras preprocessing layers, such as tf.keras.layers.Resizing, tf.keras.layers.Rescaling, tf.keras.layers.RandomFlip, and tf.keras.layers.RandomRotation.
Use the tf.image methods, such as tf.image.flip_left_right, tf.image.rgb_to_grayscale, tf.image.adjust_brightness, tf.image.central_crop, and tf.image.stateless_random*.
Setup
End of explanation
"""
(train_ds, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
"""
Explanation: Download a dataset
This tutorial uses the tf_flowers dataset. For convenience, download the dataset using TensorFlow Datasets. If you would like to learn about other ways of importing data, check out the load images tutorial.
End of explanation
"""
num_classes = metadata.features['label'].num_classes
print(num_classes)
"""
Explanation: The flowers dataset has five classes.
End of explanation
"""
get_label_name = metadata.features['label'].int2str
image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
"""
Explanation: Let's retrieve an image from the dataset and use it to demonstrate data augmentation.
End of explanation
"""
IMG_SIZE = 180
resize_and_rescale = tf.keras.Sequential([
layers.Resizing(IMG_SIZE, IMG_SIZE),
layers.Rescaling(1./255)
])
"""
Explanation: Use Keras preprocessing layers
Resizing and rescaling
You can use the Keras preprocessing layers to resize your images to a consistent shape (with tf.keras.layers.Resizing), and to rescale pixel values (with tf.keras.layers.Rescaling).
End of explanation
"""
result = resize_and_rescale(image)
_ = plt.imshow(result)
"""
Explanation: Note: The rescaling layer above standardizes pixel values to the [0, 1] range. If instead you wanted it to be [-1, 1], you would write tf.keras.layers.Rescaling(1./127.5, offset=-1).
You can visualize the result of applying these layers to an image.
End of explanation
"""
print("Min and max pixel values:", result.numpy().min(), result.numpy().max())
"""
Explanation: Verify that the pixels are in the [0, 1] range:
End of explanation
"""
data_augmentation = tf.keras.Sequential([
layers.RandomFlip("horizontal_and_vertical"),
layers.RandomRotation(0.2),
])
# Add the image to a batch.
image = tf.cast(tf.expand_dims(image, 0), tf.float32)
plt.figure(figsize=(10, 10))
for i in range(9):
augmented_image = data_augmentation(image)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_image[0])
plt.axis("off")
"""
Explanation: Data augmentation
You can use the Keras preprocessing layers for data augmentation as well, such as tf.keras.layers.RandomFlip and tf.keras.layers.RandomRotation.
Let's create a few preprocessing layers and apply them repeatedly to the same image.
End of explanation
"""
model = tf.keras.Sequential([
# Add the preprocessing layers you created earlier.
resize_and_rescale,
data_augmentation,
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
# Rest of your model.
])
"""
Explanation: There are a variety of preprocessing layers you can use for data augmentation including tf.keras.layers.RandomContrast, tf.keras.layers.RandomCrop, tf.keras.layers.RandomZoom, and others.
Two options to use the Keras preprocessing layers
There are two ways you can use these preprocessing layers, with important trade-offs.
Option 1: Make the preprocessing layers part of your model
End of explanation
"""
aug_ds = train_ds.map(
lambda x, y: (resize_and_rescale(x, training=True), y))
"""
Explanation: There are two important points to be aware of in this case:
Data augmentation will run on-device, synchronously with the rest of your layers, and benefit from GPU acceleration.
When you export your model using model.save, the preprocessing layers will be saved along with the rest of your model. If you later deploy this model, it will automatically standardize images (according to the configuration of your layers). This can save you from the effort of having to reimplement that logic server-side.
Note: Data augmentation is inactive at test time so input images will only be augmented during calls to Model.fit (not Model.evaluate or Model.predict).
Option 2: Apply the preprocessing layers to your dataset
End of explanation
"""
batch_size = 32
AUTOTUNE = tf.data.AUTOTUNE
def prepare(ds, shuffle=False, augment=False):
# Resize and rescale all datasets.
ds = ds.map(lambda x, y: (resize_and_rescale(x), y),
num_parallel_calls=AUTOTUNE)
if shuffle:
ds = ds.shuffle(1000)
# Batch all datasets.
ds = ds.batch(batch_size)
# Use data augmentation only on the training set.
if augment:
ds = ds.map(lambda x, y: (data_augmentation(x, training=True), y),
num_parallel_calls=AUTOTUNE)
# Use buffered prefetching on all datasets.
return ds.prefetch(buffer_size=AUTOTUNE)
train_ds = prepare(train_ds, shuffle=True, augment=True)
val_ds = prepare(val_ds)
test_ds = prepare(test_ds)
"""
Explanation: With this approach, you use Dataset.map to create a dataset that yields batches of augmented images. In this case:
Data augmentation will happen asynchronously on the CPU, and is non-blocking. You can overlap the training of your model on the GPU with data preprocessing, using Dataset.prefetch, shown below.
In this case the preprocessing layers will not be exported with the model when you call Model.save. You will need to attach them to your model before saving it or reimplement them server-side. After training, you can attach the preprocessing layers before export.
You can find an example of the first option in the Image classification tutorial. Let's demonstrate the second option here.
Apply the preprocessing layers to the datasets
Configure the training, validation, and test datasets with the Keras preprocessing layers you created earlier. You will also configure the datasets for performance, using parallel reads and buffered prefetching to yield batches from disk without I/O becoming blocking. (Learn more about dataset performance in the Better performance with the tf.data API guide.)
Note: Data augmentation should only be applied to the training set.
End of explanation
"""
model = tf.keras.Sequential([
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
"""
Explanation: Train a model
For completeness, you will now train a model using the datasets you have just prepared.
The Sequential model consists of three convolution blocks (tf.keras.layers.Conv2D) with a max pooling layer (tf.keras.layers.MaxPooling2D) in each of them. There's a fully-connected layer (tf.keras.layers.Dense) with 128 units on top of it that is activated by a ReLU activation function ('relu'). This model has not been tuned for accuracy (the goal is to show you the mechanics).
End of explanation
"""
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
"""
Explanation: Choose the tf.keras.optimizers.Adam optimizer and tf.keras.losses.SparseCategoricalCrossentropy loss function. To view training and validation accuracy for each training epoch, pass the metrics argument to Model.compile.
End of explanation
"""
epochs=5
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
loss, acc = model.evaluate(test_ds)
print("Accuracy", acc)
"""
Explanation: Train for a few epochs:
End of explanation
"""
def random_invert_img(x, p=0.5):
  # With probability p, invert the pixel values; otherwise return x unchanged.
  if tf.random.uniform([]) < p:
    x = (255-x)
  return x
def random_invert(factor=0.5):
return layers.Lambda(lambda x: random_invert_img(x, factor))
random_invert = random_invert()
plt.figure(figsize=(10, 10))
for i in range(9):
augmented_image = random_invert(image)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_image[0].numpy().astype("uint8"))
plt.axis("off")
"""
Explanation: Custom data augmentation
You can also create custom data augmentation layers.
This section of the tutorial shows two ways of doing so:
First, you will create a tf.keras.layers.Lambda layer. This is a good way to write concise code.
Next, you will write a new layer via subclassing, which gives you more control.
Both layers will randomly invert the colors in an image, according to some probability.
End of explanation
"""
class RandomInvert(layers.Layer):
def __init__(self, factor=0.5, **kwargs):
super().__init__(**kwargs)
self.factor = factor
def call(self, x):
    return random_invert_img(x, self.factor)
_ = plt.imshow(RandomInvert()(image)[0])
"""
Explanation: Next, implement a custom layer by subclassing:
End of explanation
"""
(train_ds, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
"""
Explanation: Both of these layers can be used as described in options 1 and 2 above.
Using tf.image
The above Keras preprocessing utilities are convenient. But, for finer control, you can write your own data augmentation pipelines or layers using tf.data and tf.image. (You may also want to check out TensorFlow Addons Image: Operations and TensorFlow I/O: Color Space Conversions.)
Since the flowers dataset was previously configured with data augmentation, let's reimport it to start fresh:
End of explanation
"""
image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
"""
Explanation: Retrieve an image to work with:
End of explanation
"""
def visualize(original, augmented):
fig = plt.figure()
plt.subplot(1,2,1)
plt.title('Original image')
plt.imshow(original)
plt.subplot(1,2,2)
plt.title('Augmented image')
plt.imshow(augmented)
"""
Explanation: Let's use the following function to visualize and compare the original and augmented images side-by-side:
End of explanation
"""
flipped = tf.image.flip_left_right(image)
visualize(image, flipped)
"""
Explanation: Data augmentation
Flip an image
Flip an image either vertically or horizontally with tf.image.flip_left_right:
End of explanation
"""
grayscaled = tf.image.rgb_to_grayscale(image)
visualize(image, tf.squeeze(grayscaled))
_ = plt.colorbar()
"""
Explanation: Grayscale an image
You can grayscale an image with tf.image.rgb_to_grayscale:
End of explanation
"""
saturated = tf.image.adjust_saturation(image, 3)
visualize(image, saturated)
"""
Explanation: Saturate an image
Saturate an image with tf.image.adjust_saturation by providing a saturation factor:
End of explanation
"""
bright = tf.image.adjust_brightness(image, 0.4)
visualize(image, bright)
"""
Explanation: Change image brightness
Change the brightness of image with tf.image.adjust_brightness by providing a brightness factor:
End of explanation
"""
cropped = tf.image.central_crop(image, central_fraction=0.5)
visualize(image, cropped)
"""
Explanation: Center crop an image
Crop the image from center up to the image part you desire using tf.image.central_crop:
End of explanation
"""
rotated = tf.image.rot90(image)
visualize(image, rotated)
"""
Explanation: Rotate an image
Rotate an image by 90 degrees with tf.image.rot90:
End of explanation
"""
for i in range(3):
seed = (i, 0) # tuple of size (2,)
stateless_random_brightness = tf.image.stateless_random_brightness(
image, max_delta=0.95, seed=seed)
visualize(image, stateless_random_brightness)
"""
Explanation: Random transformations
Warning: There are two sets of random image operations: tf.image.random* and tf.image.stateless_random*. Using tf.image.random* operations is strongly discouraged as they use the old RNGs from TF 1.x. Instead, please use the random image operations introduced in this tutorial. For more information, refer to Random number generation.
Applying random transformations to the images can further help generalize and expand the dataset. The current tf.image API provides eight such random image operations (ops):
tf.image.stateless_random_brightness
tf.image.stateless_random_contrast
tf.image.stateless_random_crop
tf.image.stateless_random_flip_left_right
tf.image.stateless_random_flip_up_down
tf.image.stateless_random_hue
tf.image.stateless_random_jpeg_quality
tf.image.stateless_random_saturation
These random image ops are purely functional: the output only depends on the input. This makes them simple to use in high performance, deterministic input pipelines. They require a seed value be input each step. Given the same seed, they return the same results independent of how many times they are called.
Note: seed is a Tensor of shape (2,) whose values are any integers.
In the following sections, you will:
1. Go over examples of using random image operations to transform an image.
2. Demonstrate how to apply random transformations to a training dataset.
Randomly change image brightness
Randomly change the brightness of image using tf.image.stateless_random_brightness by providing a brightness factor and seed. The brightness factor is chosen randomly in the range [-max_delta, max_delta) and is associated with the given seed.
End of explanation
"""
for i in range(3):
seed = (i, 0) # tuple of size (2,)
stateless_random_contrast = tf.image.stateless_random_contrast(
image, lower=0.1, upper=0.9, seed=seed)
visualize(image, stateless_random_contrast)
"""
Explanation: Randomly change image contrast
Randomly change the contrast of image using tf.image.stateless_random_contrast by providing a contrast range and seed. The contrast range is chosen randomly in the interval [lower, upper] and is associated with the given seed.
End of explanation
"""
for i in range(3):
seed = (i, 0) # tuple of size (2,)
stateless_random_crop = tf.image.stateless_random_crop(
image, size=[210, 300, 3], seed=seed)
visualize(image, stateless_random_crop)
"""
Explanation: Randomly crop an image
Randomly crop image using tf.image.stateless_random_crop by providing target size and seed. The portion that gets cropped out of image is at a randomly chosen offset and is associated with the given seed.
End of explanation
"""
(train_datasets, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
"""
Explanation: Apply augmentation to a dataset
Let's first download the image dataset again, in case it was modified in the previous sections.
End of explanation
"""
def resize_and_rescale(image, label):
image = tf.cast(image, tf.float32)
image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
image = (image / 255.0)
return image, label
"""
Explanation: Next, define a utility function for resizing and rescaling the images. This function will be used in unifying the size and scale of images in the dataset:
End of explanation
"""
def augment(image_label, seed):
image, label = image_label
image, label = resize_and_rescale(image, label)
image = tf.image.resize_with_crop_or_pad(image, IMG_SIZE + 6, IMG_SIZE + 6)
# Make a new seed.
new_seed = tf.random.experimental.stateless_split(seed, num=1)[0, :]
# Random crop back to the original size.
image = tf.image.stateless_random_crop(
image, size=[IMG_SIZE, IMG_SIZE, 3], seed=seed)
# Random brightness.
image = tf.image.stateless_random_brightness(
image, max_delta=0.5, seed=new_seed)
image = tf.clip_by_value(image, 0, 1)
return image, label
"""
Explanation: Let's also define the augment function that can apply the random transformations to the images. This function will be used on the dataset in the next step.
End of explanation
"""
# Create a `Counter` object and `Dataset.zip` it together with the training set.
counter = tf.data.experimental.Counter()
train_ds = tf.data.Dataset.zip((train_datasets, (counter, counter)))
"""
Explanation: Option 1: Using tf.data.experimental.Counter
Create a tf.data.experimental.Counter object (let's call it counter) and Dataset.zip the dataset with (counter, counter). This will ensure that each image in the dataset gets associated with a unique value (of shape (2,)) based on counter, which can later be passed into the augment function as the seed value for random transformations.
End of explanation
"""
train_ds = (
train_ds
.shuffle(1000)
.map(augment, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
val_ds = (
val_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
test_ds = (
test_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
"""
Explanation: Map the augment function to the training dataset:
End of explanation
"""
# Create a generator.
rng = tf.random.Generator.from_seed(123, alg='philox')
# Create a wrapper function for updating seeds.
def f(x, y):
seed = rng.make_seeds(2)[0]
image, label = augment((x, y), seed)
return image, label
"""
Explanation: Option 2: Using tf.random.Generator
Create a tf.random.Generator object with an initial seed value. Calling the make_seeds function on the same generator object always returns a new, unique seed value.
Define a wrapper function that: 1) calls the make_seeds function; and 2) passes the newly generated seed value into the augment function for random transformations.
Note: tf.random.Generator objects store RNG state in a tf.Variable, which means it can be saved as a checkpoint or in a SavedModel. For more details, please refer to Random number generation.
End of explanation
"""
train_ds = (
train_datasets
.shuffle(1000)
.map(f, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
val_ds = (
val_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
test_ds = (
test_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
"""
Explanation: Map the wrapper function f to the training dataset, and the resize_and_rescale function to the validation and test sets:
End of explanation
"""
|
sbitzer/pyEPABC | examples/narrow_posteriors.ipynb | bsd-3-clause | def plot_mean_with_std(mean, std, std_mult=2, xvals=None, ax=None):
if xvals is None:
xvals = np.arange(mean.shape[0])
if ax is None:
ax = plt.axes()
ax.plot(mean, 'k', lw=3)
ax.fill_between(xvals, mean + std_mult*std, mean - std_mult*std,
edgecolor='k', facecolor='0.7')
def plot_single_trajectory(means, stds, rep=None):
if rep is None:
rep = np.random.randint(means.shape[0])
xvals = np.arange(means.shape[1])
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True)
ax1.set_title('repetition %d' % rep)
plot_mean_with_std(means[rep, :], stds[rep, :], xvals=xvals, ax=ax1)
ax2.set_ylabel('mean')
ax2.plot(xvals, means[rep, :], 'k', lw=3, label='mean')
ax3.set_ylabel('std')
ax3.plot(xvals, stds[rep, :], 'k', lw=1, label='std')
diff = means[rep, :] - means[rep, 0]
print('largest deviation from initial mean: %6.1f%% (of initial std)'
% (diff[np.abs(diff).argmax()] / stds[rep, 0] * 100, ) )
diff = stds[rep, :] - stds[rep, 0]
print('largest deviation from initial std: %6.1f%%'
% (diff[np.abs(diff).argmax()] / stds[rep, 0] * 100, ) )
def plot_distribution_drift(means, stds):
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.set_title('average drift of mean (+- 2*std)')
plot_mean_with_std(means.mean(axis=0), means.std(axis=0), ax=ax1)
ax2.set_title('average drift of std (+- 2*std)')
plot_mean_with_std(stds.mean(axis=0), stds.std(axis=0), ax=ax2)
"""
Explanation: Why EP-ABC can produce too narrow posteriors
When you use EP-ABC for inference you may notice that your posterior distributions appear suspiciously narrow, i.e., you may not believe the certainty indicated by EP-ABC inference. Your suspicions can be correct: the posteriors inferred by EP-ABC sometimes tend to be too narrow. The fault lies within the recursive sampling process used in EP-ABC: the main mechanism is to maintain an estimate of the posterior distribution from which you sample, and then re-estimate the posterior distribution based on the subset of samples compatible with a data point. If your current estimate of that distribution carries some sampling error, the next estimate will deviate even more from the underlying distribution, especially if the sampling error consistently deviates in one direction. This is exactly what can happen in individual runs of EP-ABC.
In the following, I will demonstrate drifting distribution estimates by recursively sampling from a Gaussian. Ideally, the estimated mean and standard deviation would remain stable, but sampling error lets them drift. The question, therefore, is how strong the drift is and whether it has a trend.
Sampling theory
It is known, but rarely appreciated, that the square root of an unbiased estimate of variance consistently underestimates the standard deviation. Because we use standard deviations when sampling from a Gaussian (also in EP-ABC), recursively sampling from a Gaussian will shrink its standard deviation, even though we may have no bias in estimating the variance of the distribution. Actually, it's not so hard to compute an unbiased estimate of the standard deviation, at least approximately: Instead of dividing by $N-1$, as in the unbiased estimate of variance, we only have to divide by $N-1.5$. I have implemented this in EP-ABC to reduce the shrinking of posteriors. Below you can see for yourself what effect this has.
Some plotting functions
I here define some plotting functions used below.
End of explanation
"""
# number of recursive steps
nsteps = 200
# number of samples drawn from distribution
nsamples = 500
# number of repetitions of recursive sampling
nrep = 100
# initial mean
mu = 23
# initial standard deviation
sigma = 100
# degrees of freedom determining the divisor for
# the estimation of standard deviation (1.5~unbiased)
ddof = 1.5
means = np.full((nrep, nsteps), np.nan)
means[:, 0] = mu
stds = np.full((nrep, nsteps), np.nan)
stds[:, 0] = sigma
for r in range(nrep):
for s in range(1, nsteps):
S = np.random.normal(means[r, s-1], stds[r, s-1], nsamples)
means[r, s] = S.mean()
stds[r, s] = S.std(ddof=ddof)
plot_distribution_drift(means, stds)
print('after %d steps with %d samples:' % (nsteps, nsamples))
print('difference in mean mean: %6.1f%% (of initial std)' % (
(means[:, -1].mean() - mu) / sigma * 100, ))
print('difference in mean std: %6.1f%%' % ((stds[:, -1].mean() - sigma) /
sigma * 100, ))
"""
Explanation: Recursive Sampling of a Gaussian
The following cell repeatedly runs a recursive sampling process and plots the average drifts of mean and standard deviation across the repetitions.
End of explanation
"""
plot_single_trajectory(means, stds)
"""
Explanation: The generated plots should have a relatively flat, thick, black line in the middle which suggests that mean and standard deviation did not change much when averaged across repetitions. The shading, representing the area of double standard deviation across repetitions, however, should become wider as more recursive sampling steps are taken, indicating that in some repetitions there was a considerable drift of the distribution (see below for plots of individual trajectories).
You may now want to see how these curves change, when you manipulate the number of samples, or degrees of freedom (ddof) used to estimate standard deviations from samples. Increasing the number of samples is generally good and leads to a reduction of mean drift and a narrowing of the shading, meaning that also the drift in individual repetitions of recursive sampling is small. Setting ddof=0, which is the standard setting of numpy (!), leads to severe shrinkage of the sampled distribution.
Individual recursive sampling trajectories
To get a feeling for the variability of recursive sampling trajectories across repetitions the following cell will plot a single (random) trajectory computed above.
End of explanation
"""
|
steven-murray/halomod | docs/examples/extension.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from halomod import TracerHaloModel
import halomod
import hmf
import scipy
print("halomod version: ", halomod.__version__)
print("hmf version:", hmf.__version__)
"""
Explanation: Customised extensions with halomod
In this tutorial, we use the existing infrastructure of halomod and plug in a new type of tracer, namely HI using the model from arxiv:2010.07985. This model requires three additions: a new density profile for HI; a new concentration-mass relation for HI; and finally a new HI HOD.
halomod is extremely flexible. You can add custom models for any "component" (eg. mass functions, density profiles, bias functions, mass definitions,...).
However, most likely you'll need to add something to build a new type of tracer, which is the case here.
Let's import a few basic things first:
End of explanation
"""
from halomod.profiles import ProfileInf,Profile
from scipy.special import gamma
class PowerLawWithExpCut(ProfileInf):
"""
A simple power law with exponential cut-off, assuming f(x)=1/x**b * exp[-ax].
"""
_defaults = {"a": 0.049, "b": 2.248}
def _f(self, x):
return 1. / (x**self.params['b']) * np.exp(-self.params['a']*x)
def _h(self,c=None):
return gamma(3-self.params['b']) * self.params['a']**(self.params['b']-3)
def _p(self, K, c=None):
b = self.params['b']
a = self.params['a']
if b==2:
return np.arctan(K/a)/K
else:
return -1 / K * ((a**2+K**2)**(b/2-1)*gamma(2-b)*np.sin((b-2)*np.arctan(K/a)))
"""
Explanation: Creating a new density profile
The HI density profile used here is:
$$
\rho_{\rm HI} = \rho_s \bigg(\frac{r_s}{r}\bigg)^b {\rm exp}\bigg[-a \frac{r}{r_s}\bigg]
$$
Notice that all the infrastructure has been set up by the Profile or ProfileInf class, depending on whether or not you truncate the halos. All you need to modify is the _f function, and optionally its integral _h and its Fourier transform _p (see the documentation for profiles.py). The latter two are useful to speed up calculations, but are in principle optional and will otherwise be calculated numerically.
The _f function is:
$$
f(x) = \frac{1}{x^b}{\rm exp}\big[-ax\big]
$$
The integration in this case is:
$$
h = \Gamma(3-b)\times a^{b-3}
$$
where $\Gamma$ is the Gamma function
The Fourier Transformed profile is:
$$
p(K)= {\rm tan}^{-1}(K/a)/K,\,b=2
$$
$$
p(K)= -\frac{1}{K}\big(a^2+K^2\big)^{b/2-1}\,\Gamma(2-b)\,\sin\Big[(b-2)\,{\rm arctan}\big(K/a\big)\Big],\,b\ne2
$$
End of explanation
"""
hm = TracerHaloModel(
tracer_profile_model=PowerLawWithExpCut,
transfer_model='EH'
)
"""
Explanation: At a bare minimum, you must specify _f, which is just the density profile itself; all the integration and Fourier transformation will then be done numerically. However, that is very inefficient, so you should always provide analytical expressions when you can.
Now let's plug it into a halo model:
End of explanation
"""
plt.plot(hm.k,hm.tracer_profile_ukm[:,1000]);
plt.xscale('log')
plt.yscale('log')
plt.xlim((1e-2,1e5));
#plt.ylim((1e-1,1))
"""
Explanation: Note that as of v2 of halomod, defining a new model for any particular component will automatically add it to a registry of 'plugins', and you may then construct the overall model using a string reference to the model (i.e. we could have passed tracer_profile_model="PowerLawWithExpCut").
And see the profile for a halo of mass $10^{10} M_\odot h^{-1}$ in Fourier space:
End of explanation
"""
hm.tracer_profile_params = {"a": 0.049, "b": 2.248}
"""
Explanation: And we've set up our profile model.
This profile model has 2 additional model parameters. You can always update these parameters using:
End of explanation
"""
from halomod.concentration import CMRelation
from hmf.halos.mass_definitions import SOMean
class Maccio07(CMRelation):
"""
HI concentration-mass relation based on Maccio et al.(2007).
Default value taken from 1611.06235.
"""
_defaults = {'c_0': 28.65, "gamma": 1.45}
native_mdefs = (SOMean(),)
def cm(self,m,z):
return self.params['c_0']*(m*10**(-11))**(-0.109)*4/(1+z)**self.params['gamma']
"""
Explanation: Creating a new concentration-mass relation
The concentration-mass relation we use here follows the one from Maccio et al. (2007):
$$
c_{\rm HI}(M,z) = c_0 \Big(\frac{M}{10^{11}{\rm M_\odot h^{-1}}}\Big)^{-0.109}\frac{4}{(1+z)^\gamma}
$$
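The relation is simple enough to sanity-check in plain Python before wrapping it in a class (a sketch; the default parameter values below match the ones used in this notebook):

```python
def c_hi(m, z, c_0=28.65, gamma=1.45):
    """Maccio et al. (2007)-style HI concentration-mass relation (sketch).

    m is the halo mass in Msun/h; returns the concentration parameter.
    """
    return c_0 * (m * 10 ** (-11)) ** (-0.109) * 4 / (1 + z) ** gamma

# At the pivot mass 1e11 Msun/h and z = 0, the relation reduces to 4 * c_0.
c_pivot = c_hi(1e11, 0.0)
```

Concentration decreases with halo mass and with redshift, as the assertions on this toy function confirm.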
Again, because halomod already has a generic CMRelation class in place, all you really need is to specify a cm function for the equation above:
End of explanation
"""
hm.tracer_concentration_model = Maccio07
"""
Explanation: Note that for the concentration-mass relation that you put in, you need to specify the mass definition in which the relation is defined. In this case, it is defined using the mean spherical-overdensity definition (SOMean), which is the default.
And set this to be the model for the tracer concentration-mass relation:
End of explanation
"""
plt.plot(hm.m,hm.tracer_concentration.cm(hm.m,0))
plt.xscale('log')
plt.xlim(1e5,1e15)
plt.ylim((1e1,1e3))
plt.yscale('log');
"""
Explanation: And check the concentration-mass relation at z=0:
End of explanation
"""
hm.tracer_concentration_params={'c_0': 28.65, "gamma": 1.45}
"""
Explanation: Notice that this model has two additional parameters, which can be updated using:
End of explanation
"""
from halomod.hod import HODPoisson
import scipy.constants as const
import astropy.constants as astroconst
"""
Explanation: See the documentation for concentration.py for more.
Creating a new HOD
The HI HOD we use here is:
$$
\begin{split}
\langle M_{\rm HI}^{\rm cen}(M_h) \rangle = M_h& \Bigg[a_1^{\rm cen}\bigg(\frac{M_h}{10^{10} M_\odot}\bigg)^{\beta_{\rm cen}} {\rm exp}\Big[{-\bigg(\frac{M_h}{M^{\rm cen}_{\rm break}}\bigg)^{\alpha_{\rm cen}}}\Big] \\
&+a_2^{\rm cen}\Bigg] {\rm exp}\Big[{-\bigg(\frac{M_{\rm min}^{\rm cen}}{M_h}\bigg)^{0.5}}\Big]
\end{split}
$$
$$
\begin{split}
\langle M_{\rm HI}^{\rm sat}(M_h) \rangle =
M_0^{\rm sat}\bigg( \frac{M_h}{M^{\rm sat}_{\rm min}}\bigg)^{\beta_{\rm sat}}
{\rm exp}\Big[{-\bigg(\frac{M^{\rm sat}_{\rm min}}{M_h}\bigg)^{\alpha_{\rm sat}}}\Big]
\end{split}
$$
For the HOD, it's a bit more complicated. First, one needs to decide what type of tracer it is. The most generic class to use is HOD; however, you may prefer HODBulk, where the tracer is considered to be continuously distributed, or HODPoisson, which assumes Poisson-distributed discrete satellite components and is commonly used for galaxies.
Second, if your model has a minimum halo mass to host any tracer as a sharp cut-off (or not), you should specify it in your model definition:
python
sharp_cut = True # or False
If your model has a separation of central and satellite components, you need to specify whether the satellite occupation is inherently dependent on the existence of central galaxies:
python
central_condition_inherent = False # or True
If False, the actual satellite component will be your satellite occupation times the central occupation.
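As a toy illustration of that bookkeeping (pure Python, not halomod's internal code):

```python
# Sketch of the central condition when central_condition_inherent = False:
# the effective satellite occupation is the raw satellite occupation
# multiplied by the central occupation, per halo-mass bin.
n_cen = [0.0, 0.5, 1.0]      # central occupation in three toy mass bins
n_sat_raw = [2.0, 2.0, 2.0]  # satellite occupation before the condition
n_sat_eff = [s * c for s, c in zip(n_sat_raw, n_cen)]
```

So halos with no centrals contribute no satellites, as expected.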
See the documentation for hod.py, or the reference paper for more.
Finally, you need to specify how to convert units between your HOD and the resulting power spectrum. For example, for HI the HOD is written in mass units, whereas the power spectrum is in temperature units. This is done by specifying a unit_conversion method.
Additionally, sometimes your HOD contains methods that need to be calculated, such as virial velocity of the halos, which you can just put into the class.
End of explanation
"""
class Spinelli19(HODPoisson):
"""
Six-parameter model of Spinelli et al. (2019)
    Default parameter values correspond to z=1 (the redshift itself needs to be set manually via hm.update).
"""
_defaults = {"a1": 0.0016, # gives HI mass amplitude of the power law
"a2": 0.00011, # gives HI mass amplitude of the power law
"alpha": 0.56, # slope of exponential break
"beta": 0.43, # slope of mass
"M_min": 9, # Truncation Mass
"M_break": 11.86, # Characteristic Mass
"M_1": -2.99, # mass of exponential cutoff
"sigma_A": 0, # The (constant) standard deviation of the tracer
"M_max": 18, # Truncation mass
"M_0": 8.31, # Amplitude of satellite HOD
"M_break_sat": 11.4, # characteristic mass for satellite HOD
"alpha_sat": 0.84, # slope of exponential cut-off for satellite
"beta_sat": 1.10, # slope of mass for satellite
"M_1_counts": 12.851,
"alpha_counts": 1.049,
"M_min_counts": 11, # Truncation Mass
"M_max_counts": 15, # Truncation Mass
"a": 0.049,
"b": 2.248,
"eta": 1.0
}
sharp_cut = False
central_condition_inherent = False
def _central_occupation(self, m):
alpha = self.params['alpha']
beta = self.params['beta']
m_1 = 10 ** self.params['M_1']
a1 = self.params['a1']
a2 = self.params['a2']
m_break = 10 ** self.params['M_break']
out = m * (a1 * (m / 1e10) ** beta
* np.exp(-(m / m_break) ** alpha)
+ a2) * np.exp(-(m_1 / m) ** 0.5)
return out
def _satellite_occupation(self, m):
alpha = self.params['alpha_sat']
beta = self.params['beta_sat']
amp = 10 ** self.params['M_0']
m1 = 10 ** self.params['M_break_sat']
array = np.zeros_like(m)
array[m >= 10 ** 11] = 1
return amp * (m/m1) ** beta * np.exp(-(m1/m)**alpha) * array
#return 10**8
def unit_conversion(self, cosmo, z):
"A factor (potentially with astropy units) to convert the total occupation to a desired unit."
A12=2.869e-15
nu21cm=1.42e9
Const=(3.0*A12*const.h*const.c**3.0 )/(32.0*np.pi*(const.m_p+const.m_e)
*const.Boltzmann * nu21cm**2);
Mpcoverh_3=((astroconst.kpc.value*1e3)/(cosmo.H0.value/100.0))**3
hubble = cosmo.H0.value * cosmo.efunc(z)*1.0e3/(astroconst.kpc.value*1e3)
temp_conv=Const*((1.0+z)**2/hubble)
# convert to Mpc^3, solar mass
temp_conv=temp_conv/Mpcoverh_3 * astroconst.M_sun.value
return temp_conv
def _tracer_per_central(self, M):
"""Number of tracers per central tracer source"""
n_c = np.zeros_like(M)
n_c[
np.logical_and(
M >= 10 ** self.params["M_min_counts"],
M <= 10 ** self.params["M_max_counts"],
)
] = 1
return n_c
def _tracer_per_satellite(self, M):
"""Number of tracers per satellite tracer source"""
n_s = np.zeros_like(M)
index = np.logical_and(
M >= 10 ** self.params["M_min_counts"],
M <= 10 ** self.params["M_max_counts"],
)
n_s[index] = (M[index] / 10 ** self.params["M_1_counts"]) ** self.params[
"alpha_counts"
]
return n_s
"""
Explanation: And in our case, for convenience, we also specify the number of galaxies in this model, which is defined by _tracer_per_central and _tracer_per_satellite:
End of explanation
"""
hm.hod_model = Spinelli19
"""
Explanation: And now update the halo model with this newly defined HOD:
End of explanation
"""
hm.mean_tracer_den
"""
Explanation: You can check out the mean density of HI in units of $M_\odot h^2 {\rm Mpc}^{-3}$:
End of explanation
"""
hm.mean_tracer_den_unit
"""
Explanation: And in temperature units:
End of explanation
"""
hm.hod_params={"beta": 1.0}
"""
Explanation: You can easily update the parameters using:
End of explanation
"""
print(hm.mean_tracer_den)
print(hm.mean_tracer_den_unit)
"""
Explanation: And to confirm it is indeed updated let's check the density again:
End of explanation
"""
plt.plot(hm.k_hm, hm.power_auto_tracer)
plt.xscale('log')
plt.yscale('log')
plt.xlabel("k [$Mpc^{-1} h$]")
plt.ylabel(r"$\rm P(k) \ [{\rm Mpc^3}h^{-3}]$");
"""
Explanation: And the power spectrum in length units:
End of explanation
"""
plt.plot(hm.k_hm, hm.power_auto_tracer*hm.mean_tracer_den_unit**2 * hm.k_hm**3 / (2*np.pi**2))
plt.xscale('log')
plt.yscale('log')
plt.xlabel("k [$Mpc^{-1} h$]")
plt.ylabel(r"$\Delta^2(k) \ \ [{\rm K}^2]$");
"""
Explanation: And in temperature units:
End of explanation
"""
hm=TracerHaloModel(
hod_model="Spinelli19",
tracer_concentration_model="Maccio07",
tracer_profile_model="PowerLawWithExpCut",
z=1, #for default value of hod parameters,
transfer_model='EH'
)
"""
Explanation: From v2.0.0, all of these HI component models, from arXiv:2010.07985, are available in halomod. To use the model in full, simply call:
End of explanation
"""
|
nikbearbrown/Deep_Learning | NEU/Tejas_Bawaskar _DL/Keras Tutorial.ipynb | mit | #Loading In The Data from uci repositories
# Import pandas
import pandas as pd
# Read in white wine data
white = pd.read_csv("http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv", sep=';')
# Read in red wine data
red = pd.read_csv("http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv", sep=';')
"""
Explanation: Keras Tutorial - Predicting Wine Types: Red or White?
The wine industry has seen a recent growth spurt as social drinking is on the rise. A key factor in wine certification and quality assessment is physicochemical tests, which are laboratory-based and take into account factors like acidity, pH level, presence of sugar and other chemical properties. It would be interesting if we could predict the wine type given some of its properties. This could later be scaled up to predict the price of each individual wine, which every wine seller dreams of.
To predict the wine type we'll use basic neural network models. The simplest and easiest Python library to use for this is Keras: it's a really simple library to get started with when learning about deep learning.
Let’s get started now!
Understanding The Data
However, before we start loading in the data, it might be a good idea to check how much we really know about wine (in relation with the dataset, of course).
The data consists of two datasets that are related to red and white variants of the Portuguese “Vinho Verde” wine.
End of explanation
"""
# Print info on white wine
white.info()
red.info()
"""
Explanation: Here’s a short description of each variable:
1) Fixed acidity: acids are major wine properties and contribute greatly to the wine’s taste. Usually, the total acidity is divided into two groups: the volatile acids and the nonvolatile or fixed acids. Among the fixed acids that you can find in wines are the following: tartaric, malic, citric, and succinic.
2) Volatile acidity: the volatile acidity is basically the process of wine turning into vinegar. In the U.S, the legal limits of Volatile Acidity are 1.2 g/L for red table wine and 1.1 g/L for white table wine.
3) Citric acid is one of the fixed acids that you'll find in wines. It's expressed in g/dm³ in the two data sets.
4) Residual sugar typically refers to the sugar remaining after fermentation stops, or is stopped. It's expressed in g/dm³ in the red and white data.
5) Chlorides can be a major contributor to saltiness in wine. Here, you'll see that it's expressed in g/dm³.
6) Free sulfur dioxide: the part of the sulphur dioxide that is added to a wine and that is lost into it is said to be bound, while the active part is said to be free. Winemakers will always try to get the highest proportion of free sulphur to bind. This variable is expressed in mg/dm³ in the data.
7) Total sulfur dioxide is the sum of the bound and the free sulfur dioxide (SO2). Here, it's expressed in mg/dm³. There are legal limits for sulfur levels in wines: in the EU, red wines can only have 160 mg/L, while white and rose wines can have about 210 mg/L. Sweet wines are allowed to have 400 mg/L. For the US, the legal limits are set at 350 mg/L, and for Australia, this is 250 mg/L.
8) Density is generally used as a measure of the conversion of sugar to alcohol. Here, it's expressed in g/cm³.
9) Sulphates are to wine as gluten is to food. You might already know sulphites from the headaches that they can cause. They are a regular part of winemaking around the world and are considered necessary. In this case, they are expressed in g(potassium sulphate)/dm³.
10) Alcohol: wine is an alcoholic beverage and, as you know, the percentage of alcohol can vary from wine to wine. It shouldn't be surprising that this variable is included in the data sets, where it's expressed in % vol.
11) Quality: wine experts graded the wine quality between 0 (very bad) and 10 (very excellent). The eventual number is the median of at least three evaluations made by those same wine experts.
12) pH or the potential of hydrogen is a numeric scale to specify the acidity or basicity of the wine. As you might know, solutions with a pH less than 7 are acidic, while solutions with a pH greater than 7 are basic. With a pH of 7, pure water is neutral. Most wines have a pH between 2.9 and 3.9 and are therefore acidic.
Exploratory Data Analysis
Lets start of by getting a quick view of each dataset.
End of explanation
"""
# Print info on red wine
red.head()
#Print a random sample of 5 obs in white wine dataset
white.sample(5)
# Double check for null values in `red`
pd.isnull(red).sum()
"""
Explanation: Our red wine dataframe has fewer observations (1,599) than the white wine dataframe (4,898). All the values are floats except for the quality variable, which holds integer ratings given by wine experts on a scale of 1-10.
End of explanation
"""
import matplotlib.pyplot as plt
#Split with same y-axis
fig, ax = plt.subplots(1, 2,figsize=(10, 8))
ax[0].hist(red.alcohol, 15, facecolor='red', ec="black", lw=0.5, alpha=0.5)
ax[1].hist(white.alcohol, 15, facecolor='white', ec="black", lw=0.5, alpha=0.5)
fig.subplots_adjust(left=0, right=1, bottom=0, top=0.8, hspace=0.05, wspace=0.2)
ax[0].set_title("Red Wine")
ax[1].set_title("White Wine")
#ax[0].set_ylim([0, 800])
ax[0].set_xlabel("Alcohol (% Vol)")
ax[0].set_ylabel("Frequency")
ax[1].set_xlabel("Alcohol (% Vol)")
ax[1].set_ylabel("Frequency")
#ax[1].set_ylim([0, 800])
fig.suptitle("Distribution of Alcohol in % Vol")
plt.show()
"""
Explanation: Visualizing The Data
One way to do this is by looking at the distributions of some of the dataset's variables and making scatter plots to see possible correlations.
End of explanation
"""
fig, ax = plt.subplots(1, 2, figsize=(15, 5))
ax[0].scatter(red['quality'], red["sulphates"], color="red", label="Red wine")
ax[1].scatter(white['quality'], white['sulphates'], color="white", edgecolors="black", lw=0.5, label="White wine")
ax[0].set_xlabel("Quality")
ax[1].set_xlabel("Quality")
ax[0].set_ylabel("Sulphate")
ax[1].set_ylabel("Sulphate")
ax[0].set_xlim([0,10])
ax[1].set_xlim([0,10])
ax[0].set_ylim([0,2.5])
ax[1].set_ylim([0,2.5])
fig.subplots_adjust(wspace=0.5)
ax[0].legend(loc='best')
ax[1].legend(loc='best')
fig.suptitle("Wine Quality v/s Sulphate")
plt.show()
"""
Explanation: One would notice that most of the wines have 9-10% alcohol in them. Of course, some would notice that 10-12% is also frequent, though not as much as 9%. Moreover, the y-axis scales are different because of the unbalanced number of observations between the red and white datasets.
Sulphates
Next, one thing that interests me is the relation between the sulphates and the quality of the wine. As you may know, sulphates can cause people to have headaches, and I'm wondering if this influences the quality of the wine. What's more, I often hear that women especially don't want to drink wine exactly because it causes headaches. Maybe this affects the ratings for the red wine?
End of explanation
"""
import numpy as np
np.random.seed(570)
redlabels = np.unique(red['quality'])
whitelabels = np.unique(white['quality'])
fig, ax = plt.subplots(1, 2, figsize=(10, 8))
redcolors = np.random.rand(6,4)
whitecolors = np.append(redcolors, np.random.rand(1,4), axis=0)
for i in range(len(redcolors)):
redy = red['alcohol'][red.quality == redlabels[i]]
redx = red['volatile acidity'][red.quality == redlabels[i]]
ax[0].scatter(redx, redy, c=redcolors[i])
for i in range(len(whitecolors)):
whitey = white['alcohol'][white.quality == whitelabels[i]]
whitex = white['volatile acidity'][white.quality == whitelabels[i]]
ax[1].scatter(whitex, whitey, c=whitecolors[i])
ax[0].set_title("Red Wine")
ax[1].set_title("White Wine")
ax[0].set_xlim([0,1.5])
ax[1].set_xlim([0,1.5])
ax[0].set_ylim([6,15.5])
ax[1].set_ylim([6,15.5])
ax[0].set_xlabel("Volatile Acidity")
ax[0].set_ylabel("Alcohol")
ax[1].set_xlabel("Volatile Acidity")
ax[1].set_ylabel("Alcohol")
ax[0].legend(redlabels, loc='best', bbox_to_anchor=(1.3, 1))
ax[1].legend(whitelabels, loc='best', bbox_to_anchor=(1.3, 1))
fig.suptitle("Alcohol - Volatile Acidity")
fig.subplots_adjust(top=.85, wspace=0.7)
plt.show()
"""
Explanation: So, from the graphs above we can see that for (most of) the same sulphate values the ratings stay constant, so sulphates don't necessarily affect the quality as I guessed earlier. We can, however, see that red wine has a higher amount of sulphates, which may explain why drinking red wine causes headaches and why women prefer white over red.
Acidity
Apart from the sulphates, acidity is one of the important wine characteristics that is necessary to achieve quality wines. Great wines often balance out acidity, tannin, alcohol and sweetness. Some more research taught me that in quantities of 0.2 to 0.4 g/L, volatile acidity doesn’t affect a wine’s quality. At higher levels, however, volatile acidity can give wine a sharp, vinegary tactile sensation. Extreme volatile acidity signifies a seriously flawed wine.
End of explanation
"""
import seaborn as sns
# `DataFrame.append` was removed in pandas 2.0; `pd.concat` is the modern equivalent
corr = pd.concat([red, white], ignore_index=True).corr()
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
sns.heatmap(corr,
xticklabels=corr.columns.values,
yticklabels=corr.columns.values,cmap="autumn",linewidths=.2)
plt.show()
"""
Explanation: Correlation Matrix
Since it can be somewhat difficult to interpret graphs, it’s also a good idea to plot a correlation matrix. This will give insights more quickly about which variables correlate:
End of explanation
"""
# Add `type` column to `red` with value 1
red['type'] = 1
# Add `type` column to `white` with value 0
white['type'] = 0
# Row bind white to red
# `DataFrame.append` was removed in pandas 2.0; `pd.concat` is the modern equivalent
wines = pd.concat([red, white], ignore_index=True)
wines.tail()
"""
Explanation: As you would expect, there are some variables that correlate, such as density and residual sugar. Also volatile acidity and type are more closely connected than you originally could have guessed by looking at the two data sets separately, and it was kind of to be expected that free sulfur dioxide and total sulfur dioxide were going to correlate.
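Each cell of the heatmap is just a Pearson correlation coefficient between two columns. As a quick sketch (toy data, not the wine columns), it can be computed by hand:

```python
import math

# Pearson correlation: covariance of the two columns divided by the
# product of their standard deviations.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r_pos = pearson([1, 2, 3, 4], [2, 4, 6, 8])  # perfectly correlated
r_neg = pearson([1, 2, 3, 4], [8, 6, 4, 2])  # perfectly anti-correlated
```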
Data Preprocessing
Create a column to distinguish between red and white by giving red value 1 and white value 0. Why not simply label them 'red' and 'white'? This is because neural networks only works with numerical data and not labels, so it outputs probabilities which we have to later compute to one of the labels. (it's not that difficult!)
End of explanation
"""
wines.shape
from sklearn.model_selection import train_test_split
# Specify the data
X = wines.iloc[:, 0:11]  # `.ix` was removed from pandas; use `.iloc`
# Specify the target labels and flatten the array
y= wines['type']
# Split the data up in train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
"""
Explanation: Train and Test Sets
In this case, there seems to be an imbalance, but we will go with this for the moment. Afterwards, we can evaluate the model and if it underperforms, we can resort to undersampling or oversampling to cover up the difference in observations.
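If we later decide the imbalance matters, random undersampling is one simple remedy: keep all minority-class rows and a same-sized random subset of the majority class. A sketch in pure Python (toy index lists standing in for the dataframes; 1,599 and 4,898 are the red and white row counts):

```python
import random

random.seed(0)
red_idx = list(range(1599))    # minority class (red)
white_idx = list(range(4898))  # majority class (white)

# Sample (without replacement) as many white rows as there are red rows.
white_down = random.sample(white_idx, len(red_idx))
balanced_n = len(red_idx) + len(white_down)
```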
End of explanation
"""
# Import `StandardScaler` from `sklearn.preprocessing`
from sklearn.preprocessing import StandardScaler
# Define the scaler
scaler = StandardScaler().fit(X_train)
# Scale the train set
X_train = scaler.transform(X_train)
# Scale the test set
X_test = scaler.transform(X_test)
"""
Explanation: Standardize The Data
Standardization is a way to deal with values that lie really far apart. The main reason we standardize is that neural networks behave well with smaller numbers: when given a choice, it's always preferable to use smaller numbers, which makes training easier and reduces the chances of getting stuck in local optima.
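Under the hood, StandardScaler just subtracts the per-feature training mean and divides by the per-feature training standard deviation. A hand-rolled sketch on a toy column:

```python
# Manual z-scoring of one toy feature column (what StandardScaler does).
col = [9.4, 9.8, 10.5, 11.2, 12.0]
mean = sum(col) / len(col)
std = (sum((v - mean) ** 2 for v in col) / len(col)) ** 0.5
scaled = [(v - mean) / std for v in col]

# After scaling, the column has mean ~0 and standard deviation ~1.
scaled_mean = sum(scaled) / len(scaled)
scaled_std = (sum(v ** 2 for v in scaled) / len(scaled)) ** 0.5
```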
End of explanation
"""
# Import `Sequential` from `keras.models`
from keras.models import Sequential
# Import `Dense` from `keras.layers`
from keras.layers import Dense
# Initialize the constructor
model = Sequential()
# Add an input layer
model.add(Dense(12, activation='relu', input_shape=(11,)))
# Add one hidden layer
model.add(Dense(8, activation='relu'))
# Add an output layer
model.add(Dense(1, activation='sigmoid'))
# Model output shape
model.output_shape
# Model summary
model.summary()
# Model config
model.get_config()
# List all weight tensors
model.get_weights()
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(X_train, y_train,epochs=10, batch_size=1, verbose=1)
"""
Explanation: Now that we have our data preprocessed, we can move on to the real work: building our own neural network to classify wines.
Model Data
A quick way to get started is to use the Keras Sequential model: it’s a linear stack of layers. You can easily create the model by passing a list of layer instances to the constructor, which you set up by running: model = Sequential()
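Once the model in the next cell is built, model.summary() reports per-layer parameter counts. Here's a small sketch of where those numbers come from, assuming the 11 → 12 → 8 → 1 Dense architecture used in this tutorial (the helper name is ours, not a Keras API):

```python
# Each Dense layer has (inputs * units) weights plus `units` biases.
def dense_params(n_in, n_units):
    return n_in * n_units + n_units

layer_sizes = [(11, 12), (12, 8), (8, 1)]
params = [dense_params(i, u) for i, u in layer_sizes]  # [144, 104, 9]
total = sum(params)                                    # 257
```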
End of explanation
"""
y_pred = model.predict(X_test)
# round predictions
y_pred = [round(x[0]) for x in y_pred]
y_pred[:5]
y_test[:5]
score = model.evaluate(X_test, y_test,verbose=1)
print(score)
# evaluate the model
scores = model.evaluate(X_test, y_test)
print("\n%s: %.2f%%" % (model.metrics_names[0], scores[0]*100))
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
# Import the modules from `sklearn.metrics`
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score, cohen_kappa_score
# Confusion matrix
confusion_matrix(y_test, y_pred)
# Precision
precision_score(y_test, y_pred)
# Recall
recall_score(y_test, y_pred)
# F1 score
f1_score(y_test,y_pred)
# Cohen's kappa
cohen_kappa_score(y_test, y_pred)
"""
Explanation: Predict Values
Let’s put your model to use! You can make predictions for the labels of the test set with it. Just use predict() and pass the test set to it to predict the labels for the data. In this case, the result is stored in y_pred:
End of explanation
"""
|
tensorflow/probability | tensorflow_probability/examples/jupyter_notebooks/Variational_Inference_and_Joint_Distributions.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
!pip3 install -q tf-nightly tfp-nightly
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
import warnings
tfd = tfp.distributions
tfb = tfp.bijectors
plt.rcParams['figure.facecolor'] = '1.'
# Load the Radon dataset from `tensorflow_datasets` and filter to data from
# Minnesota.
dataset = tfds.as_numpy(
tfds.load('radon', split='train').filter(
lambda x: x['features']['state'] == 'MN').batch(10**9))
# Dependent variable: Radon measurements by house.
dataset = next(iter(dataset))
radon_measurement = dataset['activity'].astype(np.float32)
radon_measurement[radon_measurement <= 0.] = 0.1
log_radon = np.log(radon_measurement)
# Measured uranium concentrations in surrounding soil.
uranium_measurement = dataset['features']['Uppm'].astype(np.float32)
log_uranium = np.log(uranium_measurement)
# County indicator.
county_strings = dataset['features']['county'].astype('U13')
unique_counties, county = np.unique(county_strings, return_inverse=True)
county = county.astype(np.int32)
num_counties = unique_counties.size
# Floor on which the measurement was taken.
floor_of_house = dataset['features']['floor'].astype(np.int32)
# Average floor by county (contextual effect).
county_mean_floor = []
for i in range(num_counties):
county_mean_floor.append(floor_of_house[county == i].mean())
county_mean_floor = np.array(county_mean_floor, dtype=log_radon.dtype)
floor_by_county = county_mean_floor[county]
"""
Explanation: Variational Inference on Probabilistic Graphical Models with Joint Distributions
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/Variational_Inference_and_Joint_Distributions"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Variational_Inference_and_Joint_Distributions.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Variational_Inference_and_Joint_Distributions.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/examples/jupyter_notebooks/Variational_Inference_and_Joint_Distributions.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Variational Inference (VI) casts approximate Bayesian inference as an optimization problem and seeks a 'surrogate' posterior distribution that minimizes the KL divergence with the true posterior. Gradient-based VI is often faster than MCMC methods, composes naturally with optimization of model parameters, and provides a lower bound on model evidence that can be used directly for model comparison, convergence diagnosis, and composable inference.
TensorFlow Probability offers tools for fast, flexible, and scalable VI that fit naturally into the TFP stack. These tools enable the construction of surrogate posteriors with covariance structures induced by linear transformations or normalizing flows.
VI can be used to estimate Bayesian credible intervals for parameters of a regression model to estimate the effects of various treatments or observed features on an outcome of interest. Credible intervals bound the values of an unobserved parameter with a certain probability, according to the posterior distribution of the parameter conditioned on observed data and given an assumption on the parameter's prior distribution.
In this Colab, we demonstrate how to use VI to obtain credible intervals for parameters of a Bayesian linear regression model for radon levels measured in homes (using Gelman et al.'s (2007) Radon dataset; see similar examples in Stan). We demonstrate how TFP JointDistributions combine with bijectors to build and fit two types of expressive surrogate posteriors:
a standard Normal distribution transformed by a block matrix. The matrix may reflect independence among some components of the posterior and dependence among others, relaxing the assumption of a mean-field or full-covariance posterior.
a more complex, higher-capacity inverse autoregressive flow.
The surrogate posteriors are trained and compared with results from a mean-field surrogate posterior baseline, as well as ground-truth samples from Hamiltonian Monte Carlo.
Overview of Bayesian Variational Inference
Suppose we have the following generative process, where $\theta$ represents random parameters, $\omega$ represents deterministic parameters, and the $x_i$ are features and the $y_i$ are target values for $i=1,\ldots,n$ observed data points:
\begin{align}
&\theta \sim r(\Theta) && \text{(Prior)}\\
&\text{for } i = 1 \ldots n: \nonumber \\
&\quad y_i \sim p(Y_i|x_i, \theta, \omega) && \text{(Likelihood)}
\end{align}
VI is then characterized by:
$\newcommand{\E}{\operatorname{\mathbb{E}}}
\newcommand{\K}{\operatorname{\mathbb{K}}}
\newcommand{\defeq}{\overset{\tiny\text{def}}{=}}
\DeclareMathOperator*{\argmin}{arg\,min}$
\begin{align}
-\log p(\{y_i\}_i^n|\{x_i\}_i^n, \omega)
&\defeq -\log \int \textrm{d}\theta\, r(\theta) \prod_i^n p(y_i|x_i,\theta, \omega) && \text{(Really hard integral)} \\
&= -\log \int \textrm{d}\theta\, q(\theta) \frac{1}{q(\theta)} r(\theta) \prod_i^n p(y_i|x_i,\theta, \omega) && \text{(Multiply by 1)}\\
&\le - \int \textrm{d}\theta\, q(\theta) \log \frac{r(\theta) \prod_i^n p(y_i|x_i,\theta, \omega)}{q(\theta)} && \text{(Jensen's inequality)}\\
&\defeq \sum_i^n \E_{q(\Theta)}[ -\log p(y_i|x_i,\Theta, \omega) ] + \K[q(\Theta), r(\Theta)]\\
&\defeq \text{"expected negative log likelihood"} + \text{"kl regularizer"}
\end{align}
(Technically we're assuming $q$ is absolutely continuous with respect to $r$. See also, Jensen's inequality.)
Since the bound holds for all q, it is obviously tightest for:
$$q^*,\omega^* = \argmin_{q \in \mathcal{Q},\omega\in\mathbb{R}^d} \left\{ \sum_i^n\E_{q(\Theta)}\left[ -\log p(y_i|x_i,\Theta, \omega) \right] + \K[q(\Theta), r(\Theta)] \right\}$$
Regarding terminology, we call
$q^*$ the "surrogate posterior," and,
$\mathcal{Q}$ the "surrogate family."
$\omega^*$ represents the maximum-likelihood values of the deterministic parameters with respect to the VI loss. See this survey for more information on variational inference.
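To make the bound concrete, here is a tiny closed-form example (a sketch, independent of TFP): prior $\theta \sim N(0,1)$, likelihood $y|\theta \sim N(\theta,1)$, one observation $y=1$. For a Gaussian surrogate $q = N(\mu, \sigma^2)$ every term of the ELBO is analytic, the bound is tight exactly when $q$ is the true posterior $N(y/2, 1/2)$, and any other $q$ gives a strictly smaller value:

```python
import math

def elbo(mu, var, y=1.0):
    # E_q[log p(y | theta)] for y | theta ~ N(theta, 1)
    exp_loglik = -0.5 * math.log(2 * math.pi) - 0.5 * ((y - mu) ** 2 + var)
    # KL(q || r) between N(mu, var) and the N(0, 1) prior (closed form)
    kl = 0.5 * (var + mu ** 2 - 1.0 - math.log(var))
    return exp_loglik - kl

y = 1.0
# Marginally, y ~ N(0, 2), so the exact log evidence is available.
log_evidence = -0.5 * math.log(2 * math.pi * 2.0) - y ** 2 / 4.0

elbo_opt = elbo(mu=y / 2.0, var=0.5)  # q = true posterior: bound is tight
elbo_bad = elbo(mu=0.0, var=1.0)      # q = prior: strictly looser bound
```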
Example: Bayesian hierarchical linear regression on Radon measurements
Radon is a radioactive gas that enters homes through contact points with the
ground. It is a carcinogen that is the primary cause of lung cancer in
non-smokers. Radon levels vary greatly from household to household.
The EPA did a study of radon levels in 80,000 houses. Two important predictors
are:
- Floor on which the measurement was taken (radon higher in basements)
- County uranium level (positive correlation with radon levels)
Predicting radon levels in houses grouped by county is a classic problem in Bayesian hierarchical modeling, introduced by Gelman and Hill (2006). We will build a hierarchical linear model to predict radon measurements in houses, in which the hierarchy is the grouping of houses by county. We are interested in credible intervals for the effect of location (county) on the radon level of houses in Minnesota. In order to isolate this effect, the effects of floor and uranium level are also included in the model. Additionally, we will incorporate a contextual effect corresponding to the mean floor on which the measurement was taken, by county, so that if there is variation among counties of the floor on which the measurements were taken, this is not attributed to the county effect.
End of explanation
"""
# Create variables for fixed effects.
floor_weight = tf.Variable(0.)
bias = tf.Variable(0.)
# Variables for scale parameters.
log_radon_scale = tfp.util.TransformedVariable(1., tfb.Exp())
county_effect_scale = tfp.util.TransformedVariable(1., tfb.Exp())
# Define the probabilistic graphical model as a JointDistribution.
@tfd.JointDistributionCoroutineAutoBatched
def model():
uranium_weight = yield tfd.Normal(0., scale=1., name='uranium_weight')
county_floor_weight = yield tfd.Normal(
0., scale=1., name='county_floor_weight')
county_effect = yield tfd.Sample(
tfd.Normal(0., scale=county_effect_scale),
sample_shape=[num_counties], name='county_effect')
yield tfd.Normal(
loc=(log_uranium * uranium_weight + floor_of_house* floor_weight
+ floor_by_county * county_floor_weight
+ tf.gather(county_effect, county, axis=-1)
+ bias),
scale=log_radon_scale[..., tf.newaxis],
name='log_radon')
# Pin the observed `log_radon` values to model the un-normalized posterior.
target_model = model.experimental_pin(log_radon=log_radon)
"""
Explanation: The regression model is specified as follows:
$\newcommand{\Normal}{\operatorname{\sf Normal}}$
\begin{align}
&\text{uranium\_weight} \sim \Normal(0, 1) \\
&\text{county\_floor\_weight} \sim \Normal(0, 1) \\
&\text{for } j = 1\ldots \text{num\_counties}:\\
&\quad \text{county\_effect}_j \sim \Normal(0, \sigma_c)\\
&\text{for } i = 1\ldots n:\\
&\quad \mu_i = ( \\
&\quad\quad \text{bias} \\
&\quad\quad + \text{county\_effect}_{\text{county}_i} \\
&\quad\quad +\text{log\_uranium}_i \times \text{uranium\_weight} \\
&\quad\quad +\text{floor\_of\_house}_i \times \text{floor\_weight} \\
&\quad\quad +\text{floor\_by\_county}_{\text{county}_i} \times \text{county\_floor\_weight} ) \\
&\quad \text{log\_radon}_i \sim \Normal(\mu_i, \sigma_y)
\end{align}
in which $i$ indexes the observations and $\text{county}_i$ is the county in which the $i$th observation was taken.
We use a county-level random effect to capture geographical variation. The parameters uranium_weight and county_floor_weight are modeled probabilistically, and floor_weight and the constant bias are deterministic. These modeling choices are largely arbitrary, and are made for the purpose of demonstrating VI on a probabilistic model of reasonable complexity. For a more thorough discussion of multilevel modeling with fixed and random effects in TFP, using the radon dataset, see Multilevel Modeling Primer and Fitting Generalized Linear Mixed-effects Models Using Variational Inference.
End of explanation
"""
# Determine the `event_shape` of the posterior, and calculate the size of each
# `event_shape` component. These determine the sizes of the components of the
# underlying standard Normal distribution, and the dimensions of the blocks in
# the blockwise matrix transformation.
event_shape = target_model.event_shape_tensor()
flat_event_shape = tf.nest.flatten(event_shape)
flat_event_size = tf.nest.map_structure(tf.reduce_prod, flat_event_shape)
# The `event_space_bijector` maps unconstrained values (in R^n) to the support
# of the prior -- we'll need this at the end to constrain Multivariate Normal
# samples to the prior's support.
event_space_bijector = target_model.experimental_default_event_space_bijector()
"""
Explanation: Expressive surrogate posteriors
Next we estimate the posterior distributions of the random effects using VI with two different types of surrogate posteriors:
- A constrained multivariate Normal distribution, with covariance structure induced by a blockwise matrix transformation.
- A multivariate Standard Normal distribution transformed by an Inverse Autoregressive Flow, which is then split and restructured to match the support of the posterior.
Multivariate Normal surrogate posterior
To build this surrogate posterior, a trainable linear operator is used to induce correlation among the components of the posterior.
End of explanation
"""
base_standard_dist = tfd.JointDistributionSequential(
[tfd.Sample(tfd.Normal(0., 1.), s) for s in flat_event_size])
"""
Explanation: Construct a JointDistribution with vector-valued standard Normal components, with sizes determined by the corresponding prior components. The components should be vector-valued so they can be transformed by the linear operator.
End of explanation
"""
operators = (
(tf.linalg.LinearOperatorDiag,), # Variance of uranium weight (scalar).
(tf.linalg.LinearOperatorFullMatrix, # Covariance between uranium and floor-by-county weights.
tf.linalg.LinearOperatorDiag), # Variance of floor-by-county weight (scalar).
(None, # Independence between uranium weight and county effects.
None, # Independence between floor-by-county and county effects.
tf.linalg.LinearOperatorDiag) # Independence among the 85 county effects.
)
block_tril_linop = (
tfp.experimental.vi.util.build_trainable_linear_operator_block(
operators, flat_event_size))
scale_bijector = tfb.ScaleMatvecLinearOperatorBlock(block_tril_linop)
"""
Explanation: Build a trainable blockwise lower-triangular linear operator. We'll apply it to the standard Normal distribution to implement a (trainable) blockwise matrix transformation and induce the correlation structure of the posterior.
Within the blockwise linear operator, a trainable full-matrix block represents full covariance between two components of the posterior, while a block of zeros (or None) expresses independence. Blocks on the diagonal are either lower-triangular or diagonal matrices, so that the entire block structure represents a lower-triangular matrix.
Applying this bijector to the base distribution results in a multivariate Normal distribution with mean 0 and (Cholesky-factored) covariance equal to the lower-triangular block matrix.
End of explanation
"""
loc_bijector = tfb.JointMap(
tf.nest.map_structure(
lambda s: tfb.Shift(
tf.Variable(tf.random.uniform(
(s,), minval=-2., maxval=2., dtype=tf.float32))),
flat_event_size))
"""
Explanation: After applying the linear operator to the standard Normal distribution, apply a multipart Shift bijector to allow the mean to take nonzero values.
End of explanation
"""
# Reshape each component to match the prior, using a nested structure of
# `Reshape` bijectors wrapped in `JointMap` to form a multipart bijector.
reshape_bijector = tfb.JointMap(
tf.nest.map_structure(tfb.Reshape, flat_event_shape))
# Restructure the flat list of components to match the prior's structure
unflatten_bijector = tfb.Restructure(
tf.nest.pack_sequence_as(
event_shape, range(len(flat_event_shape))))
"""
Explanation: The resulting multivariate Normal distribution, obtained by transforming the standard Normal distribution with the scale and location bijectors, must be reshaped and restructured to match the prior, and finally constrained to the support of the prior.
End of explanation
"""
surrogate_posterior = tfd.TransformedDistribution(
base_standard_dist,
bijector = tfb.Chain( # Note that the chained bijectors are applied in reverse order
[
event_space_bijector, # constrain the surrogate to the support of the prior
unflatten_bijector, # pack the reshaped components into the `event_shape` structure of the posterior
reshape_bijector, # reshape the vector-valued components to match the shapes of the posterior components
loc_bijector, # allow for nonzero mean
scale_bijector # apply the block matrix transformation to the standard Normal distribution
]))
"""
Explanation: Now, put it all together -- chain the trainable bijectors together and apply them to the base standard Normal distribution to construct the surrogate posterior.
End of explanation
"""
optimizer = tf.optimizers.Adam(learning_rate=1e-2)
mvn_loss = tfp.vi.fit_surrogate_posterior(
target_model.unnormalized_log_prob,
surrogate_posterior,
optimizer=optimizer,
num_steps=10**4,
sample_size=16,
jit_compile=True)
mvn_samples = surrogate_posterior.sample(1000)
mvn_final_elbo = tf.reduce_mean(
target_model.unnormalized_log_prob(*mvn_samples)
- surrogate_posterior.log_prob(mvn_samples))
print('Multivariate Normal surrogate posterior ELBO: {}'.format(mvn_final_elbo))
plt.plot(mvn_loss)
plt.xlabel('Training step')
_ = plt.ylabel('Loss value')
"""
Explanation: Train the multivariate Normal surrogate posterior.
End of explanation
"""
st_louis_co = 69 # Index of St. Louis, the county with the most observations.
hennepin_co = 25 # Index of Hennepin, with the second-most observations.
def pack_samples(samples):
return {'County effect (St. Louis)': samples.county_effect[..., st_louis_co],
'County effect (Hennepin)': samples.county_effect[..., hennepin_co],
'Uranium weight': samples.uranium_weight,
'Floor-by-county weight': samples.county_floor_weight}
def plot_boxplot(posterior_samples):
fig, axes = plt.subplots(1, 4, figsize=(16, 4))
# Invert the results dict for easier plotting.
k = list(posterior_samples.values())[0].keys()
plot_results = {
v: {p: posterior_samples[p][v] for p in posterior_samples} for v in k}
for i, (var, var_results) in enumerate(plot_results.items()):
sns.boxplot(data=list(var_results.values()), ax=axes[i],
width=0.18*len(var_results), whis=(2.5, 97.5))
# axes[i].boxplot(list(var_results.values()), whis=(2.5, 97.5))
axes[i].title.set_text(var)
fs = 10 if len(var_results) < 4 else 8
axes[i].set_xticklabels(list(var_results.keys()), fontsize=fs)
results = {'Multivariate Normal': pack_samples(mvn_samples)}
print('Bias is: {:.2f}'.format(bias.numpy()))
print('Floor fixed effect is: {:.2f}'.format(floor_weight.numpy()))
plot_boxplot(results)
"""
Explanation: Since the trained surrogate posterior is a TFP distribution, we can take samples from it and process them to produce posterior credible intervals for the parameters.
The box-and-whiskers plots below show 50% and 95% credible intervals for the county effect of the two largest counties and the regression weights on soil uranium measurements and mean floor by county. The posterior credible intervals for county effects indicate that location in St. Louis county is associated with lower radon levels, after accounting for other variables, and that the effect of location in Hennepin county is near neutral.
Posterior credible intervals on the regression weights show that higher levels of soil uranium are associated with higher radon levels, and counties where measurements were taken on higher floors (likely because the house didn't have a basement) tend to have higher levels of radon, which could relate to soil properties and their effect on the type of structures built.
The (deterministic) coefficient of floor is negative, indicating that lower floors have higher radon levels, as expected.
End of explanation
"""
# Build a standard Normal with a vector `event_shape`, with length equal to the
# total number of degrees of freedom in the posterior.
base_distribution = tfd.Sample(
tfd.Normal(0., 1.), sample_shape=[tf.reduce_sum(flat_event_size)])
# Apply an IAF to the base distribution.
num_iafs = 2
iaf_bijectors = [
tfb.Invert(tfb.MaskedAutoregressiveFlow(
shift_and_log_scale_fn=tfb.AutoregressiveNetwork(
params=2, hidden_units=[256, 256], activation='relu')))
for _ in range(num_iafs)
]
# Split the base distribution's `event_shape` into components that are equal
# in size to the prior's components.
split = tfb.Split(flat_event_size)
# Chain these bijectors and apply them to the standard Normal base distribution
# to build the surrogate posterior. `event_space_bijector`,
# `unflatten_bijector`, and `reshape_bijector` are the same as in the
# multivariate Normal surrogate posterior.
iaf_surrogate_posterior = tfd.TransformedDistribution(
base_distribution,
bijector=tfb.Chain([
event_space_bijector, # constrain the surrogate to the support of the prior
unflatten_bijector, # pack the reshaped components into the `event_shape` structure of the prior
reshape_bijector, # reshape the vector-valued components to match the shapes of the prior components
split] + # Split the samples into components of the same size as the prior components
iaf_bijectors # Apply a flow model to the Tensor-valued standard Normal distribution
))
"""
Explanation: Inverse Autoregressive Flow surrogate posterior
Inverse Autoregressive Flows (IAFs) are normalizing flows that use neural networks to capture complex, nonlinear dependencies among components of the distribution. Next we build an IAF surrogate posterior to see whether this higher-capacity, more flexible model outperforms the constrained multivariate Normal.
End of explanation
"""
optimizer=tf.optimizers.Adam(learning_rate=1e-2)
iaf_loss = tfp.vi.fit_surrogate_posterior(
target_model.unnormalized_log_prob,
iaf_surrogate_posterior,
optimizer=optimizer,
num_steps=10**4,
sample_size=4,
jit_compile=True)
iaf_samples = iaf_surrogate_posterior.sample(1000)
iaf_final_elbo = tf.reduce_mean(
target_model.unnormalized_log_prob(*iaf_samples)
- iaf_surrogate_posterior.log_prob(iaf_samples))
print('IAF surrogate posterior ELBO: {}'.format(iaf_final_elbo))
plt.plot(iaf_loss)
plt.xlabel('Training step')
_ = plt.ylabel('Loss value')
"""
Explanation: Train the IAF surrogate posterior.
End of explanation
"""
results['IAF'] = pack_samples(iaf_samples)
plot_boxplot(results)
"""
Explanation: The credible intervals for the IAF surrogate posterior appear similar to those of the constrained multivariate Normal.
End of explanation
"""
# A block-diagonal linear operator, in which each block is a diagonal operator,
# transforms the standard Normal base distribution to produce a mean-field
# surrogate posterior.
operators = (tf.linalg.LinearOperatorDiag,
tf.linalg.LinearOperatorDiag,
tf.linalg.LinearOperatorDiag)
block_diag_linop = (
tfp.experimental.vi.util.build_trainable_linear_operator_block(
operators, flat_event_size))
mean_field_scale = tfb.ScaleMatvecLinearOperatorBlock(block_diag_linop)
mean_field_loc = tfb.JointMap(
tf.nest.map_structure(
lambda s: tfb.Shift(
tf.Variable(tf.random.uniform(
(s,), minval=-2., maxval=2., dtype=tf.float32))),
flat_event_size))
mean_field_surrogate_posterior = tfd.TransformedDistribution(
base_standard_dist,
bijector = tfb.Chain( # Note that the chained bijectors are applied in reverse order
[
event_space_bijector, # constrain the surrogate to the support of the prior
unflatten_bijector, # pack the reshaped components into the `event_shape` structure of the posterior
reshape_bijector, # reshape the vector-valued components to match the shapes of the posterior components
mean_field_loc, # allow for nonzero mean
mean_field_scale # apply the block matrix transformation to the standard Normal distribution
]))
optimizer=tf.optimizers.Adam(learning_rate=1e-2)
mean_field_loss = tfp.vi.fit_surrogate_posterior(
target_model.unnormalized_log_prob,
mean_field_surrogate_posterior,
optimizer=optimizer,
num_steps=10**4,
sample_size=16,
jit_compile=True)
mean_field_samples = mean_field_surrogate_posterior.sample(1000)
mean_field_final_elbo = tf.reduce_mean(
target_model.unnormalized_log_prob(*mean_field_samples)
- mean_field_surrogate_posterior.log_prob(mean_field_samples))
print('Mean-field surrogate posterior ELBO: {}'.format(mean_field_final_elbo))
plt.plot(mean_field_loss)
plt.xlabel('Training step')
_ = plt.ylabel('Loss value')
"""
Explanation: Baseline: Mean-field surrogate posterior
VI surrogate posteriors are often assumed to be mean-field (independent) Normal distributions, with trainable means and variances, that are constrained to the support of the prior with a bijective transformation. We define a mean-field surrogate posterior in addition to the two more expressive surrogate posteriors, using the same general formula as the multivariate Normal surrogate posterior.
End of explanation
"""
results['Mean Field'] = pack_samples(mean_field_samples)
plot_boxplot(results)
"""
Explanation: In this case, the mean field surrogate posterior gives similar results to the more expressive surrogate posteriors, indicating that this simpler model may be adequate for the inference task.
End of explanation
"""
num_chains = 8
num_leapfrog_steps = 3
step_size = 0.4
num_steps=20000
flat_event_shape = tf.nest.flatten(target_model.event_shape)
enum_components = list(range(len(flat_event_shape)))
bijector = tfb.Restructure(
enum_components,
tf.nest.pack_sequence_as(target_model.event_shape, enum_components))(
target_model.experimental_default_event_space_bijector())
current_state = bijector(
tf.nest.map_structure(
lambda e: tf.zeros([num_chains] + list(e), dtype=tf.float32),
target_model.event_shape))
hmc = tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=target_model.unnormalized_log_prob,
num_leapfrog_steps=num_leapfrog_steps,
step_size=[tf.fill(s.shape, step_size) for s in current_state])
hmc = tfp.mcmc.TransformedTransitionKernel(
hmc, bijector)
hmc = tfp.mcmc.DualAveragingStepSizeAdaptation(
hmc,
num_adaptation_steps=int(num_steps // 2 * 0.8),
target_accept_prob=0.9)
chain, is_accepted = tf.function(
lambda current_state: tfp.mcmc.sample_chain(
current_state=current_state,
kernel=hmc,
num_results=num_steps // 2,
num_burnin_steps=num_steps // 2,
trace_fn=lambda _, pkr:
(pkr.inner_results.inner_results.is_accepted),
),
autograph=False,
jit_compile=True)(current_state)
accept_rate = tf.reduce_mean(tf.cast(is_accepted, tf.float32))
ess = tf.nest.map_structure(
lambda c: tfp.mcmc.effective_sample_size(
c,
cross_chain_dims=1,
filter_beyond_positive_pairs=True),
chain)
r_hat = tf.nest.map_structure(tfp.mcmc.potential_scale_reduction, chain)
hmc_samples = pack_samples(
tf.nest.pack_sequence_as(target_model.event_shape, chain))
print('Acceptance rate is {}'.format(accept_rate))
"""
Explanation: Ground truth: Hamiltonian Monte Carlo (HMC)
We use HMC to generate "ground truth" samples from the true posterior, for comparison with results of the surrogate posteriors.
End of explanation
"""
def plot_traces(var_name, samples):
fig, axes = plt.subplots(1, 2, figsize=(14, 1.5), sharex='col', sharey='col')
for chain in range(num_chains):
s = samples.numpy()[:, chain]
axes[0].plot(s, alpha=0.7)
sns.kdeplot(s, ax=axes[1], shade=False)
axes[0].title.set_text("'{}' trace".format(var_name))
axes[1].title.set_text("'{}' distribution".format(var_name))
axes[0].set_xlabel('Iteration')
warnings.filterwarnings('ignore')
for var, var_samples in hmc_samples.items():
plot_traces(var, var_samples)
"""
Explanation: Plot sample traces to sanity-check HMC results.
End of explanation
"""
results['HMC'] = hmc_samples
plot_boxplot(results)
"""
Explanation: All three surrogate posteriors produced credible intervals that are visually similar to the HMC samples, though sometimes under-dispersed due to the effect of the ELBO loss, as is common in VI.
End of explanation
"""
#@title Plotting functions
plt.rcParams.update({'axes.titlesize': 'medium', 'xtick.labelsize': 'medium'})
def plot_loss_and_elbo():
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
axes[0].scatter([0, 1, 2],
[mvn_final_elbo.numpy(),
iaf_final_elbo.numpy(),
mean_field_final_elbo.numpy()])
axes[0].set_xticks(ticks=[0, 1, 2])
axes[0].set_xticklabels(labels=[
'Multivariate Normal', 'IAF', 'Mean Field'])
axes[0].title.set_text('Evidence Lower Bound (ELBO)')
axes[1].plot(mvn_loss, label='Multivariate Normal')
axes[1].plot(iaf_loss, label='IAF')
axes[1].plot(mean_field_loss, label='Mean Field')
axes[1].set_ylim([1000, 4000])
axes[1].set_xlabel('Training step')
axes[1].set_ylabel('Loss (negative ELBO)')
axes[1].title.set_text('Loss')
plt.legend()
plt.show()
plt.rcParams.update({'axes.titlesize': 'medium', 'xtick.labelsize': 'small'})
def plot_kdes(num_chains=8):
fig, axes = plt.subplots(2, 2, figsize=(12, 8))
k = list(results.values())[0].keys()
plot_results = {
v: {p: results[p][v] for p in results} for v in k}
for i, (var, var_results) in enumerate(plot_results.items()):
ax = axes[i % 2, i // 2]
for posterior, posterior_results in var_results.items():
if posterior == 'HMC':
label = posterior
for chain in range(num_chains):
sns.kdeplot(
posterior_results[:, chain],
ax=ax, shade=False, color='k', linestyle=':', label=label)
label=None
else:
sns.kdeplot(
posterior_results, ax=ax, shade=False, label=posterior)
ax.title.set_text('{}'.format(var))
ax.legend()
"""
Explanation: Additional results
End of explanation
"""
plot_loss_and_elbo()
"""
Explanation: Evidence Lower Bound (ELBO)
IAF, by far the largest and most flexible surrogate posterior, converges to the highest Evidence Lower Bound (ELBO).
End of explanation
"""
plot_kdes()
"""
Explanation: Posterior samples
Samples from each surrogate posterior, compared with HMC ground truth samples (a different visualization of the samples shown in the box plots).
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.18/_downloads/91bce2f7850f38d948be352bfc02e16c/plot_montage.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Joan Massich <mailsik@gmail.com>
#
# License: BSD Style.
from mayavi import mlab
import os.path as op
import mne
from mne.channels.montage import get_builtin_montages
from mne.datasets import fetch_fsaverage
from mne.viz import plot_alignment
subjects_dir = op.dirname(fetch_fsaverage())
"""
Explanation: Plotting sensor layouts of EEG Systems
This example illustrates how to load all the EEG system montages
shipped with MNE-Python and display them on the fsaverage template.
End of explanation
"""
for current_montage in get_builtin_montages():
montage = mne.channels.read_montage(current_montage,
unit='auto',
transform=False)
info = mne.create_info(ch_names=montage.ch_names,
sfreq=1,
ch_types='eeg',
montage=montage)
fig = plot_alignment(info, trans=None,
subject='fsaverage',
subjects_dir=subjects_dir,
eeg=['projected'],
)
mlab.view(135, 80)
mlab.title(montage.kind, figure=fig)
"""
Explanation: check all montages
End of explanation
"""
|
karenlmasters/ComputationalPhysicsUnit | StochasticMethods/RandomNumbersLecture1.ipynb | apache-2.0 | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Random Processes in Computational Physics
The contents of these Jupyter Notebook lecture notes are:
Introduction to Random Numbers in Physics
Random Number Generation
Python Packages for Random Numbers
Coding for Probability (atomic decay example)
Non-uniform random numbers
As usual I recommend you follow along by typing the code snippets into your own file. Don't forget to call the packages etc. at the start of each code file.
End of explanation
"""
#Review the documentation for NumPy's random module:
np.random?
"""
Explanation: Random Processes in Physics
Examples of physical processes that are/can be modelled as random include:
Radioactive decay - we know the probability of decay per unit time from quantum physics, but the exact time of the decay is random.
Brownian motion - if we could track the motion of all atomic particles, this would not actually be random, but appears random as we cannot.
Youtube Video of Brownian Motion: https://www.youtube.com/watch?v=cDcprgWiQEY
Chaotic systems - again not truely random in the sense of radioactive decay, but can be modelled as random.
Human or animal behaviour can also be modelled as random in some circumstances.
Random Number Generation
There are many different ways to generate uniform random numbers over a specified range (such as 0-1). Physically, we can for example:
spin a roulette wheel
draw balls from a lottery
throw darts at a board
thow dice
However, when we wish to use the numbers in a computer, we need a way to generate the numbers algorithmically.
Numerically/arithmetically - use a sequential method where each new number is a deterministic function of the previous numbers.
But this destroys their true randomness and makes them, at best, "pseudo-random".
However, in most cases, it is sufficient if the numbers “look” uniformly distributed and have no correlation between them. i.e. they pass statistical tests and obey the central limit theorem.
For example consider the function:
$x' = (ax + c) \mod m$
where $a$, $c$ and $m$ are integer constants, and $x$ is an integer variable. Recall that "$n \mod m$" means you calculate the remainder when $n$ is divided by $m$.
Now we can use this to generate a sequence of numbers by putting the outcome of this equation ($x'$) back in as the new starting value ($x$). These will act like random numbers. Try it.....
Class Exercise
Starting from $x = 1$ write a short programme which generates 100 values in this sequence and plots them on a graph. Please use the following inputs:
a = 1664525
c = 1013904223
m = 4294967296
Tip 1: python syntax for "mod m" is:
%m
So your base code will look like:
xp = (a*x+c)%m
Extension problem: this won't work for all values of a, c and m. Can you find some which don't generate pseudo-random numbers?
This is an example of a simple pseudo-random number generator (PRNG). Technically it's a "linear congruential random number generator". Things to note:
* It's not really random.
* It can only generate numbers between 0 and m-1.
* The choices of a, c and m matter.
* The choice of x also matters. Do you get the same values for x=2?
For many codes this is sufficient, but you can do better. Fortunately Python (NumPy) comes with a number of better generators as built-in packages, so we can benefit from the expertise of others in our computational physics codes.
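For reference, the generator described above can be packaged as a short function (the constants are the ones from the exercise; dividing by m rescales the integers to floats in [0, 1)):

```python
def lcg(x, a=1664525, c=1013904223, m=4294967296, n=100):
    """Return n pseudo-random integers in [0, m) from seed x."""
    values = []
    for _ in range(n):
        x = (a * x + c) % m   # each value is a deterministic function of the last
        values.append(x)
    return values

seq = lcg(1)
uniforms = [v / 4294967296 for v in seq]   # rescale to floats in [0, 1)
```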
Good Pseudo-Random Number Generators
All pseudo-random number generators (PRNG) should possess a few key properties. Namely, they should
be fast and not memory intensive
be able to reproduce a given stream of random numbers (for debugging/verification of computer programs or so we can use identical numbers to compare different systems)
be able to produce several different independent “streams” of random numbers
have a long periodicity, so that they do not wrap around and produce the same numbers again within a reasonably long window.
To obtain a sequence of pseudo-random numbers:
initialize the state of the generator with a truly random "seed" value
generator uses that seed to create an initial "state", then produces a pseudo-random sequence of numbers from that state.
But note:
* The sequence will eventually repeat when the generator's state returns to that initial one.
* The length of the sequence of non-repeating numbers is the *period* of the PRNG.
It is relatively easy to build PRNGs with periods long enough for many practical applications, but one must be cautious in applying PRNG's to problems that require very large quantities of random numbers.
Almost all languages and simulation packages have good built-in generators. In Python, we can use the NumPy random library, which is based on the Mersenne-Twister algorithm developed in 1997.
Python Random Number Library
End of explanation
"""
#print 5 uniformly distributed numbers between 0 and 1
print(np.random.random(5))
#print another 5 - should be different
print(np.random.random(5))
#print 5 uniformly distributed integers between 1 and 10
print(np.random.randint(1,11,5))
#print another 5 - should be different
print(np.random.randint(1,11,5))
"""
Explanation: Some basic functions to point out (we'll get to others in a bit):
random() - Uniformly distributed floats over [0, 1]. Will include zero, but not one. If you inclue a number, n in the bracket you get n random floats.
randint(n,m) - A single random integer from n to m-1
End of explanation
"""
#If you want to save a random number for future use:
z=np.random.random()
print("The number is ",z)
#Rerun random
print(np.random.random())
print("The number is still",z)
"""
Explanation: Notice you have to use 1-11 for the range. Why?
End of explanation
"""
np.random.seed(42)
for i in range(4):
print(np.random.random())
np.random.seed(42)
for i in range(4):
print(np.random.random())
np.random.seed(39)
for i in range(4):
print(np.random.random())
"""
Explanation: In Class Exercise - Rolling Dice
Write a programme that generates and prints out two random numbers between 1 and 6. This simulates the rolling of two dice.
Now modify the programme to simulate making 2 million rolls of two dice. What fraction of the time do you get double six?
Extension: Plot a histogram of the frequency of the total of the two dice over the 2 million rolls.
Seeded Random Numbers
Sometimes in computational physics we want to generate the same series of pseudo-random numbers many times. This can be done with 'seeds'.
End of explanation
"""
for i in range(10):
if np.random.random()<0.2:
print("Heads")
else:
print("Tails")
"""
Explanation: You might want to do this for:
Debugging
Code repeatability (i.e. when you hand in code for marking!).
Coding For Probability
In some circumstances you will want to write code which simulates various events, each of which happens with a probability, $p$.
This can be coded with random numbers. You generate a random number between zero and 1, and allow the event to occur if that number is less than $p$.
For example, consider a biased coin, which returns a head 20% of the time:
End of explanation
"""
|
ampl/amplpy | notebooks/pattern_enumeration.ipynb | bsd-3-clause | !pip install -q amplpy ampltools amplpy matplotlib numpy
"""
Explanation: AMPLPY: Pattern Enumeration
Documentation: http://amplpy.readthedocs.io
GitHub Repository: https://github.com/ampl/amplpy
PyPI Repository: https://pypi.python.org/pypi/amplpy
Jupyter Notebooks: https://github.com/ampl/amplpy/tree/master/notebooks
Setup
End of explanation
"""
MODULES=['ampl', 'gurobi']
from ampltools import cloud_platform_name, ampl_notebook
from amplpy import AMPL, register_magics
if cloud_platform_name() is None:
ampl = AMPL() # Use local installation of AMPL
else:
ampl = ampl_notebook(modules=MODULES) # Install AMPL and use it
register_magics(ampl_object=ampl) # Evaluate %%ampl_eval cells with ampl.eval()
"""
Explanation: Google Colab & Kaggle interagration
End of explanation
"""
%%ampl_eval
param nPatterns integer > 0;
set PATTERNS = 1..nPatterns; # patterns
set WIDTHS; # finished widths
param order {WIDTHS} >= 0; # rolls of width j ordered
param overrun; # permitted overrun on any width
param rolls {WIDTHS,PATTERNS} >= 0 default 0; # rolls of width i in pattern j
var Cut {PATTERNS} integer >= 0; # raw rolls to cut in each pattern
minimize TotalRawRolls: sum {p in PATTERNS} Cut[p];
subject to FinishedRollLimits {w in WIDTHS}:
order[w] <= sum {p in PATTERNS} rolls[w,p] * Cut[p] <= order[w] + overrun;
"""
Explanation: Basic pattern-cutting model
End of explanation
"""
from math import floor
def patternEnum(roll_width, widths, prefix=[]):
max_rep = int(floor(roll_width/widths[0]))
if len(widths) == 1:
patmat = [prefix+[max_rep]]
else:
patmat = []
for n in reversed(range(max_rep+1)):
patmat += patternEnum(roll_width-n*widths[0], widths[1:], prefix+[n])
return patmat
"""
Explanation: Enumeration routine
End of explanation
"""
def cuttingPlot(roll_width, widths, solution):
import numpy as np
import matplotlib.pyplot as plt
ind = np.arange(len(solution))
acc = [0]*len(solution)
for p, (patt, rep) in enumerate(solution):
for i in range(len(widths)):
for j in range(patt[i]):
vec = [0]*len(solution)
vec[p] = widths[i]
plt.bar(ind, vec, width=0.35, bottom=acc)
acc[p] += widths[i]
plt.title('Solution')
plt.xticks(ind, tuple("x {:}".format(rep) for patt, rep in solution))
plt.yticks(np.arange(0, roll_width, 10))
plt.show()
"""
Explanation: Plotting routine
End of explanation
"""
roll_width = 64.5
overrun = 6
orders = {
6.77: 10,
7.56: 40,
17.46: 33,
18.76: 10
}
widths = list(sorted(orders.keys(), reverse=True))
patmat = patternEnum(roll_width, widths)
"""
Explanation: Set & generate data
End of explanation
"""
# Send scalar values
ampl.getParameter('overrun').set(overrun)
ampl.getParameter('nPatterns').set(len(patmat))
# Send order vector
ampl.getSet('WIDTHS').setValues(widths)
ampl.getParameter('order').setValues(orders)
# Send pattern matrix
ampl.getParameter('rolls').setValues({
(widths[i], 1+p): patmat[p][i]
for i in range(len(widths))
for p in range(len(patmat))
})
"""
Explanation: Send data to AMPL (Java/C++ style)
End of explanation
"""
# Send scalar values
ampl.param['overrun'] = overrun
ampl.param['nPatterns'] = len(patmat)
# Send order vector
ampl.set['WIDTHS'] = widths
ampl.param['order'] = orders
# Send pattern matrix
ampl.param['rolls'] = {
(widths[i], 1+p): patmat[p][i]
for i in range(len(widths))
for p in range(len(patmat))
}
"""
Explanation: Send data to AMPL (alternative style)
End of explanation
"""
# Solve
ampl.option['solver'] = 'gurobi'
ampl.solve()
# Retrieve solution
cutting_plan = ampl.var['Cut'].getValues()
cutvec = list(cutting_plan.getColumn('Cut.val'))
# Display solution
solution = [
(patmat[p], cutvec[p])
for p in range(len(patmat))
if cutvec[p] > 0
]
cuttingPlot(roll_width, widths, solution)
"""
Explanation: Solve and report
End of explanation
"""
|
rueedlinger/machine-learning-snippets | notebooks/unsupervised/dimensionality_reduction/eigen/dimensionality_reduction_eigen.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
from numpy import linalg as LA
from sklearn import datasets
iris = datasets.load_iris()
"""
Explanation: Dimensionality Reduction with Eigenvector / Eigenvalues and Correlation Matrix (PCA)
inspired by http://sebastianraschka.com/Articles/2015_pca_in_3_steps.html#eigendecomposition---computing-eigenvectors-and-eigenvalues
End of explanation
"""
df = pd.DataFrame(iris.data, columns=iris.feature_names)
corr = df.corr()
df.corr()
_ = sns.heatmap(corr)
eig_vals, eig_vecs = LA.eig(corr)
eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))]
eig_pairs.sort(key=lambda x: x[0], reverse=True)
"""
Explanation: First we need the correlation matrix
End of explanation
"""
pd.DataFrame([eig_vals])
"""
Explanation: Eigenvalues
End of explanation
"""
pd.DataFrame(eig_vecs)
"""
Explanation: Eigenvector as Principal component
End of explanation
"""
matrix_w = np.hstack((eig_pairs[0][1].reshape(len(corr),1),
eig_pairs[1][1].reshape(len(corr),1)))
pd.DataFrame(matrix_w, columns=['PC1', 'PC2'])
new_dim = np.dot(np.array(iris.data), matrix_w)
df = pd.DataFrame(new_dim, columns=['X', 'Y'])
df['label'] = iris.target
df.head()
fig = plt.figure()
fig.suptitle('PCA with Eigenvector', fontsize=14, fontweight='bold')
ax = fig.add_subplot(111)
plt.scatter(df[df.label == 0].X, df[df.label == 0].Y, color='red', label=iris.target_names[0])
plt.scatter(df[df.label == 1].X, df[df.label == 1].Y, color='blue', label=iris.target_names[1])
plt.scatter(df[df.label == 2].X, df[df.label == 2].Y, color='green', label=iris.target_names[2])
_ = plt.legend(bbox_to_anchor=(1.25, 1))
"""
Explanation: Create the projection matrix for a new two dimensional space
End of explanation
"""
|
m2dsupsdlclass/lectures-labs | labs/05_conv_nets_2/Fully_Convolutional_Neural_Networks_rendered.ipynb | mit | %matplotlib inline
import warnings
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1)
# Load a pre-trained ResNet50
# We use include_top = False for now,
# as we'll import output Dense Layer later
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import ResNet50
base_model = ResNet50(include_top=False)
print(base_model.output_shape)
#print(base_model.summary())
res5c = base_model.layers[-1]
type(res5c)
res5c.output_shape
"""
Explanation: Fully Convolutional Neural Networks
Objectives:
- Load a CNN model pre-trained on ImageNet
- Transform the network into a Fully Convolutional Network
- Apply the network to perform weak segmentation on images
End of explanation
"""
from tensorflow.keras import layers
# A custom layer in Keras must implement the four following methods:
class SoftmaxMap(layers.Layer):
# Init function
def __init__(self, axis=-1, **kwargs):
self.axis = axis
super(SoftmaxMap, self).__init__(**kwargs)
# There's no parameter, so we don't need this one
def build(self, input_shape):
pass
# This is the layer we're interested in:
# very similar to the regular softmax, but note additionally
# that we accept x.shape == (batch_size, w, h, n_classes),
# which is not the case in Keras by default.
# Note also that we subtract from the logits their maximum to
# make the softmax numerically stable.
def call(self, x, mask=None):
e = tf.exp(x - tf.math.reduce_max(x, axis=self.axis, keepdims=True))
s = tf.math.reduce_sum(e, axis=self.axis, keepdims=True)
return e / s
# The output shape is the same as the input shape
def get_output_shape_for(self, input_shape):
return input_shape
"""
Explanation: Fully convolutional ResNet
Out of the res5c residual block, the resnet outputs a tensor of shape $W \times H \times 2048$.
For the default ImageNet input, $224 \times 224$, the output size is $7 \times 7 \times 2048$
Regular ResNet layers
The regular ResNet head after the base model is as follows:
py
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1000)(x)
x = Softmax()(x)
Here is the full definition of the model: https://github.com/keras-team/keras-applications/blob/master/keras_applications/resnet50.py
Our Version
We want to retrieve the labels information, which is stored in the Dense layer. We will load these weights afterwards
We will change the Dense Layer to a Convolution2D layer to keep spatial information, to output a $W \times H \times 1000$.
We can use a kernel size of (1, 1) for that new Convolution2D layer to pass the spatial organization of the previous layer unchanged (it's called a pointwise convolution).
We want to apply a softmax only on the last dimension so as to preserve the $W \times H$ spatial information.
A custom Softmax
We build the following Custom Layer to apply a softmax only to the last dimension of a tensor:
End of explanation
"""
n_samples, w, h, n_classes = 10, 3, 4, 5
random_data = np.random.randn(n_samples, w, h, n_classes).astype("float32")
random_data.shape
"""
Explanation: Let's check that we can use this layer to normalize the class probabilities of some random spatial predictions:
End of explanation
"""
random_data[0].sum(axis=-1)
"""
Explanation: Because those predictions are random, if we sum across the classes dimension we get random values instead of class probabilities that would need to sum to 1:
End of explanation
"""
softmaxMap = SoftmaxMap()
softmax_mapped_data = softmaxMap(random_data).numpy()
softmax_mapped_data.shape
"""
Explanation: Let's create a SoftmaxMap function from the layer and process our test data:
End of explanation
"""
softmax_mapped_data[0]
"""
Explanation: All the values are now in the [0, 1] range:
End of explanation
"""
softmax_mapped_data[0].sum(axis=-1)
"""
Explanation: The last dimension now approximately sums to one, so it can be used as class probabilities (or as the parameters of a multinoulli distribution):
End of explanation
"""
random_data[0].argmax(axis=-1)
softmax_mapped_data[0].argmax(axis=-1)
"""
Explanation: Note that the highest activated channel for each spatial location is still the same before and after the softmax map. The ranking of the activations is preserved as softmax is a monotonic function (when considered element-wise):
End of explanation
"""
from tensorflow.keras.layers import Convolution2D
from tensorflow.keras.models import Model
input = base_model.layers[0].input
# TODO: compute per-area class probabilites
output = input
fully_conv_ResNet = Model(inputs=input, outputs=output)
# %load solutions/fully_conv.py
from tensorflow.keras.layers import Convolution2D
from tensorflow.keras.models import Model
input = base_model.layers[0].input
# Take the output of the last layer of the convnet
# layer:
x = base_model.layers[-1].output
# A 1x1 convolution, with 1000 output channels, one per class
x = Convolution2D(1000, (1, 1), name='conv1000')(x)
# Softmax on last axis of tensor to normalize the class
# predictions in each spatial area
output = SoftmaxMap(axis=-1)(x)
fully_conv_ResNet = Model(inputs=input, outputs=output)
# A 1x1 convolution applies a Dense to each spatial grid location
"""
Explanation: Exercise
What is the shape of the convolution kernel we want to apply to replace the Dense layer?
Build the fully convolutional model as described above. We want the output to preserve the spatial dimensions but output 1000 channels (one channel per class).
You may introspect the last elements of base_model.layers to find which layer to remove
You may use the Keras Convolution2D(output_channels, filter_w, filter_h) layer and our SoftmaxMap to normalize the result as per-class probabilities.
For now, ignore the weights of the new layer(s) (leave them initialized at random): just focus on making the right architecture with the right output shape.
End of explanation
"""
prediction_maps = fully_conv_ResNet(np.random.randn(1, 200, 300, 3)).numpy()
prediction_maps.shape
"""
Explanation: You can use the following random data to check that it's possible to run a forward pass on a random RGB image:
End of explanation
"""
prediction_maps.sum(axis=-1)
"""
Explanation: How do you explain the resulting output shape?
The class probabilities should sum to one in each area of the output map:
End of explanation
"""
import h5py
with h5py.File('weights_dense.h5', 'r') as h5f:
w = h5f['w'][:]
b = h5f['b'][:]
last_layer = fully_conv_ResNet.layers[-2]
print("Loaded weight shape:", w.shape)
print("Last conv layer weights shape:", last_layer.get_weights()[0].shape)
# reshape the weights
w_reshaped = w.reshape((1, 1, 2048, 1000))
# set the conv layer weights
last_layer.set_weights([w_reshaped, b])
"""
Explanation: Loading Dense weights
We provide the weights and bias of the last Dense layer of ResNet50 in file weights_dense.h5
Our last layer is now a 1x1 convolutional layer instead of a fully connected layer
End of explanation
"""
from tensorflow.keras.applications.imagenet_utils import preprocess_input
from skimage.io import imread
from skimage.transform import resize
def forward_pass_resize(img_path, img_size):
img_raw = imread(img_path)
print("Image shape before resizing: %s" % (img_raw.shape,))
img = resize(img_raw, img_size, mode='reflect', preserve_range=True)
img = preprocess_input(img[np.newaxis])
print("Image batch size shape before forward pass:", img.shape)
prediction_map = fully_conv_ResNet(img).numpy()
return prediction_map
output = forward_pass_resize("dog.jpg", (800, 600))
print("prediction map shape", output.shape)
"""
Explanation: A forward pass
We define the following function to test our new network.
It resizes the input to a given size, then uses model.predict to compute the output
End of explanation
"""
# Helper file for importing synsets from imagenet
import imagenet_tool
synset = "n02084071" # synset corresponding to dogs
ids = imagenet_tool.synset_to_dfs_ids(synset)
print("All dog classes ids (%d):" % len(ids))
print(ids)
for dog_id in ids[:10]:
print(imagenet_tool.id_to_words(dog_id))
print('...')
"""
Explanation: Finding dog-related classes
ImageNet uses an ontology of concepts, from which classes are derived. A synset corresponds to a node in the ontology.
For example all species of dogs are children of the synset n02084071 (Dog, domestic dog, Canis familiaris):
End of explanation
"""
def build_heatmap(prediction_map, synset):
class_ids = imagenet_tool.synset_to_dfs_ids(synset)
class_ids = np.array([id_ for id_ in class_ids if id_ is not None])
each_dog_proba_map = prediction_map[0, :, :, class_ids]
# this style of indexing a tensor by an other array has the following shape effect:
# (H, W, 1000) indexed by (118) ==> (118, H, W)
any_dog_proba_map = each_dog_proba_map.sum(axis=0)
print("size of heatmap: " + str(any_dog_proba_map.shape))
return any_dog_proba_map
def display_img_and_heatmap(img_path, heatmap):
dog = imread(img_path)
plt.figure(figsize=(12, 8))
plt.subplot(1, 2, 1)
plt.imshow(dog)
plt.axis('off')
plt.subplot(1, 2, 2)
plt.imshow(heatmap, interpolation='nearest', cmap="viridis")
plt.axis('off')
"""
Explanation: Unsupervised heatmap of the class "dog"
The following function builds a heatmap from a forward pass. It sums the representation for all ids corresponding to a synset
End of explanation
"""
# dog synset
s = "n02084071"
# TODO
# %load solutions/build_heatmaps.py
s = "n02084071"
probas_1 = forward_pass_resize("dog.jpg", (200, 320))
heatmap_1 = build_heatmap(probas_1, synset=s)
display_img_and_heatmap("dog.jpg", heatmap_1)
probas_2 = forward_pass_resize("dog.jpg", (400, 640))
heatmap_2 = build_heatmap(probas_2, synset=s)
display_img_and_heatmap("dog.jpg", heatmap_2)
probas_3 = forward_pass_resize("dog.jpg", (800, 1280))
heatmap_3 = build_heatmap(probas_3, synset=s)
display_img_and_heatmap("dog.jpg", heatmap_3)
# We observe that heatmap_1 and heatmap_2 gave coarser
# segmentations than heatmap_3. However, heatmap_3
# has small artifacts outside of the dog area
# heatmap_3 encodes more local, texture level information
# about the dog, while lower resolutions will encode more
# semantic information about the full object
# combining them is probably a good idea!
"""
Explanation: Exercise
- What is the size of the heatmap compared to the input image?
- Build 3 or 4 dog heatmaps from "dog.jpg", with the following sizes:
- (200, 320)
- (400, 640)
- (800, 1280)
- (1600, 2560) (optional, requires a lot of memory)
- What do you observe?
You may plot a heatmap using the above function display_img_and_heatmap. You might also want to reuse forward_pass_resize to compute the class maps themselves
End of explanation
"""
from skimage.transform import resize
# TODO
# %load solutions/geom_avg.py
from skimage.transform import resize
heatmap_1_r = resize(heatmap_1, (50,80), mode='reflect',
preserve_range=True, anti_aliasing=True)
heatmap_2_r = resize(heatmap_2, (50,80), mode='reflect',
preserve_range=True, anti_aliasing=True)
heatmap_3_r = resize(heatmap_3, (50,80), mode='reflect',
preserve_range=True, anti_aliasing=True)
heatmap_geom_avg = np.power(heatmap_1_r * heatmap_2_r * heatmap_3_r, 0.333)
display_img_and_heatmap("dog.jpg", heatmap_geom_avg)
"""
Explanation: Combining the 3 heatmaps
By combining the heatmaps at different scales, we obtain a much better information about the location of the dog.
Bonus
- Combine the three heatmaps by resizing them to a similar shape and averaging them
- A geometric mean will work better than a standard average!
End of explanation
"""
|
TimothyHelton/k2datascience | notebooks/Clustering_Exercises.ipynb | bsd-3-clause | from bokeh.plotting import figure, show
import bokeh.io as bkio
import pandas as pd
from k2datascience import cluster
from k2datascience import plotting
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
bkio.output_notebook()
%matplotlib inline
"""
Explanation: Unsupervised Learning - Clustering and PCA
Timothy Helton
<br>
<font color="red">
NOTE:
<br>
This notebook uses code found in the
<a href="https://github.com/TimothyHelton/k2datascience/blob/master/k2datascience/cluster.py">
<strong>k2datascience.cluster</strong></a> module.
To execute all the cells do one of the following items:
<ul>
<li>Install the k2datascience package to the active Python interpreter.</li>
<li>Add k2datascience/k2datascience to the PYTHON_PATH system variable.</li>
<li>Create a link to the cluster.py file in the same directory as this notebook.</li>
</font>
Imports
End of explanation
"""
arrests = cluster.Arrests()
arrests.data.info()
arrests.data.head()
arrests.data.describe()
plotting.correlation_heatmap_plot(arrests.data)
plotting.correlation_pair_plot(arrests.data)
"""
Explanation: Load Data
US Arrests Dataset
End of explanation
"""
genes = cluster.Genes()
genes.data.info()
genes.data.head()
genes.data.describe()
"""
Explanation: Genes Dataset
End of explanation
"""
arrests.n_components=4
arrests.calc_pca()
arrests.var_pct
plotting.pca_variance(arrests.var_pct)
"""
Explanation: Exercise 1
We mentioned the use of correlation-based distance and Euclidean distance as dissimilarity measures for hierarchical clustering. It turns out that these two measures are almost equivalent: if each observation has been centered to have mean zero and standard deviation one, and if we let $r_{ij}$ denote the correlation between the ith and jth observations, then the quantity $1−r_{ij}$ is proportional to the squared Euclidean distance between the ith and jth observations. On the USArrests data, show that this proportionality holds.
Correlation
$$ r_{xy} = \frac{\sum_i^n (x_i - \overline{x}) (y_i - \overline{y})}{n\sigma_x \sigma_y} $$
Euclidean Distance
$$ d(x,y) = \sqrt{ \sum_i^n (x_i - y_i)^2 } $$
$$ d^2(x,y) = \sum_i^n (x_i - y_i)^2 $$
$$ d^2(x,y) = \sum_i^n x_i^2 - 2\sum_i^n x_i y_i + \sum_i^n y_i^2 $$
When each observation has been standardized to mean zero and standard deviation one, $\sum_i^n x_i^2 = \sum_i^n y_i^2 = n$, so
$$ r_{xy} = \frac{1}{n}\sum_i^n x_i y_i $$
$$ d^2(x,y) = n - 2\sum_i^n x_i y_i + n $$
$$ d^2(x,y) = 2n - 2\sum_i^n x_i y_i $$
$$ d^2(x,y) = 2n\left(1 - \frac{1}{n}\sum_i^n x_i y_i\right) $$
$$ d^2(x,y) = 2n(1 - r_{xy}) $$
so the quantity $1 - r_{xy}$ is proportional to the squared Euclidean distance $d^2(x,y)$.
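A quick numerical check of this proportionality, using random row-standardized observations (an illustrative sketch, not the USArrests data itself):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))            # 50 observations, 4 features
# standardize each observation (row) to mean 0, std 1
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

n = X.shape[1]
# pairwise squared Euclidean distance and correlation between observations
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
r = (X @ X.T) / n

# d^2 = 2n(1 - r): the two dissimilarities are proportional
assert np.allclose(d2, 2 * n * (1 - r))
```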
Exercise 2
A formula for calculating PVE is given below. PVE can also be obtained by accessing the explained_variance_ratio_ attribute of the PCA function. On the USArrests data, calculate PVE in two ways:
By accessing the explained_variance_ratio_ attribute of the PCA function.
By applying the Equation directly. That is, use the PCA function to compute the principal component loadings. Then, use those loadings in the Equation to obtain the PVE.
1) By accessing the explained_variance_ratio_ attribute of the PCA function.
End of explanation
"""
arrests.calc_pca_eq()
"""
Explanation: 2) By applying the Equation directly. That is, use the PCA function to compute the principal component loadings. Then, use those loadings in the Equation to obtain the PVE.
End of explanation
"""
plotting.agglomerative_dendrogram_plot(
data=arrests.data,
labels=arrests.data.index,
title='US Arrests',
method='complete',
metric='euclidean',
)
"""
Explanation: Exercise 3
Consider the “USArrests” data. We will now perform hierarchical clustering on the states.
1. Using hierarchical clustering with complete linkage and Euclidean distance, cluster the states.
2. Cut the dendrogram at a height that results in three distinct clusters. Which states belong to which clusters?
3. Hierarchically cluster the states using complete linkage and Euclidean distance, after scaling the variables to have standard deviation one.
4. What effect does scaling the variables have on the hierarchical clustering obtained? In your opinion, should the variables be scaled before the inter-observation dissimilarities are computed? Provide a justification for your answer.
1) Using hierarchical clustering with complete linkage and Euclidean distance, cluster the states.
End of explanation
"""
arrests.hierarchical_cluster(n_clusters=3,
criterion='maxclust',
method='complete',
metric='euclidean')
arrests.us_map_clusters()
"""
Explanation: 2) Cut the dendrogram at a height that results in three distinct clusters. Which states belong to which clusters?
End of explanation
"""
plotting.agglomerative_dendrogram_plot(
data=arrests.std_x,
labels=arrests.data.index,
title='US Arrests',
method='complete',
metric='euclidean',
)
original_data = arrests.data
arrests.data = pd.DataFrame(arrests.std_x, index=arrests.data.index)
arrests.hierarchical_cluster(n_clusters=3,
criterion='maxclust',
method='complete',
metric='euclidean')
arrests.us_map_clusters()
arrests.data = original_data
"""
Explanation: 3) Hierarchically cluster the states using complete linkage and Euclidean distance, after scaling the variables to have standard deviation one.
End of explanation
"""
sim = cluster.Simulated()
sim.data.head()
sim.data.describe()
"""
Explanation: 4) What effect does scaling the variables have on the hierarchical clustering obtained? In your opinion, should the variables be scaled before the inter-observation dissimilarities are computed? Provide a justification for your answer.
FINDINGS
The data is able to be sectioned into more uniform clusters once standardized.
The data should be standardized to reduce the effect of a single response dominating the cluster only due to larger magnitude values.
Exercise 4
In this problem, you will generate simulated data, and then perform PCA and K-means clustering on the data.
1. Generate a simulated data set with 20 observations in each of three classes (i.e. 60 observations total), and 50 variables.
2. Perform PCA on the 60 observations and plot the first two principal component score vectors. Use a different color to indicate the observations in each of the three classes. If the three classes appear separated in this plot, then continue on to part (3). If not, then return to part (1) and modify the simulation so that there is greater separation between the three classes. Do not continue to part (3) until the three classes show at least some separation in the first two principal component score vectors.
3. Perform K-means clustering of the observations with K = 3. How well do the clusters that you obtained in K-means clustering compare to the true class labels?
4. Perform K-means clustering with K = 2. Describe your results.
5. Now perform K-means clustering with K = 4, and describe your results.
6. Now perform K-means clustering with K = 3 on the first two principal component score vectors, rather than on the raw data. That is, perform K-means clustering on the 60x2 matrix of which the first column is the first principal component score vector, and the second column is the second principal component score vector. Comment on the results.
7. Using the scale() function, perform K-means clustering with K = 3 on the data after scaling each variable to have standard deviation one. How do these results compare to those obtained in (2)? Explain.
1) Generate a simulated data set with 20 observations in each of three classes (i.e. 60 observations total), and 50 variables.
End of explanation
"""
sim.calc_pca()
plotting.pca_variance(sim.var_pct)
sim.plot_pca()
"""
Explanation: 2) Perform PCA on the 60 observations and plot the first two principal component score vectors. Use a different color to indicate the observations in each of the three classes. If the three classes appear separated in this plot, then continue on to part (3). If not, then return to part (1) and modify the simulation so that there is greater separation between the three classes. Do not continue to part (3) until the three classes show at least some separation in the first two principal component score vectors.
End of explanation
"""
sim.calc_kmeans(sim.data, n_clusters=3)
"""
Explanation: 3) Perform K-means clustering of the observations with K = 3. How well do the clusters that you obtained in K-means clustering compare to the true class labels?
End of explanation
"""
sim.calc_kmeans(sim.data, n_clusters=2)
"""
Explanation: 4) Perform K-means clustering with K = 2. Describe your results.
End of explanation
"""
sim.calc_kmeans(sim.data, n_clusters=4)
"""
Explanation: 5) Now perform K-means clustering with K = 4, and describe your results.
End of explanation
"""
sim.calc_kmeans(sim.trans[:, [0, 1]], n_clusters=3)
"""
Explanation: 6) Now perform K-means clustering with K = 3 on the first two principal component score vectors, rather than on the raw data. That is, perform K-means clustering on the 60x2 matrix of which the first column is the first principal component score vector, and the second column is the second principal component score vector. Comment on the results.
End of explanation
"""
sim.calc_kmeans(sim.std_x, n_clusters=3)
"""
Explanation: 7) Using the scale() function, perform K-means clustering with K = 3 on the data after scaling each variable to have standard deviation one. How do these results compare to those obtained in (2)? Explain.
End of explanation
"""
plotting.agglomerative_dendrogram_plot(
data=genes.data,
labels=list(range(1, 41)),
title='Genes (Complete)',
method='complete',
metric='correlation',
)
plotting.agglomerative_dendrogram_plot(
data=genes.data,
labels=list(range(1, 41)),
title='Genes (Average)',
method='average',
metric='correlation',
)
plotting.agglomerative_dendrogram_plot(
data=genes.data,
labels=list(range(1, 41)),
title='Genes (Single)',
method='single',
metric='correlation',
)
plotting.agglomerative_dendrogram_plot(
data=genes.data,
labels=list(range(1, 41)),
title='Genes',
method='ward',
metric='euclidean',
)
"""
Explanation: Exercise 5
We will use a gene expression data set that consists of 40 tissue samples with measurements on 1000 genes. The first 20 samples are from healthy patients, while the second 20 are from a diseased group.
1. Load the data.
2. Apply hierarchical clustering to the samples using correlation-based distance, and plot the dendrogram. Do the genes separate the samples into two groups? Do your results depend on the type of linkage used?
3. Your collaborator wants to know which genes differ the most across the two groups. Suggest a way to answer this question, and apply it here.
1) Load the data.
Completed Above
2) Apply hierarchical clustering to the samples using correlation-based distance, and plot the dendrogram. Do the genes separate the samples into two groups? Do your results depend on the type of linkage used?
End of explanation
"""
genes.std_x = genes.data
genes.n_components = None
genes.calc_pca()
genes.var_pct.head()
genes.unique_genes()
"""
Explanation: 3) Your collaborator wants to know which genes differ the most across the two groups. Suggest a way to answer this question, and apply it here.
End of explanation
"""
|
dietmarw/EK5312_ElectricalMachines | Chapman/Ch6-Problem_6-10.ipynb | unlicense | %pylab notebook
"""
Explanation: Exercises Electric Machinery Fundamentals
Chapter 6
Problem 6-10
End of explanation
"""
fe = 60 # [Hz]
p = 2
n_nl = 3580 # [r/min]
n_fl = 3440 # [r/min]
"""
Explanation: Description
A three-phase 60-Hz two-pole induction motor runs at a no-load speed of 3580 r/min and a full-load speed of 3440 r/min.
Calculate the slip and the electrical frequency of the rotor at no-load and full-load conditions.
What is the speed regulation of this motor?
End of explanation
"""
n_sync = 120*fe / p
print('n_sync = {:.0f} r/min'.format(n_sync))
"""
Explanation: SOLUTION
The synchronous speed of this machine is:
$$n_\text{sync} = \frac{120f_{se}}{p}$$
End of explanation
"""
s_nl = (n_sync - n_nl) / n_sync
print('''
s_nl = {:.2f} %
============='''.format(s_nl*100))
"""
Explanation: The slip and electrical frequency at no-load conditions is:
$$S_\text{nl} = \frac{n_\text{sync} - n_\text{nl}}{n_\text{sync}} \cdot 100\%$$
End of explanation
"""
f_rnl = s_nl * fe
print('''
f_rnl = {:.2f} Hz
==============='''.format(f_rnl))
"""
Explanation: $$f_\text{r,nl} = sf_e$$
End of explanation
"""
s_fl = (n_sync - n_fl) / n_sync
print('''
s_fl = {:.2f} %
============='''.format(s_fl*100))
"""
Explanation: The slip and electrical frequency at full load conditions is:
$$ S_\text{fl} = \frac{n_\text{sync} - n_\text{fl}}{n_\text{sync}} \cdot 100\%$$
End of explanation
"""
f_rfl = s_fl * fe
print('''
f_rfl = {:.2f} Hz
==============='''.format(f_rfl))
"""
Explanation: $$f_\text{r,fl} = sf_e$$
End of explanation
"""
SR = (n_nl - n_fl) / n_fl
print('''
SR = {:.2f} %
==========='''.format(SR*100))
"""
Explanation: The speed regulation is:
$$SR = \frac{n_\text{nl} - n_\text{fl}}{n_\text{fl}} \cdot 100\%$$
End of explanation
"""
|
4dsolutions/Python5 | Applied Voxelization Computations.ipynb | mit | def square_nums(n):
return n**2
def partial_sums(num):
squares = []
partials = [ ]
for i in range(1, num):
squares.append(square_nums(i))
partials.append(sum(squares)) # partial sums of 2nd powers
return partials
print(partial_sums(21), end=" ")
"""
Explanation: <a data-flickr-embed="true" href="https://www.flickr.com/photos/155335734@N04/41452472841/in/photolist-26a1KZB-a6MhGf-rqgvmv-24yjAHW-qPMmxy-zKaHfp-iTmpzN-A4fTFc-zHqd2M-A3sn5e-zZKYHU-a2F4GW-zKqZB3-zJwWNu-A1Y2n7-rqfkbT-A3iDDm-zLP9fW-zYBcGJ-z6bqS4-zHv1KD-A37B5i-zHDBrf-zKwfgx-z8ptBp-A42DEt-A58hy6-z7qK9m-z7thUv-zKu3tu-zKsiC1-z7cPqH-zLPPpA-zZM9Dd-zLj6Y7-z62CN7-zMiD5n-A44fZz-z6byjZ-zMSmtm-zPAve7-zKGTgK-z6daHH-zKwD7S-zKu5d1-rDr27o-A4QEfx-A3S8Gw-A1Nv81-rpuQH5" title="Michelangelo's David"><img src="https://farm1.staticflickr.com/896/41452472841_603d7dbcf8.jpg" width="500" height="375" alt="Michelangelo's David"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
Voxelization of Michelangelo's David
Suppose you wanted to voxelize Michelangelo's David. Our voxel: the space-filling rhombic dodecahedron (RD) encasing each uniform-sized ball in a voxel matrix.
In reality the ball light sources may consist mostly of unoccupied space or gas, consistent with the idea of an octet-truss wherein slender rods link to a minimally sized hub, as in Zometool or vZome (a virtualization).
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/42241528181/in/photolist-22Tq2ju-5y8idx-27mJSE2-KdPHDE-279TEzK-279TEeK-25MGEDT-25MGEoH-CwvwpP-g2w6Kf-9WvZKt-9dSGxn-8dUDpo-7tqppn-7tq2Rp-7qH7kr-669WKs-65aCay-5Dvfav" title="Space-filling with Rhombic Dodecahedrons"><img src="https://farm1.staticflickr.com/967/42241528181_a7333ae2ed.jpg" width="500" height="312" alt="Space-filling with Rhombic Dodecahedrons"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
Regardless of the actual size of the lightbulbs, the containing voxels, the polyhedral voronoi cells will be flat against one another, face to face, with no space left over. That's what it means to say polyhedrons are space-filling: they jam together without gaps. Like bricks.
Kepler was especially fond of the RD for this very reason, that it's a space-filler. In our "Sesame Street" Concentric Hierarchy, it has a canonical volume of six, twice that of the cube made from the twelve inscribed short diagonals.
The twelve diamonds of the RD, being rhombi, and not squares, have long and short diagonals criss-crossing at their face centers. These diagonals terminate in what we might call the "sharp" and "shallow" vertexes respectively. A diamond shape has shallow angles that are wider, in terms of degrees, than its narrower or sharper angles, of fewer degrees.
At the sharp vertexes, four faces come together, whereas at the shallow vertexes, only three.
In our reference design, the twelve long diagonals define an octahedron of relative volume 4:6 or 2/3rds. The octahedron consisting of RD long diagonals has a volume 2/3rds that of the original RD, while the cube made of short diagonals has a volume of precisely 1/2 that of the original RD.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/4248447415/in/album-72157623211669418/" title="2F Tetrahedron"><img src="https://farm5.staticflickr.com/4043/4248447415_be03908737.jpg" width="500" height="457" alt="2F Tetrahedron"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
The octet-truss locates hubs at the RD centers and consists of rods radiating outward to twelve neighors, unless at a boundary.
Flextegrity manages the matrix differently, by affixing the connecting rods to the faces or edges of larger hubs, leaving centers free for other apparatus (such as lightbulbs and circuitry). The Flextegrity hub is often an icosahedron.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/4310129635/in/album-72157623211669418/" title="Version 0"><img src="https://farm5.staticflickr.com/4013/4310129635_4745a318ee.jpg" width="500" height="375" alt="Version 0"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
Our method will be to assess the volume displacement of a miniature 3D printed David and then scale up as a 3rd power of the linear ratio. In other words, height_original / height_model to the 3rd power, times volume_model, will give us volume_original:
$V_o = (V_m)(H_o/H_m)^3$
The next computation is to take $V_o$ and divide it into N voxels of some radius, where radius is from the RD center to any of its face centers.
Again, R is not with respect to anything physical other than that it's half the distance between two RD centers.
These RD shaped voronoi cells are imaginary compartments, at the center of which we may place any kind of hub. Our goal is to get an approximate count of the number of hubs. We assume one hub per RD without needing to think about density e.g. how much of the RD is actually filled with physical material.
Let's assume the diameter of a baseball as our RD center inter-distance. Imagine filling space with voxels each encasing a baseball, each of which is tangent to twelve around it at the twelve "kissing point" centers of the rhombic (diamond) faces.
2,870 baseballs
OEIS A000330
End of explanation
"""
scale_factor = (517/10)**3
print("Factor to multiply by model David's volume in cm3: {:10.3f}".format(scale_factor))
"""
Explanation: What is the diameter of a baseball in centimeters? Lets go with 7.4 cm based on this source.
What is the height of the original David in Florence, Italy? 517 cm.
What is the height of the 3D printed model of David? 10 cm.
What is the 3rd power scale factor we need to apply to $V_m$?
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo("0v86Yk14rf8")
"""
Explanation: What is the volume of the model David? TBD
What is the volume of the original David? 2.16 m3 i.e. 2,160,000 cm3.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/29437945807/in/dateposted-public/" title="david_3d_printed"><img src="https://farm2.staticflickr.com/1860/29437945807_e919271d75.jpg" width="500" height="271" alt="david_3d_printed"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
Michelangelo's David for 3D Printer
Our technique will be to start with the number of cubic centimeters in the original David, a figure we might obtain directly by searching, thereby short-circuiting the 3D printing and displacement step.
However, for generic objects, a 3D-printed approximation with water displacement, followed by a scale factor, may be the most practical way to get a measure.
End of explanation
"""
S3 = (9/8)**(1/2)
RD_cm3 = (1/S3) * 6 * (7.4/2) ** 3 # 2 cm unit i.e. 2R
print("Voronoi Cell volume in cubic centimeters (cm3): ", RD_cm3)
from math import ceil
conversion = 100**3 # cubic centimeters
david_cm3 = 2.16 * conversion
N = ceil(david_cm3 / RD_cm3) # round up
print("Number of Voxels per Original David = ~{:,}".format(N))
"""
Explanation: If we take a tetrahedron of edge D = two centimeters and call that our unit, six of which then form a rhombic dodecahedron, and then apply 1/S3, a conversion constant, we'll have a corresponding cubic volume measure.
Dividing our total volume by the voxel volume gives the number of voronoi cells, or voxels. That's assuming a two-centimeter-long face diagonal.
However, that was not our initial assumption, as a baseball has a diameter of 7.4 cm. Let's set R = 1 cm (a reference sphere radius) and D = 2 cm (the same sphere's diameter), but express our larger baseball diameter in terms of D (2 cm intervals).
The tetrahedron of edges 7.4/2 = 3.7 D will be 1/6th of the RD in question. That's thinking in tetravolumes, units smaller than R-edged cubes.
The R-cube to D-tetrahedron ratio is what we call S3 or $\sqrt{9/8}$. We switch back and forth between these two units of volume by means of this constant. This practice derives from Buckminster Fuller's Synergetics: Explorations in the Geometry of Thinking.
Tetravolumes times 1/S3 then gives the equivalent volume in plain old cubic centimeters (cm3).
A tetrahedron of edges 7.4 cm has a tetravolume of $(7.4/2)^3$ meaning the RD-shaped voronoi cell surrounding the baseball would have a conventional cubic centimeter volume of...
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/mohc/cmip6/models/sandbox-2/landice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-2', 'landice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: MOHC
Source ID: SANDBOX-2
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency with the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Ice --> Dynamics
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation
"""
|
kubeflow/examples | digit-recognition-kaggle-competition/digit-recognizer-kfp.ipynb | apache-2.0 | !pip install --user --upgrade pip
!pip install kfp --upgrade --user --quiet
# confirm the kfp sdk
! pip show kfp
"""
Explanation: Digit Recognizer Kubeflow Pipeline
In this Kaggle competition
MNIST ("Modified National Institute of Standards and Technology") is the de facto “hello world” dataset of computer vision. Since its release in 1999, this classic dataset of handwritten images has served as the basis for benchmarking classification algorithms. As new machine learning techniques emerge, MNIST remains a reliable resource for researchers and learners alike.
In this competition, your goal is to correctly identify digits from a dataset of tens of thousands of handwritten images.
Install relevant libraries
Update pip: `pip install --user --upgrade pip`
Install and upgrade the Kubeflow Pipelines SDK: `pip install kfp --upgrade --user --quiet`
You may need to restart your notebook kernel after installing the kfp sdk
End of explanation
"""
import kfp
import kfp.components as comp
import kfp.dsl as dsl
from kfp.components import InputPath, OutputPath
from typing import NamedTuple
"""
Explanation: Import kubeflow pipeline libraries
End of explanation
"""
# download data step
def download_data(download_link: str, data_path: OutputPath(str)):
import zipfile
import sys, subprocess;
subprocess.run(["python", "-m", "pip", "install", "--upgrade", "pip"])
subprocess.run([sys.executable, "-m", "pip", "install", "wget"])
import wget
import os
if not os.path.exists(data_path):
os.makedirs(data_path)
# download files
wget.download(download_link.format(file='train'), f'{data_path}/train_csv.zip')
wget.download(download_link.format(file='test'), f'{data_path}/test_csv.zip')
with zipfile.ZipFile(f"{data_path}/train_csv.zip","r") as zip_ref:
zip_ref.extractall(data_path)
with zipfile.ZipFile(f"{data_path}/test_csv.zip","r") as zip_ref:
zip_ref.extractall(data_path)
return(print('Done!'))
"""
Explanation: Kubeflow pipeline component creation
Component 1: Download the digits Dataset
End of explanation
"""
# load data
def load_data(data_path: InputPath(str),
load_data_path: OutputPath(str)):
# import Library
import sys, subprocess;
subprocess.run(["python", "-m", "pip", "install", "--upgrade", "pip"])
subprocess.run([sys.executable, '-m', 'pip', 'install','pandas'])
# import Library
import os, pickle;
import pandas as pd
import numpy as np
#importing the data
# Data Path
train_data_path = data_path + '/train.csv'
test_data_path = data_path + '/test.csv'
# Loading dataset into pandas
train_df = pd.read_csv(train_data_path)
test_df = pd.read_csv(test_data_path)
# join train and test together
ntrain = train_df.shape[0]
ntest = test_df.shape[0]
all_data = pd.concat((train_df, test_df)).reset_index(drop=True)
print("all_data size is : {}".format(all_data.shape))
#creating the preprocess directory
os.makedirs(load_data_path, exist_ok = True)
#Save the combined_data as a pickle file to be used by the preprocess component.
with open(f'{load_data_path}/all_data', 'wb') as f:
pickle.dump((ntrain, all_data), f)
return(print('Done!'))
"""
Explanation: Component 2: load the digits Dataset
End of explanation
"""
# preprocess data
def preprocess_data(load_data_path: InputPath(str),
preprocess_data_path: OutputPath(str)):
# import Library
import sys, subprocess;
subprocess.run(["python", "-m", "pip", "install", "--upgrade", "pip"])
subprocess.run([sys.executable, '-m', 'pip', 'install','pandas'])
subprocess.run([sys.executable, '-m', 'pip', 'install','scikit-learn'])
import os, pickle;
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
#loading the train data
with open(f'{load_data_path}/all_data', 'rb') as f:
ntrain, all_data = pickle.load(f)
# split features and label
all_data_X = all_data.drop('label', axis=1)
all_data_y = all_data.label
# Reshape image in 3 dimensions (height = 28px, width = 28px , channel = 1)
all_data_X = all_data_X.values.reshape(-1,28,28,1)
# Normalize the data
all_data_X = all_data_X / 255.0
#Get the new dataset
X = all_data_X[:ntrain].copy()
y = all_data_y[:ntrain].copy()
# split into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)
#creating the preprocess directory
os.makedirs(preprocess_data_path, exist_ok = True)
#Save the train_data as a pickle file to be used by the modelling component.
with open(f'{preprocess_data_path}/train', 'wb') as f:
pickle.dump((X_train, y_train), f)
#Save the test_data as a pickle file to be used by the predict component.
with open(f'{preprocess_data_path}/test', 'wb') as f:
pickle.dump((X_test, y_test), f)
return(print('Done!'))
"""
Explanation: Component 3: Preprocess the digits Dataset
End of explanation
"""
def modeling(preprocess_data_path: InputPath(str),
model_path: OutputPath(str)):
# import Library
import sys, subprocess;
subprocess.run(["python", "-m", "pip", "install", "--upgrade", "pip"])
subprocess.run([sys.executable, '-m', 'pip', 'install','pandas'])
subprocess.run([sys.executable, '-m', 'pip', 'install','tensorflow'])
import os, pickle;
import numpy as np
import tensorflow as tf
from tensorflow import keras, optimizers
from tensorflow.keras.metrics import SparseCategoricalAccuracy
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras import layers
#loading the train data
with open(f'{preprocess_data_path}/train', 'rb') as f:
train_data = pickle.load(f)
# Separate the X_train from y_train.
X_train, y_train = train_data
#initializing the classifier model with its input, hidden and output layers
hidden_dim1=56
hidden_dim2=100
DROPOUT=0.5
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(filters = hidden_dim1, kernel_size = (5,5),padding = 'Same',
activation ='relu'),
tf.keras.layers.Dropout(DROPOUT),
tf.keras.layers.Conv2D(filters = hidden_dim2, kernel_size = (3,3),padding = 'Same',
activation ='relu'),
tf.keras.layers.Dropout(DROPOUT),
tf.keras.layers.Conv2D(filters = hidden_dim2, kernel_size = (3,3),padding = 'Same',
activation ='relu'),
tf.keras.layers.Dropout(DROPOUT),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation = "softmax")
])
model.build(input_shape=(None,28,28,1))
#Compiling the classifier model with Adam optimizer
model.compile(optimizers.Adam(learning_rate=0.001),
loss=SparseCategoricalCrossentropy(),
metrics=SparseCategoricalAccuracy(name='accuracy'))
# model fitting
history = model.fit(np.array(X_train), np.array(y_train),
validation_split=.1, epochs=1, batch_size=64)
#loading the X_test and y_test
with open(f'{preprocess_data_path}/test', 'rb') as f:
test_data = pickle.load(f)
# Separate the X_test from y_test.
X_test, y_test = test_data
# Evaluate the model and print the results
test_loss, test_acc = model.evaluate(np.array(X_test), np.array(y_test), verbose=0)
print("Test_loss: {}, Test_accuracy: {} ".format(test_loss,test_acc))
    #creating the model directory
os.makedirs(model_path, exist_ok = True)
#saving the model
model.save(f'{model_path}/model.h5')
"""
Explanation: Component 4: ML modeling
End of explanation
"""
def prediction(model_path: InputPath(str),
preprocess_data_path: InputPath(str),
mlpipeline_ui_metadata_path: OutputPath(str)) -> NamedTuple('conf_m_result', [('mlpipeline_ui_metadata', 'UI_metadata')]):
# import Library
import sys, subprocess;
subprocess.run(["python", "-m", "pip", "install", "--upgrade", "pip"])
subprocess.run([sys.executable, '-m', 'pip', 'install','scikit-learn'])
subprocess.run([sys.executable, '-m', 'pip', 'install','pandas'])
subprocess.run([sys.executable, '-m', 'pip', 'install','tensorflow'])
import pickle, json;
import pandas as pd
import numpy as np
from collections import namedtuple
from sklearn.metrics import confusion_matrix
from tensorflow.keras.models import load_model
#loading the X_test and y_test
with open(f'{preprocess_data_path}/test', 'rb') as f:
test_data = pickle.load(f)
# Separate the X_test from y_test.
X_test, y_test = test_data
#loading the model
model = load_model(f'{model_path}/model.h5')
# prediction
y_pred = np.argmax(model.predict(X_test), axis=-1)
# confusion matrix
cm = confusion_matrix(y_test, y_pred)
vocab = list(np.unique(y_test))
# confusion_matrix pair dataset
data = []
for target_index, target_row in enumerate(cm):
for predicted_index, count in enumerate(target_row):
data.append((vocab[target_index], vocab[predicted_index], count))
# convert confusion_matrix pair dataset to dataframe
df = pd.DataFrame(data,columns=['target','predicted','count'])
# change 'target', 'predicted' to integer strings
df[['target', 'predicted']] = (df[['target', 'predicted']].astype(int)).astype(str)
# create kubeflow metric metadata for UI
metadata = {
"outputs": [
{
"type": "confusion_matrix",
"format": "csv",
"schema": [
{
"name": "target",
"type": "CATEGORY"
},
{
"name": "predicted",
"type": "CATEGORY"
},
{
"name": "count",
"type": "NUMBER"
}
],
"source": df.to_csv(header=False, index=False),
"storage": "inline",
"labels": [
"0",
"1",
"2",
"3",
"4",
"5",
"6",
"7",
"8",
"9",
]
}
]
}
with open(mlpipeline_ui_metadata_path, 'w') as metadata_file:
json.dump(metadata, metadata_file)
conf_m_result = namedtuple('conf_m_result', ['mlpipeline_ui_metadata'])
return conf_m_result(json.dumps(metadata))
# create light weight components
download_op = comp.create_component_from_func(download_data,base_image="python:3.7.1")
load_op = comp.create_component_from_func(load_data,base_image="python:3.7.1")
preprocess_op = comp.create_component_from_func(preprocess_data,base_image="python:3.7.1")
modeling_op = comp.create_component_from_func(modeling, base_image="tensorflow/tensorflow:latest")
predict_op = comp.create_component_from_func(prediction, base_image="tensorflow/tensorflow:latest")
"""
Explanation: Component 5: Prediction and evaluation
End of explanation
"""
# create client that would enable communication with the Pipelines API server
client = kfp.Client()
# define pipeline
@dsl.pipeline(name="digit-recognizer-pipeline",
description="Performs Preprocessing, training and prediction of digits")
# Define parameters to be fed into pipeline
def digit_recognize_pipeline(download_link: str,
data_path: str,
load_data_path: str,
preprocess_data_path: str,
model_path:str
):
# Create download container.
download_container = download_op(download_link)
# Create load container.
load_container = load_op(download_container.output)
# Create preprocess container.
preprocess_container = preprocess_op(load_container.output)
# Create modeling container.
modeling_container = modeling_op(preprocess_container.output)
# Create prediction container.
predict_container = predict_op(modeling_container.output, preprocess_container.output)
download_link = 'https://github.com/josepholaide/examples/blob/master/digit-recognition-kaggle-competition/data/{file}.csv.zip?raw=true'
data_path = "/mnt"
load_data_path = "load"
preprocess_data_path = "preprocess"
model_path = "model"
pipeline_func = digit_recognize_pipeline
experiment_name = 'digit_recognizer_lightweight'
run_name = pipeline_func.__name__ + ' run'
arguments = {"download_link": download_link,
"data_path": data_path,
"load_data_path": load_data_path,
"preprocess_data_path": preprocess_data_path,
"model_path":model_path}
# Compile pipeline to generate compressed YAML definition of the pipeline.
kfp.compiler.Compiler().compile(pipeline_func,
'{}.zip'.format(experiment_name))
# Submit pipeline directly from pipeline function
run_result = client.create_run_from_pipeline_func(pipeline_func,
experiment_name=experiment_name,
run_name=run_name,
arguments=arguments
)
"""
Explanation: Create Kubeflow pipeline components from the Python functions (each with a specified base image)
Kubeflow pipeline creation
End of explanation
"""
|
tensorflow/quantum | docs/tutorials/quantum_reinforcement_learning.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
"""
!pip install tensorflow==2.7.0
"""
Explanation: Parametrized Quantum Circuits for Reinforcement Learning
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/quantum/tutorials/quantum_reinforcement_learning"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/quantum_reinforcement_learning.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/quantum/blob/master/docs/tutorials/quantum_reinforcement_learning.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/quantum_reinforcement_learning.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Quantum computers have been shown to provide computational advantages in certain problem areas. The field of quantum reinforcement learning (QRL) aims to harness this boost by designing RL agents that rely on quantum models of computation.
In this tutorial, you will implement two reinforcement learning algorithms based on parametrized/variational quantum circuits (PQCs or VQCs), namely a policy-gradient and a deep Q-learning implementation. These algorithms were introduced by [1] Jerbi et al. and [2] Skolik et al., respectively.
You will implement a PQC with data re-uploading in TFQ, and use it as:
1. an RL policy trained with a policy-gradient method,
2. a Q-function approximator trained with deep Q-learning,
each solving CartPole-v1, a benchmarking task from OpenAI Gym. Note that, as showcased in [1] and [2], these agents can also be used to solve other task-environments from OpenAI Gym, such as FrozenLake-v0, MountainCar-v0 or Acrobot-v1.
Features of this implementation:
- you will learn how to use a tfq.layers.ControlledPQC to implement a PQC with data re-uploading, appearing in many applications of QML. This implementation also naturally allows using trainable scaling parameters at the input of the PQC, to increase its expressivity,
- you will learn how to implement observables with trainable weights at the output of a PQC, to allow a flexible range of output values,
- you will learn how a tf.keras.Model can be trained with non-trivial ML loss functions, i.e., that are not compatible with model.compile and model.fit, using a tf.GradientTape.
Setup
Install TensorFlow:
End of explanation
"""
!pip install tensorflow-quantum
"""
Explanation: Install TensorFlow Quantum:
End of explanation
"""
!pip install gym==0.18.0
"""
Explanation: Install Gym:
End of explanation
"""
# Update package resources to account for version changes.
import importlib, pkg_resources
importlib.reload(pkg_resources)
import tensorflow as tf
import tensorflow_quantum as tfq
import gym, cirq, sympy
import numpy as np
from functools import reduce
from collections import deque, defaultdict
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
tf.get_logger().setLevel('ERROR')
"""
Explanation: Now import TensorFlow and the module dependencies:
End of explanation
"""
def one_qubit_rotation(qubit, symbols):
"""
    Returns Cirq gates that apply a rotation of the Bloch sphere about the X,
    Y and Z axes, specified by the values in `symbols`.
"""
return [cirq.rx(symbols[0])(qubit),
cirq.ry(symbols[1])(qubit),
cirq.rz(symbols[2])(qubit)]
def entangling_layer(qubits):
"""
Returns a layer of CZ entangling gates on `qubits` (arranged in a circular topology).
"""
cz_ops = [cirq.CZ(q0, q1) for q0, q1 in zip(qubits, qubits[1:])]
cz_ops += ([cirq.CZ(qubits[0], qubits[-1])] if len(qubits) != 2 else [])
return cz_ops
"""
Explanation: 1. Build a PQC with data re-uploading
At the core of both RL algorithms you are implementing is a PQC that takes as input the agent's state $s$ in the environment (i.e., a numpy array) and outputs a vector of expectation values. These expectation values are then post-processed, either to produce an agent's policy $\pi(a|s)$ or approximate Q-values $Q(s,a)$. In this way, the PQCs are playing a role analogous to that of deep neural networks in modern deep RL algorithms.
A popular way to encode an input vector in a PQC is through the use of single-qubit rotations, where rotation angles are controlled by the components of this input vector. In order to get a highly-expressive model, these single-qubit encodings are not performed only once in the PQC, but in several "re-uploadings", interlayed with variational gates. The layout of such a PQC is depicted below:
<img src="./images/pqc_re-uploading.png" width="700">
As discussed in [1] and [2], a way to further enhance the expressivity and trainability of data re-uploading PQCs is to use trainable input-scaling parameters $\boldsymbol{\lambda}$ for each encoding gate of the PQC, and trainable observable weights $\boldsymbol{w}$ at its output.
1.1 Cirq circuit for ControlledPQC
The first step is to implement in Cirq the quantum circuit to be used as the PQC. For this, start by defining basic unitaries to be applied in the circuits, namely an arbitrary single-qubit rotation and an entangling layer of CZ gates:
End of explanation
"""
def generate_circuit(qubits, n_layers):
"""Prepares a data re-uploading circuit on `qubits` with `n_layers` layers."""
# Number of qubits
n_qubits = len(qubits)
# Sympy symbols for variational angles
params = sympy.symbols(f'theta(0:{3*(n_layers+1)*n_qubits})')
params = np.asarray(params).reshape((n_layers + 1, n_qubits, 3))
# Sympy symbols for encoding angles
inputs = sympy.symbols(f'x(0:{n_layers})'+f'_(0:{n_qubits})')
inputs = np.asarray(inputs).reshape((n_layers, n_qubits))
# Define circuit
circuit = cirq.Circuit()
for l in range(n_layers):
# Variational layer
circuit += cirq.Circuit(one_qubit_rotation(q, params[l, i]) for i, q in enumerate(qubits))
circuit += entangling_layer(qubits)
# Encoding layer
circuit += cirq.Circuit(cirq.rx(inputs[l, i])(q) for i, q in enumerate(qubits))
# Last variational layer
circuit += cirq.Circuit(one_qubit_rotation(q, params[n_layers, i]) for i,q in enumerate(qubits))
return circuit, list(params.flat), list(inputs.flat)
"""
Explanation: Now, use these functions to generate the Cirq circuit:
End of explanation
"""
n_qubits, n_layers = 3, 1
qubits = cirq.GridQubit.rect(1, n_qubits)
circuit, _, _ = generate_circuit(qubits, n_layers)
SVGCircuit(circuit)
"""
Explanation: Check that this produces a circuit that is alternating between variational and encoding layers.
End of explanation
"""
class ReUploadingPQC(tf.keras.layers.Layer):
"""
Performs the transformation (s_1, ..., s_d) -> (theta_1, ..., theta_N, lmbd[1][1]s_1, ..., lmbd[1][M]s_1,
......., lmbd[d][1]s_d, ..., lmbd[d][M]s_d) for d=input_dim, N=theta_dim and M=n_layers.
An activation function from tf.keras.activations, specified by `activation` ('linear' by default) is
then applied to all lmbd[i][j]s_i.
All angles are finally permuted to follow the alphabetical order of their symbol names, as processed
by the ControlledPQC.
"""
def __init__(self, qubits, n_layers, observables, activation="linear", name="re-uploading_PQC"):
super(ReUploadingPQC, self).__init__(name=name)
self.n_layers = n_layers
self.n_qubits = len(qubits)
circuit, theta_symbols, input_symbols = generate_circuit(qubits, n_layers)
theta_init = tf.random_uniform_initializer(minval=0.0, maxval=np.pi)
self.theta = tf.Variable(
initial_value=theta_init(shape=(1, len(theta_symbols)), dtype="float32"),
trainable=True, name="thetas"
)
lmbd_init = tf.ones(shape=(self.n_qubits * self.n_layers,))
self.lmbd = tf.Variable(
initial_value=lmbd_init, dtype="float32", trainable=True, name="lambdas"
)
# Define explicit symbol order.
symbols = [str(symb) for symb in theta_symbols + input_symbols]
self.indices = tf.constant([symbols.index(a) for a in sorted(symbols)])
self.activation = activation
self.empty_circuit = tfq.convert_to_tensor([cirq.Circuit()])
self.computation_layer = tfq.layers.ControlledPQC(circuit, observables)
def call(self, inputs):
# inputs[0] = encoding data for the state.
batch_dim = tf.gather(tf.shape(inputs[0]), 0)
tiled_up_circuits = tf.repeat(self.empty_circuit, repeats=batch_dim)
tiled_up_thetas = tf.tile(self.theta, multiples=[batch_dim, 1])
tiled_up_inputs = tf.tile(inputs[0], multiples=[1, self.n_layers])
scaled_inputs = tf.einsum("i,ji->ji", self.lmbd, tiled_up_inputs)
squashed_inputs = tf.keras.layers.Activation(self.activation)(scaled_inputs)
joined_vars = tf.concat([tiled_up_thetas, squashed_inputs], axis=1)
joined_vars = tf.gather(joined_vars, self.indices, axis=1)
return self.computation_layer([tiled_up_circuits, joined_vars])
"""
Explanation: 1.2 ReUploadingPQC layer using ControlledPQC
To construct the re-uploading PQC from the figure above, you can create a custom Keras layer. This layer will manage the trainable parameters (variational angles $\boldsymbol{\theta}$ and input-scaling parameters $\boldsymbol{\lambda}$) and resolve the input values (input state $s$) into the appropriate symbols in the circuit.
End of explanation
"""
class Alternating(tf.keras.layers.Layer):
def __init__(self, output_dim):
super(Alternating, self).__init__()
self.w = tf.Variable(
initial_value=tf.constant([[(-1.)**i for i in range(output_dim)]]), dtype="float32",
trainable=True, name="obs-weights")
def call(self, inputs):
return tf.matmul(inputs, self.w)
"""
Explanation: 2. Policy-gradient RL with PQC policies
In this section, you will implement the policy-gradient algorithm presented in <a href="https://arxiv.org/abs/2103.05577" class="external">[1]</a>. For this, you will start by constructing, out of the PQC that was just defined, the softmax-VQC policy (where VQC stands for variational quantum circuit):
$$ \pi_\theta(a|s) = \frac{e^{\beta \langle O_a \rangle_{s,\theta}}}{\sum_{a'} e^{\beta \langle O_{a'} \rangle_{s,\theta}}} $$
where $\langle O_a \rangle_{s,\theta}$ are expectation values of observables $O_a$ (one per action) measured at the output of the PQC, and $\beta$ is a tunable inverse-temperature parameter.
You can adopt the same observables used in <a href="https://arxiv.org/abs/2103.05577" class="external">[1]</a> for CartPole, namely a global $Z_0Z_1Z_2Z_3$ Pauli product acting on all qubits, weighted by an action-specific weight for each action. To implement the weighting of the Pauli product, you can use an extra tf.keras.layers.Layer that stores the action-specific weights and applies them multiplicatively on the expectation value $\langle Z_0Z_1Z_2Z_3 \rangle_{s,\theta}$.
End of explanation
"""
n_qubits = 4 # Dimension of the state vectors in CartPole
n_layers = 5 # Number of layers in the PQC
n_actions = 2 # Number of actions in CartPole
qubits = cirq.GridQubit.rect(1, n_qubits)
"""
Explanation: Prepare the definition of your PQC:
End of explanation
"""
ops = [cirq.Z(q) for q in qubits]
observables = [reduce((lambda x, y: x * y), ops)] # Z_0*Z_1*Z_2*Z_3
"""
Explanation: and its observables:
End of explanation
"""
def generate_model_policy(qubits, n_layers, n_actions, beta, observables):
"""Generates a Keras model for a data re-uploading PQC policy."""
input_tensor = tf.keras.Input(shape=(len(qubits), ), dtype=tf.dtypes.float32, name='input')
re_uploading_pqc = ReUploadingPQC(qubits, n_layers, observables)([input_tensor])
process = tf.keras.Sequential([
Alternating(n_actions),
tf.keras.layers.Lambda(lambda x: x * beta),
tf.keras.layers.Softmax()
], name="observables-policy")
policy = process(re_uploading_pqc)
model = tf.keras.Model(inputs=[input_tensor], outputs=policy)
return model
model = generate_model_policy(qubits, n_layers, n_actions, 1.0, observables)
tf.keras.utils.plot_model(model, show_shapes=True, dpi=70)
"""
Explanation: With this, define a tf.keras.Model that applies, sequentially, the ReUploadingPQC layer previously defined, followed by a post-processing layer that computes the weighted observables using Alternating, which are then fed into a tf.keras.layers.Softmax layer that outputs the softmax-VQC policy of the agent.
End of explanation
"""
def gather_episodes(state_bounds, n_actions, model, n_episodes, env_name):
"""Interact with environment in batched fashion."""
trajectories = [defaultdict(list) for _ in range(n_episodes)]
envs = [gym.make(env_name) for _ in range(n_episodes)]
done = [False for _ in range(n_episodes)]
states = [e.reset() for e in envs]
while not all(done):
unfinished_ids = [i for i in range(n_episodes) if not done[i]]
normalized_states = [s/state_bounds for i, s in enumerate(states) if not done[i]]
for i, state in zip(unfinished_ids, normalized_states):
trajectories[i]['states'].append(state)
# Compute policy for all unfinished envs in parallel
states = tf.convert_to_tensor(normalized_states)
action_probs = model([states])
# Store action and transition all environments to the next state
states = [None for i in range(n_episodes)]
for i, policy in zip(unfinished_ids, action_probs.numpy()):
action = np.random.choice(n_actions, p=policy)
states[i], reward, done[i], _ = envs[i].step(action)
trajectories[i]['actions'].append(action)
trajectories[i]['rewards'].append(reward)
return trajectories
"""
Explanation: You can now train the PQC policy on CartPole-v1, using, e.g., the basic REINFORCE algorithm (see Alg. 1 in <a href="https://arxiv.org/abs/2103.05577" class="external">[1]</a>). Pay attention to the following points:
1. Because scaling parameters, variational angles and observables weights are trained with different learning rates, it is convenient to define 3 separate optimizers with their own learning rates, each updating one of these groups of parameters.
2. The loss function in policy-gradient RL is
$$ \mathcal{L}(\theta) = -\frac{1}{|\mathcal{B}|}\sum_{s_0,a_0,r_1,s_1,a_1, \ldots \in \mathcal{B}} \left(\sum_{t=0}^{H-1} \log(\pi_\theta(a_t|s_t)) \sum_{t'=1}^{H-t} \gamma^{t'} r_{t+t'} \right)$$
for a batch $\mathcal{B}$ of episodes $(s_0,a_0,r_1,s_1,a_1, \ldots)$ of interactions in the environment following the policy $\pi_\theta$. This is different from a supervised learning loss with fixed target values that the model should fit, which makes it impossible to train the policy with a simple call such as model.fit. Instead, a tf.GradientTape lets you keep track of the computations involving the PQC (i.e., policy sampling) and store their contributions to the loss during the interaction. After running a batch of episodes, you can then apply backpropagation on these computations to get the gradients of the loss with respect to the PQC parameters and use the optimizers to update the policy model.
Start by defining a function that gathers episodes of interaction with the environment:
End of explanation
"""
def compute_returns(rewards_history, gamma):
"""Compute discounted returns with discount factor `gamma`."""
returns = []
discounted_sum = 0
for r in rewards_history[::-1]:
discounted_sum = r + gamma * discounted_sum
returns.insert(0, discounted_sum)
# Normalize them for faster and more stable learning
returns = np.array(returns)
returns = (returns - np.mean(returns)) / (np.std(returns) + 1e-8)
returns = returns.tolist()
return returns
"""
Explanation: and a function that computes discounted returns $\sum_{t'=1}^{H-t} \gamma^{t'} r_{t+t'}$ out of the rewards $r_t$ collected in an episode:
End of explanation
"""
state_bounds = np.array([2.4, 2.5, 0.21, 2.5])
gamma = 1
batch_size = 10
n_episodes = 1000
"""
Explanation: Define the hyperparameters:
End of explanation
"""
optimizer_in = tf.keras.optimizers.Adam(learning_rate=0.1, amsgrad=True)
optimizer_var = tf.keras.optimizers.Adam(learning_rate=0.01, amsgrad=True)
optimizer_out = tf.keras.optimizers.Adam(learning_rate=0.1, amsgrad=True)
# Assign the model parameters to each optimizer
w_in, w_var, w_out = 1, 0, 2
"""
Explanation: Prepare the optimizers:
End of explanation
"""
@tf.function
def reinforce_update(states, actions, returns, model):
states = tf.convert_to_tensor(states)
actions = tf.convert_to_tensor(actions)
returns = tf.convert_to_tensor(returns)
with tf.GradientTape() as tape:
tape.watch(model.trainable_variables)
logits = model(states)
p_actions = tf.gather_nd(logits, actions)
log_probs = tf.math.log(p_actions)
loss = tf.math.reduce_sum(-log_probs * returns) / batch_size
grads = tape.gradient(loss, model.trainable_variables)
for optimizer, w in zip([optimizer_in, optimizer_var, optimizer_out], [w_in, w_var, w_out]):
optimizer.apply_gradients([(grads[w], model.trainable_variables[w])])
"""
Explanation: Implement a function that updates the policy using states, actions and returns:
End of explanation
"""
env_name = "CartPole-v1"
# Start training the agent
episode_reward_history = []
for batch in range(n_episodes // batch_size):
# Gather episodes
episodes = gather_episodes(state_bounds, n_actions, model, batch_size, env_name)
# Group states, actions and returns in numpy arrays
states = np.concatenate([ep['states'] for ep in episodes])
actions = np.concatenate([ep['actions'] for ep in episodes])
rewards = [ep['rewards'] for ep in episodes]
returns = np.concatenate([compute_returns(ep_rwds, gamma) for ep_rwds in rewards])
returns = np.array(returns, dtype=np.float32)
id_action_pairs = np.array([[i, a] for i, a in enumerate(actions)])
# Update model parameters.
reinforce_update(states, id_action_pairs, returns, model)
# Store collected rewards
for ep_rwds in rewards:
episode_reward_history.append(np.sum(ep_rwds))
avg_rewards = np.mean(episode_reward_history[-10:])
print('Finished episode', (batch + 1) * batch_size,
'Average rewards: ', avg_rewards)
if avg_rewards >= 500.0:
break
"""
Explanation: Now implement the main training loop of the agent.
Note: This agent may need to simulate several million quantum circuits and can take as much as ~20 minutes to finish training.
End of explanation
"""
plt.figure(figsize=(10,5))
plt.plot(episode_reward_history)
plt.xlabel('Episode')
plt.ylabel('Collected rewards')
plt.show()
"""
Explanation: Plot the learning history of the agent:
End of explanation
"""
# from PIL import Image
# env = gym.make('CartPole-v1')
# state = env.reset()
# frames = []
# for t in range(500):
# im = Image.fromarray(env.render(mode='rgb_array'))
# frames.append(im)
# policy = model([tf.convert_to_tensor([state/state_bounds])])
# action = np.random.choice(n_actions, p=policy.numpy()[0])
# state, _, done, _ = env.step(action)
# if done:
# break
# env.close()
# frames[1].save('./images/gym_CartPole.gif',
# save_all=True, append_images=frames[2:], optimize=False, duration=40, loop=0)
"""
Explanation: Congratulations, you have trained a quantum policy gradient model on Cartpole! The plot above shows the rewards collected by the agent per episode throughout its interaction with the environment. You should see that after a few hundred episodes, the performance of the agent gets close to optimal, i.e., 500 rewards per episode.
You can now visualize the performance of your agent using env.render() in a sample episode (uncomment/run the following cell only if your notebook has access to a display):
End of explanation
"""
class Rescaling(tf.keras.layers.Layer):
def __init__(self, input_dim):
super(Rescaling, self).__init__()
self.input_dim = input_dim
self.w = tf.Variable(
initial_value=tf.ones(shape=(1,input_dim)), dtype="float32",
trainable=True, name="obs-weights")
def call(self, inputs):
return tf.math.multiply((inputs+1)/2, tf.repeat(self.w,repeats=tf.shape(inputs)[0],axis=0))
"""
Explanation: <img src="./images/gym_CartPole.gif" width="700">
3. Deep Q-learning with PQC Q-function approximators
In this section, you will move to the implementation of the deep Q-learning algorithm presented in <a href="https://arxiv.org/abs/2103.15084" class="external">[2]</a>. As opposed to a policy-gradient approach, the deep Q-learning method uses a PQC to approximate the Q-function of the agent. That is, the PQC defines a function approximator:
$$ Q_\theta(s,a) = \langle O_a \rangle_{s,\theta} $$
where $\langle O_a \rangle_{s,\theta}$ are expectation values of observables $O_a$ (one per action) measured at the output of the PQC.
These Q-values are updated using a loss function derived from Q-learning:
$$ \mathcal{L}(\theta) = \frac{1}{|\mathcal{B}|}\sum_{s,a,r,s' \in \mathcal{B}} \left(Q_\theta(s,a) - [r +\max_{a'} Q_{\theta'}(s',a')]\right)^2$$
for a batch $\mathcal{B}$ of $1$-step interactions $(s,a,r,s')$ with the environment, sampled from the replay memory, and parameters $\theta'$ specifying the target PQC (i.e., a copy of the main PQC, whose parameters are sporadically copied from the main PQC throughout learning).
You can adopt the same observables used in <a href="https://arxiv.org/abs/2103.15084" class="external">[2]</a> for CartPole, namely a $Z_0Z_1$ Pauli product for action $0$ and a $Z_2Z_3$ Pauli product for action $1$. Both observables are re-scaled so their expectation values are in $[0,1]$ and weighted by an action-specific weight. To implement the re-scaling and weighting of the Pauli products, you can define again an extra tf.keras.layers.Layer that stores the action-specific weights and applies them multiplicatively on the expectation values $\left(1+\langle Z_0Z_1 \rangle_{s,\theta}\right)/2$ and $\left(1+\langle Z_2Z_3 \rangle_{s,\theta}\right)/2$.
End of explanation
"""
n_qubits = 4 # Dimension of the state vectors in CartPole
n_layers = 5 # Number of layers in the PQC
n_actions = 2 # Number of actions in CartPole
qubits = cirq.GridQubit.rect(1, n_qubits)
ops = [cirq.Z(q) for q in qubits]
observables = [ops[0]*ops[1], ops[2]*ops[3]] # Z_0*Z_1 for action 0 and Z_2*Z_3 for action 1
"""
Explanation: Prepare the definition of your PQC and its observables:
End of explanation
"""
def generate_model_Qlearning(qubits, n_layers, n_actions, observables, target):
"""Generates a Keras model for a data re-uploading PQC Q-function approximator."""
input_tensor = tf.keras.Input(shape=(len(qubits), ), dtype=tf.dtypes.float32, name='input')
re_uploading_pqc = ReUploadingPQC(qubits, n_layers, observables, activation='tanh')([input_tensor])
process = tf.keras.Sequential([Rescaling(len(observables))], name=target*"Target"+"Q-values")
Q_values = process(re_uploading_pqc)
model = tf.keras.Model(inputs=[input_tensor], outputs=Q_values)
return model
model = generate_model_Qlearning(qubits, n_layers, n_actions, observables, False)
model_target = generate_model_Qlearning(qubits, n_layers, n_actions, observables, True)
model_target.set_weights(model.get_weights())
tf.keras.utils.plot_model(model, show_shapes=True, dpi=70)
tf.keras.utils.plot_model(model_target, show_shapes=True, dpi=70)
"""
Explanation: Define a tf.keras.Model that, similarly to the PQC-policy model, constructs a Q-function approximator that is used to generate the main and target models of our Q-learning agent.
End of explanation
"""
def interact_env(state, model, epsilon, n_actions, env):
# Preprocess state
state_array = np.array(state)
state = tf.convert_to_tensor([state_array])
# Sample action
coin = np.random.random()
if coin > epsilon:
q_vals = model([state])
action = int(tf.argmax(q_vals[0]).numpy())
else:
action = np.random.choice(n_actions)
# Apply sampled action in the environment, receive reward and next state
next_state, reward, done, _ = env.step(action)
interaction = {'state': state_array, 'action': action, 'next_state': next_state.copy(),
'reward': reward, 'done':float(done)}
return interaction
"""
Explanation: You can now implement the deep Q-learning algorithm and test it on the CartPole-v1 environment. For the policy of the agent, you can use an $\varepsilon$-greedy policy:
$$ \pi(a|s) =
\begin{cases}
\delta_{a,\text{argmax}_{a'} Q_\theta(s,a')} & \text{w.p.}\quad 1 - \varepsilon \\
\frac{1}{\text{num\_actions}} & \text{w.p.}\quad \varepsilon
\end{cases} $$
where $\varepsilon$ is multiplicatively decayed at each episode of interaction.
Start by defining a function that performs an interaction step in the environment:
End of explanation
"""
@tf.function
def Q_learning_update(states, actions, rewards, next_states, done, model, gamma, n_actions):
states = tf.convert_to_tensor(states)
actions = tf.convert_to_tensor(actions)
rewards = tf.convert_to_tensor(rewards)
next_states = tf.convert_to_tensor(next_states)
done = tf.convert_to_tensor(done)
# Compute their target q_values and the masks on sampled actions
future_rewards = model_target([next_states])
target_q_values = rewards + (gamma * tf.reduce_max(future_rewards, axis=1)
* (1.0 - done))
masks = tf.one_hot(actions, n_actions)
# Train the model on the states and target Q-values
with tf.GradientTape() as tape:
tape.watch(model.trainable_variables)
q_values = model([states])
q_values_masked = tf.reduce_sum(tf.multiply(q_values, masks), axis=1)
loss = tf.keras.losses.Huber()(target_q_values, q_values_masked)
# Backpropagation
grads = tape.gradient(loss, model.trainable_variables)
for optimizer, w in zip([optimizer_in, optimizer_var, optimizer_out], [w_in, w_var, w_out]):
optimizer.apply_gradients([(grads[w], model.trainable_variables[w])])
"""
Explanation: and a function that updates the Q-function using a batch of interactions:
End of explanation
"""
gamma = 0.99
n_episodes = 2000
# Define replay memory
max_memory_length = 10000 # Maximum replay length
replay_memory = deque(maxlen=max_memory_length)
epsilon = 1.0 # Epsilon greedy parameter
epsilon_min = 0.01 # Minimum epsilon greedy parameter
decay_epsilon = 0.99 # Decay rate of epsilon greedy parameter
batch_size = 16
steps_per_update = 10 # Train the model every x steps
steps_per_target_update = 30 # Update the target model every x steps
"""
Explanation: Define the hyperparameters:
End of explanation
"""
optimizer_in = tf.keras.optimizers.Adam(learning_rate=0.001, amsgrad=True)
optimizer_var = tf.keras.optimizers.Adam(learning_rate=0.001, amsgrad=True)
optimizer_out = tf.keras.optimizers.Adam(learning_rate=0.1, amsgrad=True)
# Assign the model parameters to each optimizer
w_in, w_var, w_out = 1, 0, 2
"""
Explanation: Prepare the optimizers:
End of explanation
"""
env = gym.make("CartPole-v1")
episode_reward_history = []
step_count = 0
for episode in range(n_episodes):
episode_reward = 0
state = env.reset()
while True:
# Interact with env
interaction = interact_env(state, model, epsilon, n_actions, env)
# Store interaction in the replay memory
replay_memory.append(interaction)
state = interaction['next_state']
episode_reward += interaction['reward']
step_count += 1
# Update model
if step_count % steps_per_update == 0:
# Sample a batch of interactions and update Q_function
training_batch = np.random.choice(replay_memory, size=batch_size)
Q_learning_update(np.asarray([x['state'] for x in training_batch]),
np.asarray([x['action'] for x in training_batch]),
np.asarray([x['reward'] for x in training_batch], dtype=np.float32),
np.asarray([x['next_state'] for x in training_batch]),
np.asarray([x['done'] for x in training_batch], dtype=np.float32),
model, gamma, n_actions)
# Update target model
if step_count % steps_per_target_update == 0:
model_target.set_weights(model.get_weights())
# Check if the episode is finished
if interaction['done']:
break
# Decay epsilon
epsilon = max(epsilon * decay_epsilon, epsilon_min)
episode_reward_history.append(episode_reward)
if (episode+1)%10 == 0:
avg_rewards = np.mean(episode_reward_history[-10:])
print("Episode {}/{}, average last 10 rewards {}".format(
episode+1, n_episodes, avg_rewards))
if avg_rewards >= 500.0:
break
"""
Explanation: Now implement the main training loop of the agent.
Note: This agent may need to simulate several million quantum circuits and can take as much as ~40 minutes to finish training.
End of explanation
"""
plt.figure(figsize=(10,5))
plt.plot(episode_reward_history)
plt.xlabel('Episode')
plt.ylabel('Collected rewards')
plt.show()
"""
Explanation: Plot the learning history of the agent:
End of explanation
"""
|
abulbasar/machine-learning | Scikit - 12 Neural Network using Numpy.ipynb | apache-2.0 | class NeuralNetwork:
def __init__(self, layers, learning_rate, random_state = None):
self.layers_ = layers
self.num_features = layers[0]
self.num_classes = layers[-1]
self.hidden = layers[1:-1]
self.learning_rate = learning_rate
if random_state is not None:
np.random.seed(random_state)
self.W_sets = []
for i in range(len(self.layers_) - 1):
n_prev = layers[i]
n_next = layers[i + 1]
m = np.random.normal(0.0, pow(n_next, -0.5), (n_next, n_prev))
self.W_sets.append(m)
def activation_function(self, z):
return 1 / (1 + np.exp(-z))
def fit(self, training, targets):
inputs0 = inputs = np.array(training, ndmin=2).T
assert inputs.shape[0] == self.num_features, \
"no of features {0}, it must be {1}".format(inputs.shape[0], self.num_features)
targets = np.array(targets, ndmin=2).T
assert targets.shape[0] == self.num_classes, \
"no of classes {0}, it must be {1}".format(targets.shape[0], self.num_classes)
outputs = []
for i in range(len(self.layers_) - 1):
W = self.W_sets[i]
inputs = self.activation_function(W.dot(inputs))
outputs.append(inputs)
errors = [None] * (len(self.layers_) - 1)
errors[-1] = targets - outputs[-1]
#print("Last layer", targets.shape, outputs[-1].shape, errors[-1].shape)
#print("Last layer", targets, outputs[-1])
#Back propagation
for i in range(len(self.layers_) - 1)[::-1]:
W = self.W_sets[i]
E = errors[i]
O = outputs[i]
I = outputs[i - 1] if i > 0 else inputs0
#print("i: ", i, ", E: ", E.shape, ", O:", O.shape, ", I: ", I.shape, ",W: ", W.shape)
W += self.learning_rate * (E * O * (1 - O)).dot(I.T)
if i > 0:
errors[i-1] = W.T.dot(E)
def predict(self, inputs, cls = False):
inputs = np.array(inputs, ndmin=2).T
assert inputs.shape[0] == self.num_features, \
"no of features {0}, it must be {1}".format(inputs.shape[0], self.num_features)
for i in range(len(self.layers_) - 1):
W = self.W_sets[i]
input_next = W.dot(inputs)
inputs = activated = self.activation_function(input_next)
return np.argmax(activated.T, axis=1) if cls else activated.T
def score(self, X_test, y_test):
y_test = np.array(y_test).flatten()
y_test_pred = self.predict(X_test, cls=True)
return np.sum(y_test_pred == y_test) / y_test.shape[0]
"""
Explanation: Neural Networks Classifier
Author: Abul Basar
End of explanation
"""
nn = NeuralNetwork([784,100,10], 0.3, random_state=0)
for i in np.arange(X_train.shape[0]):
nn.fit(X_train[i], y_train_ohe[i])
nn.predict(X_train[2]), nn.predict(X_train[2], cls=True)
print("Testing accuracy: ", nn.score(X_test, y_test), ", training accuracy: ", nn.score(X_train, y_train))
#list(zip(y_test_pred, y_test))
"""
Explanation: Run neural net classifier on small dataset
Training set size: 100, testing set size 10
End of explanation
"""
train = pd.read_csv("../data/MNIST/mnist_train.csv", header=None, dtype="float64")
X_train = normalize_fetures(train.iloc[:, 1:].values)
y_train = train.iloc[:, [0]].values.astype("int32")
y_train_ohe = normalize_labels(y_train)
print(y_train.shape, y_train_ohe.shape)
test = pd.read_csv("../data/MNIST/mnist_test.csv", header=None, dtype="float64")
X_test = normalize_fetures(test.iloc[:, 1:].values)
y_test = test.iloc[:, 0].values.astype("int32")
"""
Explanation: Load full MNIST dataset.
Training set size 60,000 and test set size 10,000
Original: http://yann.lecun.com/exdb/mnist/
CSV version:
training: https://pjreddie.com/media/files/mnist_train.csv
testing: https://pjreddie.com/media/files/mnist_test.csv
End of explanation
"""
timer.reset()
nn = NeuralNetwork([784,100,10], 0.3, random_state=0)
for i in range(X_train.shape[0]):
nn.fit(X_train[i], y_train_ohe[i])
timer("training time")
accuracy = nn.score(X_test, y_test)
print("Testing accuracy: ", nn.score(X_test, y_test), ", Training accuracy: ", nn.score(X_train, y_train))
"""
Explanation: Run the Neural Network classifier and measure performance
End of explanation
"""
params = 10 ** - np.linspace(0.01, 2, 10)
scores_train = []
scores_test = []
timer.reset()
for p in params:
nn = NeuralNetwork([784,100,10], p, random_state = 0)
for i in range(X_train.shape[0]):
nn.fit(X_train[i], y_train_ohe[i])
scores_train.append(nn.score(X_train, y_train))
scores_test.append(nn.score(X_test, y_test))
timer()
plt.plot(params, scores_test, label = "Test score")
plt.plot(params, scores_train, label = "Training score")
plt.xlabel("Learning Rate")
plt.ylabel("Accuracy")
plt.legend()
plt.title("Effect of learning rate")
print("Accuracy scores")
pd.DataFrame({"learning_rate": params, "train": scores_train, "test": scores_test})
"""
Explanation: Effect of learning rate
End of explanation
"""
epochs = np.arange(20)
learning_rate = 0.077
scores_train, scores_test = [], []
nn = NeuralNetwork([784,100,10], learning_rate, random_state = 0)
indices = np.arange(X_train.shape[0])
timer.reset()
for _ in epochs:
np.random.shuffle(indices)
for i in indices:
nn.fit(X_train[i], y_train_ohe[i])
scores_train.append(nn.score(X_train, y_train))
scores_test.append(nn.score(X_test, y_test))
timer("test score: %f, training score: %f" % (scores_test[-1], scores_train[-1]))
plt.plot(epochs, scores_test, label = "Test score")
plt.plot(epochs, scores_train, label = "Training score")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(loc = "lower right")
plt.title("Effect of Epochs")
print("Accuracy scores")
pd.DataFrame({"epochs": epochs, "train": scores_train, "test": scores_test})
"""
Explanation: Effect of Epochs
End of explanation
"""
num_layers = 50 * (np.arange(10) + 1)
learning_rate = 0.077
scores_train, scores_test = [], []
timer.reset()
for p in num_layers:
nn = NeuralNetwork([784, p,10], learning_rate, random_state = 0)
indices = np.arange(X_train.shape[0])
for i in indices:
nn.fit(X_train[i], y_train_ohe[i])
scores_train.append(nn.score(X_train, y_train))
scores_test.append(nn.score(X_test, y_test))
timer("size: %d, test score: %f, training score: %f" % (p, scores_test[-1], scores_train[-1]))
plt.plot(num_layers, scores_test, label = "Test score")
plt.plot(num_layers, scores_train, label = "Training score")
plt.xlabel("Hidden Layer Size")
plt.ylabel("Accuracy")
plt.legend(loc = "lower right")
plt.title("Effect of size (num of nodes) of the hidden layer")
print("Accuracy scores")
pd.DataFrame({"layer": num_layers, "train": scores_train, "test": scores_test})
"""
Explanation: Effect of size (num of nodes) of the single hidden layer
End of explanation
"""
num_layers = np.arange(5) + 1
learning_rate = 0.077
scores_train, scores_test = [], []
timer.reset()
for p in num_layers:
layers = [100] * p
layers.insert(0, 784)
layers.append(10)
nn = NeuralNetwork(layers, learning_rate, random_state = 0)
indices = np.arange(X_train.shape[0])
for i in indices:
nn.fit(X_train[i], y_train_ohe[i])
scores_train.append(nn.score(X_train, y_train))
scores_test.append(nn.score(X_test, y_test))
timer("size: %d, test score: %f, training score: %f" % (p, scores_test[-1], scores_train[-1]))
plt.plot(num_layers, scores_test, label = "Test score")
plt.plot(num_layers, scores_train, label = "Training score")
plt.xlabel("No of hidden layers")
plt.ylabel("Accuracy")
plt.legend(loc = "upper right")
plt.title("Effect of using multiple hidden layers, \nNodes per layer=100")
print("Accuracy scores")
pd.DataFrame({"layer": num_layers, "train": scores_train, "test": scores_test})
"""
Explanation: Effect of using multiple hidden layers
End of explanation
"""
img = scipy.ndimage.interpolation.rotate(X_train[110].reshape(28, 28), -10, reshape=False)
print(img.shape)
plt.imshow(img, interpolation=None, cmap="Greys")
epochs = np.arange(10)
learning_rate = 0.077
scores_train, scores_test = [], []
nn = NeuralNetwork([784,250,10], learning_rate, random_state = 0)
indices = np.arange(X_train.shape[0])
timer.reset()
for _ in epochs:
np.random.shuffle(indices)
for i in indices:
for rotation in [-10, 0, 10]:
img = scipy.ndimage.interpolation.rotate(X_train[i].reshape(28, 28), rotation, cval=0.01, order=1, reshape=False)
nn.fit(img.flatten(), y_train_ohe[i])
scores_train.append(nn.score(X_train, y_train))
scores_test.append(nn.score(X_test, y_test))
timer("test score: %f, training score: %f" % (scores_test[-1], scores_train[-1]))
plt.plot(epochs, scores_test, label = "Test score")
plt.plot(epochs, scores_train, label = "Training score")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(loc = "lower right")
plt.title("Trained with rotation (+/- 10)\n Hidden Nodes: 250, LR: 0.077")
print("Accuracy scores")
pd.DataFrame({"epochs": epochs, "train": scores_train, "test": scores_test})
"""
Explanation: Rotation
End of explanation
"""
y_test_pred = nn.predict(X_test, cls=True)
missed = y_test_pred != y_test
pd.Series(y_test[missed]).value_counts().plot(kind = "bar")
plt.title("No of mis classification by digit")
plt.ylabel("No of misclassification")
plt.xlabel("Digit")
fig, _ = plt.subplots(6, 4, figsize = (15, 10))
for i, ax in enumerate(fig.axes):
ax.imshow(X_test[missed][i].reshape(28, 28), interpolation="nearest", cmap="Greys")
ax.set_title("T: %d, P: %d" % (y_test[missed][i], y_test_pred[missed][i]))
plt.tight_layout()
img = scipy.ndimage.imread("/Users/abulbasar/Downloads/9-03.png", mode="L")
print("Original size:", img.shape)
img = normalize_fetures(scipy.misc.imresize(img, (28, 28)))
img = np.abs(img - 0.99)
plt.imshow(img, cmap="Greys", interpolation="none")
print("Predicted value: ", nn.predict(img.flatten(), cls=True))
"""
Explanation: Which characters was the NN most wrong about?
End of explanation
"""
|
rvperry/phys202-2015-work | assignments/assignment05/MatplotlibEx03.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Matplotlib Exercise 3
Imports
End of explanation
"""
def well2d(x, y, nx, ny, L=1.0):
"""Compute the 2d quantum well wave function."""
psi=(2/L)*np.sin((nx*np.pi*x)/L)*np.sin((ny*np.pi*y)/L)
return psi
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
assert len(psi)==10
assert psi.shape==(10,)
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
psi
"""
Explanation: Contour plots of 2d wavefunctions
The wavefunction of a 2d quantum well is:
$$ \psi_{n_x,n_y}(x,y) = \frac{2}{L}
\sin{\left( \frac{n_x \pi x}{L} \right)}
\sin{\left( \frac{n_y \pi y}{L} \right)} $$
This is a scalar field and $n_x$ and $n_y$ are quantum numbers that measure the level of excitation in the x and y directions. $L$ is the size of the well.
Define a function well2d that computes this wavefunction for values of x and y that are NumPy arrays.
End of explanation
"""
plt.colormaps?

# Build 2-D coordinate grids over [0, 1] x [0, 1] for the contour plot
x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
plt.figure(figsize=(9,7))
plt.contourf(x, y, well2d(x, y, 3, 2, 1))
plt.set_cmap('RdBu')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Well 2D')
plt.tick_params(direction='out')
plt.colorbar()
assert True # use this cell for grading the contour plot
"""
Explanation: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction:
Use $n_x=3$, $n_y=2$ and $L=1$.
Use the limits $[0,1]$ for the x and y axis.
Customize your plot to make it effective and beautiful.
Use a non-default colormap.
Add a colorbar to you visualization.
First make a plot using one of the contour functions:
End of explanation
"""
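Since the instructions above lean on numpy.meshgrid, here is a minimal, standalone illustration of what it produces (independent of the plotting code):

```python
import numpy as np

# meshgrid expands two 1-D coordinate vectors into 2-D coordinate grids
x, y = np.meshgrid(np.array([0.0, 0.5, 1.0]), np.array([0.0, 1.0]))
print(x.shape, y.shape)  # (2, 3) (2, 3)
print(x[0])              # [0.  0.5 1. ]
print(y[:, 0])           # [0. 1.]
```

Every (x[i, j], y[i, j]) pair is one grid point, which is exactly the form contourf and pcolor expect.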
plt.figure(figsize=(9,7))
x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
plt.pcolor(x, y, well2d(x, y, 3, 2, 1))
plt.xlabel('x')
plt.ylabel('y')
plt.title('Well 2D')
plt.tick_params(direction='out')
plt.colorbar()
assert True # use this cell for grading the pcolor plot
"""
Explanation: Next make a visualization using one of the pcolor functions:
End of explanation
"""
|
VUInformationRetrieval/IR2016_2017 | 01_inspecting.ipynb | gpl-2.0 | import pickle, bz2
Summaries_file = 'data/malaria__Summaries.pkl.bz2'
Summaries = pickle.load( bz2.BZ2File( Summaries_file, 'rb' ) )
"""
Explanation: Mini-Assignment 1: Inspecting the PubMed Paper Dataset
In this code for the first mini-assignment, we will get to know the dataset that we will be using throughout. You can find the assignment tasks at the very bottom of this document.
Our dataset consists of short texts (article abstracts) from the PubMed database of scientific publications in the Life Science domain. As the full dataset consists of millions of documents, we are using just a small subset, namely all publications that contain the word "malaria" in their title or abstract. You can download that dataset in the form of four files (malaria__Summaries.pkl.bz2, etc.) from Blackboard. Save these four files in a directory called data, which should be a sub-directory of the one that contains this notebook file (or adjust the file path in the code)
Loading the Dataset
End of explanation
"""
from collections import namedtuple
paper = namedtuple( 'paper', ['title', 'authors', 'year', 'doi'] )
for (id, paper_info) in Summaries.items():
Summaries[id] = paper( *paper_info )
Summaries[24130474]
Summaries[24130474].title
"""
Explanation: To make it easier to access the data, we convert here paper entries into named tuples. This will allow us to refer to fields by keyword (like var.year), rather than index (like var[2]).
End of explanation
"""
import matplotlib.pyplot as plt
# show plots inline within the notebook
%matplotlib inline
# set plots' resolution
plt.rcParams['savefig.dpi'] = 100
"""
Explanation: Dataset Statistics
Plotting relies on matplotlib and NumPy. If your installation doesn't have them included already, you can download them here and here, respectively.
End of explanation
"""
from collections import Counter
paper_years = [ p.year for p in Summaries.values() ]
papers_per_year = sorted( Counter(paper_years).items() )
print('Number of papers in the dataset per year for the past decade:')
print(papers_per_year[-10:])
"""
Explanation: Papers per Year
Here, we will get information on how many papers in the dataset were published per year.
We'll be using the Counter class to determine the number of papers per year.
End of explanation
"""
papers_per_year_since_1940 = [ (y,count) for (y,count) in papers_per_year if y >= 1940 ]
years_since_1940 = [ y for (y,count) in papers_per_year_since_1940 ]
nr_papers_since_1940 = [ count for (y,count) in papers_per_year_since_1940 ]
print('Number of papers in the dataset published since 1940:')
print(sum(nr_papers_since_1940))
"""
Explanation: Filtering results, to obtain only papers since 1940:
End of explanation
"""
plt.bar(left=years_since_1940, height=nr_papers_since_1940, width=1.0)
plt.xlim(1940, 2017)
plt.xlabel('year')
plt.ylabel('number of papers');
"""
Explanation: Creating a bar plot to visualize the results (using matplotlib.pyplot.bar):
End of explanation
"""
plt.hist( x=[p.year for p in Summaries.values()], bins=range(1940,2018) );
plt.xlim(1940, 2017)
plt.xlabel('year')
plt.ylabel('number of papers');
"""
Explanation: Alternatively, you can get the same result in a more direct manner by plotting it as a histogram with matplotlib.pyplot.hist:
End of explanation
"""
# flattening the list of lists of authors:
authors_expanded = [ auth for paper in Summaries.values() for auth in paper.authors ]
nr_papers_by_author = Counter( authors_expanded )
print('Number of authors in the dataset with distinct names:')
print(len(nr_papers_by_author))
print('Top 50 authors with greatest number of papers:')
print(sorted(nr_papers_by_author.items(), key=lambda i:i[1], reverse=True)[:50])
"""
Explanation: Papers per Author
Here, we will obtain the distribution characterizing the number of papers published by an author.
End of explanation
"""
plt.hist( x=list(nr_papers_by_author.values()), bins=range(51), log=True )
plt.xlabel('number of papers authored')
plt.ylabel('number of authors');
"""
Explanation: Creating a histogram to visualize the results:
End of explanation
"""
plt.hist( x=[ len(p.authors) for p in Summaries.values() ], bins=range(20), align='left', normed=True )
plt.xlabel('number of authors in one paper')
plt.ylabel('fraction of papers')
plt.xlim(-0.5, 15.5);
"""
Explanation: Authors per Paper
End of explanation
"""
words = [ word.lower().rstrip('.') for paper in Summaries.values() for word in paper.title.split(' ') ]
word_counts = Counter(words)
print('Number of distinct words in the paper titles:')
print(len(word_counts))
"""
Explanation: Words in Titles
Assemble the list of words in paper titles, convert them to lowercase, and remove trailing '.':
End of explanation
"""
# Add your code here
"""
Explanation: Assignments
Your name: ...
Task 1
Create a Python dictionary object that returns sets of author names for a given year. You can name this dictionary, for example, authors_at_year. (You can use a defaultdict with a default value of set.) Demonstrate the working of this dictionary by showing the author set for the year 1941.
End of explanation
"""
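As a hedged sketch of the approach the hint describes (a defaultdict with set as the default), here is the shape of a solution using a tiny mock stand-in for Summaries — the mock papers below are invented purely for illustration:

```python
from collections import defaultdict, namedtuple

paper = namedtuple('paper', ['title', 'authors', 'year', 'doi'])

# Tiny mock of the real Summaries dict, just to show the idea
mock_summaries = {
    1: paper('Paper A', ['Smith J', 'Jones K'], 1941, None),
    2: paper('Paper B', ['Jones K'], 1941, None),
    3: paper('Paper C', ['Brown L'], 1950, None),
}

authors_at_year = defaultdict(set)
for p in mock_summaries.values():
    authors_at_year[p.year].update(p.authors)

print(sorted(authors_at_year[1941]))  # ['Jones K', 'Smith J']
```

The same loop over the real Summaries dictionary would build the full author sets per year.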
# Add your code here
"""
Explanation: Task 2
Based on the dictionary authors_at_year from exercise 1 above, create a plot for the years from 1940 until 2016 that shows how many authors published at least one paper. (You can retrieve the number of unique items in a set s with len(s).)
End of explanation
"""
# Add your code here
"""
Explanation: Task 3
Calculate and plot (e.g. using plt.plot) a graph of the frequency of the 100 most frequent words in titles of papers, from most frequent to least frequent. (You can make use of the data structures created above.)
End of explanation
"""
# Add your code here
"""
Explanation: Task 4
Print out the top 50 most often occurring words in the paper's titles. (You can again make use of the data structures created above.)
End of explanation
"""
|
michaelgat/Udacity_DL | tv-script-generation/dlnd_tv_script_generation-mg1.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (150, 170)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab = set(text)
vocab_to_int = {w:i for i, w in enumerate(vocab)}
int_to_vocab = {i:w for i, w in enumerate(vocab)}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {
'.' : '||period||',
',' : '||comma||',
'\"' : '||quotation_mark||',
';' : '||semicolon||',
        '!' : '||exclamation_mark||',
'?' : '||question_mark||',
'(' : '||left_parentheses||',
')' : '||right_parentheses||',
'--': '||dash||',
'\n': '||return||'
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], 'input')
targets = tf.placeholder(tf.int32, [None, None], 'targets')
learning_rate = tf.placeholder(tf.float32, None, 'learning_rate')
return inputs, targets, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([cell])
initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), 'initial_state')
return cell, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embeddings = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embeddings, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype = tf.float32)
final_state = tf.identity(state, 'final_state')
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, rnn_size)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn = None)
return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
batch_length = batch_size * seq_length
total_batches = len(int_text) // batch_length
batches = np.zeros([total_batches, 2, batch_size, seq_length], dtype = np.int32)
int_text_array = np.array(int_text[:total_batches * batch_length])
int_text_matrix = int_text_array.reshape((batch_size, -1))
batch = 0
for n in range(0, int_text_matrix.shape[1], seq_length):
x = int_text_matrix[:, n:n + seq_length]
y = np.zeros(x.shape)
if batch != total_batches - 1:
y[:,:] = int_text_matrix[:, n + 1 : n + seq_length + 1]
else:
for i in range(batch_size):
index = (n * (i + 1)) + ((i * seq_length) + 1)
y[i,:] = int_text_array.take(range(index, index + seq_length), mode = "wrap")
batches[batch, 0] = x
batches[batch, 1] = y
batch += 1
return batches
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
"""
# Number of Epochs
num_epochs = 50
# Batch Size
batch_size = 120
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 256
# Sequence Length
seq_length = 25
# Learning Rate
learning_rate = .01
# Show stats for every n number of batches
show_every_n_batches = 20
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
with tf.device("/gpu:0"):
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
inputs = loaded_graph.get_tensor_by_name('input:0')
init_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probs = loaded_graph.get_tensor_by_name('probs:0')
return inputs, init_state, final_state, probs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
return int_to_vocab[np.argmax(probabilities)]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
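The implementation above uses np.argmax, which is greedy decoding: it always picks the single most likely word, which tends to make generated scripts repetitive. A common alternative — shown here only as a sketch, not a project requirement — is to sample a word in proportion to its predicted probability:

```python
import numpy as np

def pick_word_sampled(probabilities, int_to_vocab):
    # Sample a word id in proportion to its predicted probability
    word_id = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[word_id]

# With a one-hot distribution, sampling necessarily agrees with argmax
vocab = {0: 'moe', 1: 'homer', 2: 'barney'}
probs = np.array([0.0, 1.0, 0.0])
print(pick_word_sampled(probs, vocab))  # homer
```

With a real, spread-out distribution this introduces variety at the cost of occasional low-probability words; temperature scaling is a further refinement along the same lines.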
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'barney_gumble'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/deepdive/08_image/labs/mnist_linear.ipynb | apache-2.0 | import numpy as np
import shutil
import os
import tensorflow as tf
print(tf.__version__)
"""
Explanation: MNIST Image Classification with TensorFlow
This notebook demonstrates how to implement a simple linear image model on MNIST using Estimator.
<hr/>
This <a href="mnist_models.ipynb">companion notebook</a> extends the basic harness of this notebook to a variety of models including DNN, CNN, dropout, pooling etc.
End of explanation
"""
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("mnist/data", one_hot = True, reshape = False)
print(mnist.train.images.shape)
print(mnist.train.labels.shape)
HEIGHT = 28
WIDTH = 28
NCLASSES = 10
import matplotlib.pyplot as plt
IMGNO = 12
plt.imshow(mnist.test.images[IMGNO].reshape(HEIGHT, WIDTH));
"""
Explanation: Exploring the data
Let's download MNIST data and examine the shape. We will need these numbers ...
End of explanation
"""
def linear_model(img):
#TODO
return ylogits, NCLASSES
"""
Explanation: Define the model.
Let's start with a very simple linear classifier. All our models will have this basic interface -- they will take an image and return logits.
End of explanation
"""
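Conceptually, the linear model this TODO asks for is just a flatten followed by a single affine map (matrix multiply plus bias) producing NCLASSES logits. A framework-agnostic NumPy sketch of that computation — the names W, b, and linear_logits are illustrative only, not part of the lab:

```python
import numpy as np

HEIGHT, WIDTH, NCLASSES = 28, 28, 10

def linear_logits(images, W, b):
    # Flatten [batch, 28, 28] -> [batch, 784], then one affine map to 10 logits
    X = images.reshape(images.shape[0], HEIGHT * WIDTH)
    return X @ W + b

rng = np.random.default_rng(0)
batch = rng.random((5, HEIGHT, WIDTH))
W = rng.standard_normal((HEIGHT * WIDTH, NCLASSES)) * 0.01
b = np.zeros(NCLASSES)

logits = linear_logits(batch, W, b)
print(logits.shape)  # (5, 10)
```

The TensorFlow version of the same idea would reshape the image tensor and apply one dense layer with no activation, returning the logits.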
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x = {"image": mnist.train.images},
y = mnist.train.labels,
batch_size = 100,
num_epochs = None,
shuffle = True,
queue_capacity = 5000
)
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
#TODO
)
def serving_input_fn():
inputs = {"image": tf.placeholder(dtype = tf.float32, shape = [None, HEIGHT, WIDTH])}
features = inputs # as-is
return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = inputs)
"""
Explanation: Write Input Functions
As usual, we need to specify input functions for training, evaluation, and prediction.
End of explanation
"""
def image_classifier(features, labels, mode, params):
ylogits, nclasses = linear_model(features["image"])
probabilities = tf.nn.softmax(logits = ylogits)
class_ids = tf.cast(x = tf.argmax(input = probabilities, axis = 1), dtype = tf.uint8)
if mode == tf.estimator.ModeKeys.TRAIN or mode == tf.estimator.ModeKeys.EVAL:
loss = tf.reduce_mean(input_tensor = tf.nn.softmax_cross_entropy_with_logits_v2(logits = ylogits, labels = labels))
if mode == tf.estimator.ModeKeys.TRAIN:
train_op = tf.contrib.layers.optimize_loss(
loss = loss,
global_step = tf.train.get_global_step(),
learning_rate = params["learning_rate"],
optimizer = "Adam")
eval_metric_ops = None
else:
train_op = None
eval_metric_ops = {"accuracy": tf.metrics.accuracy(labels = tf.argmax(input = labels, axis = 1), predictions = class_ids)}
else:
loss = None
train_op = None
eval_metric_ops = None
return tf.estimator.EstimatorSpec(
mode = mode,
predictions = {"probabilities": probabilities, "class_ids": class_ids},
loss = loss,
train_op = train_op,
eval_metric_ops = eval_metric_ops,
export_outputs = {"predictions": tf.estimator.export.PredictOutput({"probabilities": probabilities, "class_ids": class_ids})}
)
"""
Explanation: Write Custom Estimator
I could have simply used a canned LinearClassifier, but later on, I will want to use different models, and so let's write a custom estimator
End of explanation
"""
def train_and_evaluate(output_dir, hparams):
estimator = tf.estimator.Estimator(
model_fn = image_classifier,
model_dir = output_dir,
params = hparams)
train_spec = tf.estimator.TrainSpec(
input_fn = train_input_fn,
max_steps = hparams["train_steps"])
exporter = tf.estimator.LatestExporter(name = "exporter", serving_input_receiver_fn = serving_input_fn)
eval_spec = tf.estimator.EvalSpec(
input_fn = eval_input_fn,
steps = None,
exporters = exporter)
tf.estimator.train_and_evaluate(estimator = estimator, train_spec = train_spec, eval_spec = eval_spec)
"""
Explanation: tf.estimator.train_and_evaluate does distributed training.
End of explanation
"""
OUTDIR = "mnist/learned"
shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time
hparams = {"train_steps": 1000, "learning_rate": 0.01}
train_and_evaluate(OUTDIR, hparams)
"""
Explanation: This is the main() function
End of explanation
"""
|
drericstrong/Blog | 20170402_ArcheryWithGeometryPixelsAndMonteCarlo.ipynb | agpl-3.0 | from PIL import Image
import numpy as np
im = Image.open("TargetMonteCarlo.bmp")
# Convert the image into an array of [R,G,B] per pixel
data = np.array(im.getdata(), np.uint8).reshape(im.size[1], im.size[0], 3)
# For example, the upper left pixel is black ([0,0,0]):
print("Upper left pixel: {}".format(data[0][0]))
# The upper center pixel is blue ([0, 162, 232]):
print("Upper middle pixel: {}".format(data[0][99]))
# And the ~middle pixel is red ([237, 28, 36]):
print("Middle pixel: {}".format(data[99][99]))
"""
Explanation: Initial Problem Setup
Let's imagine a person throwing darts at a circular target. Unfortunately, due to lack of skill, the dart thrower isn't able to accurately aim, so the probability of hitting each part of the target is exactly the same (uniform probability distribution).
The circular target has a bullseye painted in the center, and it will look something like this:
[image in blog post]
where the red circle is the bullseye, the blue circle is the target, and the black square is the area outside the target. Assume that the black square just barely encloses the blue target. Although in real life the dart thrower might in fact miss badly enough that they throw the dart outside the black square, let's assume that this is impossible, for now.
Darts are scored in the following way: 2 points if the bullseye is hit, 1 point if the circular target itself is hit (but not the bullseye), and 0 points if the circular target is not hit at all.
Given these parameters, what's the average score per dart?
In the following sections, we will investigate three potential solutions to this problem:
Geometrical
Image-Based
Monte Carlo
Each time, the parameters of the problem will be adjusted to add complexity.
Situation 1: Geometrical
Using geometry to solve this problem is likely what most people think of first, since we've all probably encountered a similar problem for a class or standardized test. According to established geometrical formulas, we know the areas of the bullseye and target (pi times their radius squared), and we know the area of the black square (its side length, two times the radius of the target, squared). Also, we work under the assumption that the dart is thrown randomly, which means that both the x and y axes can be treated as uniform probability distributions. For example, it's equally likely to be thrown in the upper left corner as the center, assuming the areas are the same.
Given these assumptions, the probability of hitting the bullseye is just the area of the bullseye divided by the total area, where r_b is the radius of the bullseye and r_t is the radius of the target:
$p_{b} = \frac{BullseyeArea}{TotalArea} = \frac{\pi r_{b}^{2}}{(2r_{t})^{2}} = \frac{\pi r_{b}^{2}}{4r_{t}^{2}}$
The probability of hitting the target (but not the bullseye) is:
$p_{t} = \frac{TargetArea}{TotalArea} - p_{b} = \frac{\pi r_{t}^{2}}{(2r_{t})^{2}} - \frac{\pi r_{b}^{2}}{(2r_{t})^{2}} = \frac{\pi }{4} - \frac{\pi r_{b}^{2}}{4r_{t}^{2}}$
In the equation above, we must subtract out the area of the bullseye because we want to find the probability that the target is hit but the bullseye is not hit.
Given that the score for not hitting the target is zero, we won't need to worry about it.
Let's put this all together. If we multiply the probability to hit each area by the score we receive when we hit it, we'll get the expected score per dart throw. In equation form:
$e_{d} = 0p_{n} + 1p_{t} + 2p_{b} = 0 + \frac{\pi }{4} - \frac{\pi r_{b}^{2}}{4r_{t}^{2}} + \frac{2 \pi r_{b}^{2}}{4r_{t}^{2}} = \frac{\pi }{4} + \frac{\pi r_{b}^{2}}{4r_{t}^{2}}$
For the case where the radius of the target is 1 and the radius of the bullseye is 0.1, we get an expected score of:
$e_{d}(1, 0.1) = \frac{\pi }{4} + \frac{0.01 \pi}{4} \approx 0.793$
Given that we have a chance to miss the target entirely, and the bullseye is relatively small compared to the target, this result makes sense.
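As a quick sanity check, the closed-form result above can be reproduced numerically (a minimal sketch; the function name and parameters are my own):

```python
import math

def expected_score(r_t, r_b):
    # Uniform throws over the bounding square of side 2*r_t
    total_area = (2 * r_t) ** 2
    p_bullseye = math.pi * r_b ** 2 / total_area             # worth 2 points
    p_target = math.pi * r_t ** 2 / total_area - p_bullseye  # worth 1 point
    return 1 * p_target + 2 * p_bullseye

print(round(expected_score(1, 0.1), 3))  # 0.793
```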
Situation 2: Image-Based
Now, what if the shapes weren't drawn perfectly? Imagine that the target wasn't machined but was instead painted on a piece of paper. In this case, the bullseye and target will no longer be necessarily perfect circles, and the above approach would not be as simple. One potential approach is to take a picture of the target, and count the number of pixels of each color.
Let's take the image I drew above as an example. The Python library "pillow" can be used to parse the image into a numpy array:
End of explanation
"""
# Here we are using a trick: "162" is unique to the blue pixels,
# and "28" is unique to the red pixels, so we can search for it
# alone. Any of the pixels that are left from the total must be
# black, so we can count (length of the data - blue - red) pixels.
blue = len(data[np.where(data == 162)])
red = len(data[np.where(data == 28)])
black = len(im.getdata()) - blue - red
print("Black:{}, Blue:{}, Red:{}".format(black, blue, red))
"""
Explanation: We now have an array of pixels in the [Red, Green, Blue] space. The color codes for the image are:
Black = [0,0,0]
Blue = [0,162,232]
Red = [237,28,36]
Let's count up the number of pixels of each color:
End of explanation
"""
total = blue + red + black
# The probabilities are the count over the total pixels
p_t = float(blue) / float(total)
p_b = float(red) / float(total)
# To get the expected value, we use the same formula as before:
# e_d = (p_n)(0) + (p_t)(1) + (p_b)(2)
e_d = p_t + 2*p_b
print("Expected score per dart throw:{}".format(round(e_d,3)))
"""
Explanation: To get the probabilities, we can divide each of these pixels by the total number of pixels (39,601, since the image is 199 x 199 pixels). As above, the expected value is given by:
e_d = (p_n)(0) + (p_t)(1) + (p_b)(2)
End of explanation
"""
from scipy.stats import norm
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Problem constraints
r_b = 0.1
r_t = 1
mean_x = 0.05 * r_t
# Iterate over many possible sigma_x
results = {}
for sigma_x in np.linspace(0.1, 1, 100):
prob_hit = norm.cdf(r_b, loc=mean_x, scale=sigma_x) - \
norm.cdf(-r_b, loc=mean_x, scale=sigma_x)
results[sigma_x] = prob_hit
# Save the results to a DataFrame and plot them
df = pd.DataFrame.from_dict(results, orient="index")
f = df.plot();
f.set_ylabel("prob_hit")
f.set_xlabel("sigma_x");
plt.scatter(0.385, 0.2, c = 'r');
# Calculate sigma_y based on sigma_x
print('Sigma_y = {}'.format(round(0.385/0.75,3)))
"""
Explanation: The difference in the result compared to Situation 1 comes from the fact that the image was not drawn perfectly, as you can tell if you zoom in far enough.
Situation 3: Monte Carlo
Let's complicate the situation even further. Imagine that the dart thrower is not a beginner but instead has a higher chance of throwing darts near the bullseye, although their aim is not perfect. There are several different ways we could model this situation, but let's assume that instead of treating the probability distribution across the target as uniform, we model the x-axis and y-axis as independent Gaussian probability distributions. Since the pdf of the Gaussian distribution will go to zero at both positive and negative infinity, we can also remove the previous situation's assumption that the dart must fall within the bounds of the black square.
In addition, we will consider the following assumptions about the dart thrower:
They are better at aiming left-to-right than up-to-down
They tend to throw a little bit up and to the right of the bullseye
They are lined up with the bullseye in the X direction about 20% of the time
The first item will affect the standard deviation of the Gaussian distributions- we want the standard deviation of the X axis distribution to be less than the standard deviation of the Y axis distribution. The second item above requires us to adjust the mean of the Gaussian distributions in the positive direction, for both the x and y axis. Both of these items are rather vague, and we will need to choose some appropriate values. The third item is the most difficult, as it requires us to select appropriate standard deviations such that the probability of hitting the bullseye (in the X direction alone) is 20%.
We will also add the same constraints as in Situation 1: the radius of the target is 1, and the radius of the bullseye is 0.1.
"Better at Aiming Left-to-Right"
"Better at aiming" implies more consistency in the results of the dart throw when thrown left-to-right, which means decreasing the standard deviation of the Gaussian distribution of the X axis compared to the Y axis. Given that this statement is rather vague, let's assume that the standard deviation in the X direction is 0.75 times the standard deviation in the Y direction:
sigma_x = (0.75)(sigma_y)
"Tends to Throw Up and to the Right"
"Tends to throw" implies an area of the target that the dart thrower hits most often, which means that we need to shift the mean of the Gaussian distributions up and to the right. For both the X and Y axis, this will be in the positive direction.
The mean needs to be shifted compared to the center of the target, which is our reference frame. Hence, let's place the center of the bullseye in the center of our coordinate system, which will somewhat simplify the subsequent math.
Again, the statement is vague, so to meet the assumption, let's adjust the mean in both directions by about 5% of the radius of the target:
mean_x = (0.05)(r_t) = 0.05
mean_y = (0.05)(r_t) = 0.05
"Tends to Hit (X direction) 20% of the Time"
"Tends to hit the bullseye in the X direction about 20% of the time" implies something about the results of the X Gaussian probability distribution. We want to select a standard deviation such that a dart will fall between [-r_b, r_b] in the X direction 20% of the time. r_b, remember, is the radius of the bullseye.
We can frame this using the CDF of the Gaussian distribution. Since the CDF gives the probability that a value is at most x, we can subtract the CDF at the low end of the bullseye (-r_b) from the CDF at the high end (r_b).
0.2 = prob_hit = CDF_x(r_b) - CDF_x(-r_b)
We have a fixed mean_x (from "tends to throw up and to the right"), so the only two variables are sigma_x and prob_hit. For ease of explanation, I'll employ the "visual solution" method by plotting sigma_x vs. prob_hit, and select a sigma_x such that the prob_hit is approximately 20%.
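Instead of reading sigma_x off a plot, the same equation can be solved directly with a root finder (a sketch; the bracket values passed to brentq are my own choice):

```python
from scipy.optimize import brentq
from scipy.stats import norm

r_b, mean_x = 0.1, 0.05

# prob_hit(sigma) - 0.2 crosses zero at the sigma_x we want
def f(sigma):
    return (norm.cdf(r_b, loc=mean_x, scale=sigma)
            - norm.cdf(-r_b, loc=mean_x, scale=sigma)) - 0.2

sigma_x = brentq(f, 0.05, 2.0)
print(round(sigma_x, 3))  # close to the ~0.385 read off the plot
```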
End of explanation
"""
import random
import numpy as np
# User-defined parameters
n_hist = 1000000
# Radius of the target and bullseye
r_t = 1
r_b = 0.1
# Parameters of the random distributions
mean_x = 0.05
mean_y = 0.05
sigma_x = 0.385
sigma_y = 0.513
# Score for hitting the bullseye, target, and nothing
b_score = 2
t_score = 1
n_score = 0
# Monte Carlo loop
total_score = 0
for n in range(0,n_hist):
x = random.normalvariate(mean_x, sigma_x)
y = random.normalvariate(mean_y, sigma_y)
check = np.sqrt(x**2 + y**2)
# If it hits the bullseye
if check < r_b:
total_score += b_score
# If it hits the target but not the bullseye
elif check < r_t:
total_score += t_score
# If it hits neither the target nor bullseye
else:
total_score += n_score
# Find the mean of the Monte Carlo simulation
hist_mean = total_score / n_hist
print('Expected score per dart throw: {}'.format(hist_mean))
"""
Explanation: Based on the above figure, I'll select sigma_x equal to 0.385, since it is the point on the line where prob_hit is 0.2. To find sigma_y, we can introduce the constraint from earlier that:
sigma_x = (0.75)(sigma_y)
Which makes sigma_y equal to 0.513 (0.385 divided by 0.75), which means that we now have all the required parameters for each Gaussian distribution:
X ~ N(0.05, 0.385)
Y ~ N(0.05, 0.513)
Monte Carlo Simulation
Monte Carlo methods use repeated random simulation (sampling) of a distribution to achieve an approximate result. In layman's terms, we're going to randomly pull numbers from each Gaussian distribution, score them based on whether they hit the bullseye or target, repeat this process many times, and then we'll look at the average of the results.
The following code will implement a Monte Carlo solution using 1 million histories (note that using numpy arrays would have been faster than the for loop, but I feel that putting the code in this form is clearer). It will print out the mean score per dart throw across all 1 million histories.
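For reference, the vectorized numpy version mentioned above might look like this (a sketch; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_hist = 1_000_000
x = rng.normal(0.05, 0.385, n_hist)   # X ~ N(0.05, 0.385)
y = rng.normal(0.05, 0.513, n_hist)   # Y ~ N(0.05, 0.513)
r = np.hypot(x, y)                    # distance from bullseye center
scores = np.where(r < 0.1, 2, np.where(r < 1.0, 1, 0))
print(scores.mean())
```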
End of explanation
"""
|
IST256/learn-python | content/lessons/03-Conditionals/Slides.ipynb | mit | if boolean-expression:
statements-when-true
else:
statemrnts-when-false
"""
Explanation: IST256 Lesson 03
Conditionals
Zybook Ch3
P4E Ch3
Links
Participation: https://poll.ist256.com
Zoom Chat!!!
Agenda
Homework 02 Solution
Non-Linear Code Execution
Relational and Logical Operators
Different types of non-linear execution.
Run-Time error handling
Connect Activity
A Boolean value is a/an ______?
A. True or False value
B. Zero-based value
C. Non-Negative value
D. Alphanumeric value
Vote Now: https://poll.ist256.com
What is a Boolean Expression?
A Boolean expression evaluates to a Boolean value of <font color='red'> True </font> or <font color='green'> False </font>.
Boolean expressions ask questions.
GPA > 3.2 <span>→</span> Is GPA greater than 3.2?
The result of which is <font color='red'> True </font> or <font color='green'> False </font> based on the evaluation of the expression:
GPA = 4.0 <span>→</span> GPA > 3.2 <span>→</span> <font color='red'> True </font>
GPA = 2.0 <span>→</span> GPA > 3.2 <span>→</span> <font color='green'> False </font>
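In code, the same question can be asked directly (a minimal sketch):

```python
gpa = 4.0
print(gpa > 3.2)  # True

gpa = 2.0
print(gpa > 3.2)  # False
```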
Program Flow Control with IF
The IF statement is used to branch your code based on a Boolean expression.
End of explanation
"""
x = 15
y = 20
z = 2
x > y
z*x <= y
y >= x-z
z*10 == x
"""
Explanation: Python’s Relational Operators
<table style="font-size:1.2em;">
<thead><tr>
<th>Operator</th>
<th>What it does</th>
<th>Examples</th>
</tr></thead>
<tbody>
<tr>
<td><code> > </code></td>
<td> Greater than </td>
<td> 4>2 (True)</td>
</tr>
<tr>
<td><code> < </code></td>
<td> Less than </td>
<td> 4<2 (False)</td>
</tr>
<tr>
<td><code> == </code></td>
<td> Equal To </td>
<td> 4==2 (False)</td>
</tr>
<tr>
<td><code> != </code></td>
<td> Not Equal To </td>
<td> 4!=2 (True)</td>
</tr>
<tr>
<td><code> >= </code></td>
<td> Greater Than or Equal To </td>
<td> 4>=2 (True)</td>
<tr>
<td><code> <= </code></td>
<td> Less Than or Equal To </td>
<td> 4<=2 (True)</td>
</tr>
</tbody>
</table>
Expressions consisting of relational operators evaluate to a Boolean value
Watch Me Code 1!
```
Do you need more milk?
When the Fudge family has less than 1 gallon of milk,
we need more!
```
Check Yourself: Relational Operators
On which line number is the Boolean expression True?
End of explanation
"""
raining = False
snowing = True
age = 45
age < 18 and raining
age >= 18 and not snowing
not snowing or not raining
age == 45 and not snowing
"""
Explanation: A. 4
B. 5
C. 6
D. 7
Vote Now: https://poll.ist256.com
Python’s Logical Operators
<table style="font-size:1.2em;">
<thead><tr>
<th>Operator</th>
<th>What it does</th>
<th>Examples</th>
</tr></thead>
<tbody>
<tr>
<td><code> and </code></td>
<td> True only when both are True </td>
<td> 4>2 and 4<5 (True)</td>
</tr>
<tr>
<td><code> or </code></td>
<td> False only when both are False </td>
<td> 4<2 or 4==4 (True)</td>
</tr>
<tr>
<td><code> not </code></td>
<td> Negation(Opposite) </td>
<td> not 4==2 (True)</td>
</tr>
<tr>
<td><code> in </code></td>
<td> Set operator </td>
<td> 4 in [2,4,7] (True)</td>
</tr>
</tbody>
</table>
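A few of the table's examples, evaluated directly (a minimal sketch):

```python
print(4 > 2 and 4 < 5)  # True  (both sides True)
print(4 < 2 or 4 == 4)  # True  (right side True)
print(not 4 == 2)       # True  (negation of False)
print(4 in [2, 4, 7])   # True  (4 is in the list)
```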
Check Yourself: Logical Operators
On which line number is the Boolean expression True?
End of explanation
"""
if boolean-expression1:
statements-when-exp1-true
elif boolean-expression2:
statements-when-exp2-true
elif boolean-expression3:
statements-when-exp3-true
else:
statements-none-are-true
"""
Explanation: A. 4
B. 5
C. 6
D. 7
Vote Now: https://poll.ist256.com
Multiple Decisions: IF ladder
Use elif to make more than one decision in your if statement. Only one code block within the ladder is executed.
End of explanation
"""
x = int(input("enter an integer"))
# one single statement. only one block executes
if x>10:
print("A:bigger than 10")
elif x>20:
print("A:bigger than 20")
# Independent if's, each True Boolean executes a block
if x>10:
print("B:bigger than 10")
if x>20:
print("B:bigger than 20")
"""
Explanation: elif versus a series of if statements
End of explanation
"""
if x > 20:
if y == 4:
print("One")
elif y > 4:
print("Two")
else:
print("Three")
else:
print("Four")
"""
Explanation: Check Yourself: IF Statement
Assuming values x = 77 and y = 2 what value is printed?
End of explanation
"""
try:
statements-which
might-throw-an-error
except errorType1:
code-when-Type1-happens
except errorType2:
code-when-Type2-happens
finally:
code-happens-regardless
"""
Explanation: A. One
B. Two
C. Three
D. Four
Vote Now: https://poll.ist256.com
End-To-End Example, Part 1:
Tax Calculations!
The country of “Fudgebonia” determines your tax rate from the number of dependents:
0 <span>→</span> 30%
1 <span>→</span> 25%
2 <span>→</span> 18%
3 or more <span>→</span> 10%
Write a program to prompt for number of dependents (0-3) and annual income.
It should then calculate your tax rate and tax bill. Format numbers properly!
Handle Bad Input with Exceptions
Exceptions represent a class of errors which occur at run-time.
We’ve seen these before: when we run a program and it crashes due to bad input, we get a TypeError or ValueError.
Python provides a mechanism, try...except, to catch these errors at run-time and prevent your program from crashing.
Exceptions are <i>exceptional</i>. They should ONLY be used to handle unforeseen errors in program input.
Try…Except…Finally
The Try... Except statement is used to handle exceptions. Remember that exceptions catch run-time errors!
End of explanation
"""
try:
x = float(input("Enter a number: "))
if x > 0:
y = "a"
else:
y = "b"
except ValueError:
y = "c"
print(y)
"""
Explanation: Watch Me Code 2
The need for exception handling:
- Bad input
- try except finally
- Good practice of catching the specific error
Check Yourself: Conditionals Try/Except
What prints on line 9 when you input the value '-45s'?
End of explanation
"""
|
Cyb3rWard0g/ThreatHunter-Playbook | docs/notebooks/campaigns/apt29Evals.ipynb | gpl-3.0 | # Importing Libraries
from bokeh.io import show
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource, LabelSet, HoverTool
from bokeh.transform import dodge
import pandas as pd
# You need to run this code at the beginning in order to show visualization using Jupyter Notebooks
from bokeh.io import output_notebook
output_notebook()
apt29= pd.read_json('https://raw.githubusercontent.com/OTRF/ThreatHunter-Playbook/master/docs/evals/apt29/data/otr_results.json')
summary = (
apt29
.groupby(['step','stepname']).agg(total=pd.NamedAgg(column="substep", aggfunc="nunique"))
.join(
apt29[apt29['detectiontype'] == 'Telemetry']
.groupby(['step','stepname']).agg(telemetry=pd.NamedAgg(column="vendor", aggfunc="count"))
)
).reset_index()
summary['percentage'] = (summary['telemetry'] / summary['total']).map("{:.0%}".format)
# Get Total Average Telemetry coverage
total_avg_percentage = '{0:.0f}'.format((summary['telemetry'].sum() / summary['total'].sum() * 100))
# Lists of values to create ColumnDataSource
stepname = summary['stepname'].tolist()
total = summary['total'].tolist()
telemetry = summary['telemetry'].tolist()
percentage = summary['percentage'].tolist()
# Creating ColumnDataSource object: source of data for visualization
source = ColumnDataSource(data={'stepname':stepname,'sub-Steps':total,'covered':telemetry,'percentage':percentage})
# Defining HoverTool object (Display info with Mouse): It is applied to chart named 'needHover'
hover_tool = HoverTool(names = ['needHover'],tooltips = [("Covered", "@covered"),("Percentage", "@percentage")])
# Creating Figure
p = figure(x_range=stepname,y_range=(0,23),plot_height=550,plot_width=600,toolbar_location='right',tools=[hover_tool])
# Creating Vertical Bar Charts
p.vbar(x=dodge('stepname',0.0,range=p.x_range),top='sub-Steps',width=0.7,source=source,color="#c9d9d3",legend_label="Total")
p.vbar(x=dodge('stepname',0.0, range=p.x_range),top='covered',width=0.7,source=source,color="#718dbf",legend_label="Covered", name = 'needHover')
# Adding Legend
p.legend.location = "top_right"
p.legend.orientation = "vertical"
p.legend.border_line_width = 3
p.legend.border_line_color = "black"
p.legend.border_line_alpha = 0.3
# Adding Title
p.title.text = 'Telemetry Detection Category (Average Coverage: {}%)'.format(total_avg_percentage)
p.title.align = 'center'
p.title.text_font_size = '12pt'
# Adding Axis Labels
p.xaxis.axis_label = 'Emulation Steps'
p.xaxis.major_label_orientation = 45
p.yaxis.axis_label = 'Count of Sub-Steps'
# Adding Data Label: Only for total of sub-steps
total_label = LabelSet(x='stepname',y='sub-Steps',text='sub-Steps',text_align='center',level='glyph',source= source)
p.add_layout(total_label)
#Showing visualization
show(p)
"""
Explanation: Free Telemetry Notebook
| | |
|:--------------|:---|
| Group | APT29 |
| Description | APT29 is a threat group that has been attributed to the Russian government and has operated since at least 2008. This group reportedly compromised the Democratic National Committee starting in the summer of 2015 |
| Author | Open Threat Research - APT29 Detection Hackathon |
Telemetry Detection Category
End of explanation
"""
from pyspark.sql import SparkSession
"""
Explanation: Import Libraries
End of explanation
"""
spark = SparkSession.builder.getOrCreate()
spark.conf.set("spark.sql.caseSensitive", "true")
"""
Explanation: Start Spark Session
End of explanation
"""
!wget https://github.com/OTRF/mordor/raw/master/datasets/large/apt29/day1/apt29_evals_day1_manual.zip
!unzip apt29_evals_day1_manual.zip
"""
Explanation: Decompress Dataset
End of explanation
"""
df_day1_host = spark.read.json('apt29_evals_day1_manual_2020-05-01225525.json')
"""
Explanation: Import Datasets
End of explanation
"""
df_day1_host.createTempView('apt29Host')
"""
Explanation: Create Temporary SQL View
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(ParentImage) LIKE "%explorer.exe"
AND LOWER(Image) LIKE "%3aka3%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Adversary - Detection Steps
1.A.1. User Execution
Procedure: User Pam executed payload rcs.3aka3.doc
Criteria: The rcs.3aka3.doc process spawning from explorer.exe
Detection Type:Telemetry(None)
Query ID:204B00B6-A92B-4EF7-8510-4FB237703147
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(ParentProcessName) LIKE "%explorer.exe"
AND LOWER(NewProcessName) LIKE "%3aka3%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:52540C1E-DD76-41B2-93ED-CFBA2B94ECF7
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 13
AND TargetObject RLIKE '.*\\\\\\\\AppCompatFlags\\\\\\\\Compatibility Assistant\\\\\\\\Store\\\\\\\\.*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Detection Type:General(None)
Query ID:DFD6A782-9BDB-4550-AB6B-525E825B095E
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) RLIKE '.*[\\u200e\\u200f\\u202a-\\u202e].*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 1.A.2. Masquerading
Procedure: Used unicode right-to-left override (RTLO) character to obfuscate file name rcs.3aka3.doc (originally cod.3aka.scr)
Criteria: Evidence of the right-to-left override character (U+202E) in the rcs.3aka.doc process OR the original filename (cod.3aka.scr)
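To make the criteria concrete, the bidirectional-override code points can be checked for directly in Python (a sketch; the sample filename is hypothetical):

```python
# Unicode bidi control characters used for filename spoofing
BIDI = {"\u200e", "\u200f"} | {chr(c) for c in range(0x202A, 0x202F)}

name = "cod.3aka\u202escr.doc"  # hypothetical RTLO-spoofed name
print(any(ch in BIDI for ch in name))  # True
```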
Detection Type:Telemetry(None)
Query ID:F4C71BF4-E068-493D-ABAA-0C5DFA02875D
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) RLIKE '.*[\\u200e\\u200f\\u202a-\\u202e].*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:D94222A0-72F9-4F1E-84A9-F14CA1098D44
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 3
AND LOWER(Image) RLIKE '.*[\\u200e\\u200f\\u202a-\\u202e].*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 1.A.3. Uncommonly Used Port
Procedure: Established C2 channel (192.168.0.5) via rcs.3aka3.doc payload over TCP port 1234
Criteria: Established network channel over port 1234
Detection Type:Telemetry(None)
Query ID:B53A710B-43AB-4B57-BD92-4E787D494978
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 5156
AND LOWER(Application) RLIKE '.*[\\u200e\\u200f\\u202a-\\u202e].*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:1BAC5645-83CD-4D6F-A4F8-659084401F47
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 7
AND Image LIKE "%3aka3%"
AND LOWER(ImageLoaded) LIKE '%bcrypt.dll'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 1.A.4. Standard Cryptographic Protocol
Procedure: Used RC4 stream cipher to encrypt C2 (192.168.0.5) traffic
Criteria: Evidence that the network data sent over the C2 channel is encrypted
Detection Type:None(None)
Query ID:E12B701E-1222-413C-BCAF-F357CB769B3E
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(ParentImage) RLIKE '.*[\\u200e\\u200f\\u202a-\\u202e].*'
AND LOWER(Image) LIKE "%cmd.exe"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 1.B.1. Command-Line Interface
Procedure: Spawned interactive cmd.exe
Criteria: cmd.exe spawning from the rcs.3aka3.doc process
Detection Type:Telemetry(Correlated)
Query ID:4799C203-573A-49CB-ACE4-8C4C5CD3862A
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(ParentProcessName) RLIKE '.*[\\u200e\\u200f\\u202a-\\u202e].*'
AND LOWER(NewProcessName) LIKE "%cmd.exe"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:C8D664CD-48EE-4663-AE49-D5B0B19014C7
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(ParentImage) RLIKE '.*[\\u200e\\u200f\\u202a-\\u202e].*'
AND LOWER(Image) LIKE '%cmd.exe'
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE '%powershell.exe'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 1.B.2. PowerShell
Procedure: Spawned interactive powershell.exe
Criteria: powershell.exe spawning from cmd.exe
Detection Type:Telemetry(Correlated)
Query ID:C1DBF5F2-21D5-45E4-8D9A-44905F1F8242
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(ParentProcessName) RLIKE '.*[\\u200e\\u200f\\u202a-\\u202e].*'
AND LOWER(NewProcessName) LIKE '%cmd.exe'
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE '%powershell.exe'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:43B46661-3407-4302-BA8C-EE772C677DCB
End of explanation
"""
df = spark.sql(
'''
SELECT b.ScriptBlockText
FROM apt29Host a
INNER JOIN (
SELECT d.ParentProcessGuid, d.ProcessId, c.ScriptBlockText
FROM apt29Host c
INNER JOIN (
SELECT ParentProcessGuid, ProcessGuid, ProcessId
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
) d
ON c.ExecutionProcessID = d.ProcessId
WHERE c.Channel = "Microsoft-Windows-PowerShell/Operational"
AND c.EventID = 4104
AND LOWER(c.ScriptBlockText) LIKE "%childitem%"
) b
ON a.ProcessGuid = b.ParentProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND LOWER(a.ParentImage) RLIKE '.*[\\u200e\\u200f\\u202a-\\u202e].*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 2.A.1. File and Directory Discovery
Procedure: Searched filesystem for document and media files using PowerShell
Criteria: powershell.exe executing (Get-)ChildItem
Detection Type:Telemetry(Correlated)
Query ID:10C87900-CC2F-4EE1-A2F2-1832A761B050
End of explanation
"""
df = spark.sql(
'''
SELECT b.ScriptBlockText
FROM apt29Host a
INNER JOIN (
SELECT d.NewProcessId, d.ProcessId, c.ScriptBlockText
FROM apt29Host c
INNER JOIN (
SELECT split(NewProcessId, '0x')[1] as NewProcessId, ProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
) d
ON hex(c.ExecutionProcessID) = d.NewProcessId
WHERE c.Channel = "Microsoft-Windows-PowerShell/Operational"
AND c.EventID = 4104
AND LOWER(c.ScriptBlockText) LIKE "%childitem%"
) b
ON a.NewProcessId = b.ProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND LOWER(a.ParentProcessName) RLIKE '.*[\\u200e\\u200f\\u202a-\\u202e].*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:26F6963D-00D5-466A-B4BA-59DA30892B26
End of explanation
"""
df = spark.sql(
'''
SELECT b.ScriptBlockText
FROM apt29Host a
INNER JOIN (
SELECT d.ParentProcessGuid, d.ProcessId, c.ScriptBlockText
FROM apt29Host c
INNER JOIN (
SELECT ParentProcessGuid, ProcessGuid, ProcessId
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
) d
ON c.ExecutionProcessID = d.ProcessId
WHERE c.Channel = "Microsoft-Windows-PowerShell/Operational"
AND c.EventID = 4104
AND LOWER(c.ScriptBlockText) LIKE "%childitem%"
) b
ON a.ProcessGuid = b.ParentProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND LOWER(a.ParentImage) RLIKE '.*[\\u200e\\u200f\\u202a-\\u202e].*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 2.A.2. Automated Collection
Procedure: Scripted search of filesystem for document and media files using PowerShell
Criteria: powershell.exe executing (Get-)ChildItem
Detection Type:Telemetry(Correlated)
Query ID:F96EA21C-1EB4-4988-8F98-BD018717EE2D
End of explanation
"""
df = spark.sql(
'''
SELECT b.ScriptBlockText
FROM apt29Host a
INNER JOIN (
SELECT d.NewProcessId, d.ProcessId, c.ScriptBlockText
FROM apt29Host c
INNER JOIN (
SELECT split(NewProcessId, '0x')[1] as NewProcessId, ProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
) d
ON hex(c.ExecutionProcessID) = d.NewProcessId
WHERE c.Channel = "Microsoft-Windows-PowerShell/Operational"
AND c.EventID = 4104
AND LOWER(c.ScriptBlockText) LIKE "%childitem%"
) b
ON a.NewProcessId = b.ProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND LOWER(a.ParentProcessName) RLIKE '.*[\\u200e\\u200f\\u202a-\\u202e].*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:EAD989D4-8886-46DC-BC8C-780C10760E93
End of explanation
"""
df = spark.sql(
'''
SELECT b.ScriptBlockText
FROM apt29Host a
INNER JOIN (
SELECT d.ParentProcessGuid, d.ProcessId, c.ScriptBlockText
FROM apt29Host c
INNER JOIN (
SELECT ParentProcessGuid, ProcessGuid, ProcessId
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
) d
ON c.ExecutionProcessID = d.ProcessId
WHERE c.Channel = "Microsoft-Windows-PowerShell/Operational"
AND c.EventID = 4104
AND LOWER(c.ScriptBlockText) LIKE "%compress-archive%"
) b
ON a.ProcessGuid = b.ParentProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND LOWER(a.ParentImage) RLIKE '.*[\\u200e\\u200f\\u202a-\\u202e].*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 2.A.3. Data from Local System
Procedure: Recursively collected files found in C:\Users\Pam\ using PowerShell
Criteria: powershell.exe reading files in C:\Users\Pam\
Detection Type:None(None)
2.A.4. Data Compressed
Procedure: Compressed and stored files into ZIP (Draft.zip) using PowerShell
Criteria: powershell.exe executing Compress-Archive
Detection Type:Telemetry(Correlated)
Query ID:6CDEBEBF-387F-4A40-A4E8-8D4DF3A8F897
End of explanation
"""
df = spark.sql(
'''
SELECT b.ScriptBlockText
FROM apt29Host a
INNER JOIN (
SELECT d.NewProcessId, d.ProcessId, c.ScriptBlockText
FROM apt29Host c
INNER JOIN (
SELECT split(NewProcessId, '0x')[1] as NewProcessId, ProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
) d
ON hex(c.ExecutionProcessID) = d.NewProcessId
WHERE c.Channel = "Microsoft-Windows-PowerShell/Operational"
AND c.EventID = 4104
AND LOWER(c.ScriptBlockText) LIKE "%compress-archive%"
) b
ON a.NewProcessId = b.ProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND LOWER(a.ParentProcessName) RLIKE '.*[\\u200e\\u200f\\u202a-\\u202e].*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:621F8EE7-E9D8-417C-9FE5-5A0D89C3736A
End of explanation
"""
df = spark.sql(
'''
SELECT TargetFilename
FROM apt29Host a
INNER JOIN (
SELECT d.ProcessGuid, d.ProcessId
FROM apt29Host c
INNER JOIN (
SELECT ProcessGuid, ProcessId
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
) d
ON c.ExecutionProcessID = d.ProcessId
WHERE c.Channel = "Microsoft-Windows-PowerShell/Operational"
AND c.EventID = 4104
AND LOWER(c.ScriptBlockText) LIKE "%compress-archive%"
) b
ON a.ProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 11
AND LOWER(a.TargetFilename) LIKE "%zip"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 2.A.5. Data Staged
Procedure: Staged files for exfiltration into ZIP (Draft.zip) using PowerShell
Criteria: powershell.exe creating the file draft.zip
Detection Type:Telemetry(Correlated)
Query ID:76154CEC-1E01-4D3A-B9ED-C78978597C2B
End of explanation
"""
df = spark.sql(
'''
SELECT b.Message
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid, Message
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 11
AND LOWER(TargetFilename) LIKE '%monkey.png'
) b
ON a.ProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND LOWER(a.Image) RLIKE '.*[\\u200e\\u200f\\u202a-\\u202e].*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 2.B.1. Exfiltration Over Command and Control Channel
Procedure: Read and downloaded ZIP (Draft.zip) over C2 channel (192.168.0.5 over TCP port 1234)
Criteria: The rcs.3aka3.doc process reading the file draft.zip while connected to the C2 channel
Detection Type:None(None)
3.A.1. Remote File Copy
Procedure: Dropped stage 2 payload (monkey.png) to disk
Criteria: The rcs.3aka3.doc process creating the file monkey.png
Detection Type:Telemetry(Correlated)
Query ID:64249901-ADF8-4E5D-8BB4-70540A45E26C
End of explanation
"""
df = spark.sql(
'''
SELECT d.Image, d.CommandLine, c.ScriptBlockText
FROM apt29Host c
INNER JOIN (
SELECT ParentProcessGuid, ProcessGuid, ProcessId, ParentImage, Image, ParentCommandLine, CommandLine
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
) d
ON c.ExecutionProcessID = d.ProcessId
WHERE c.Channel = "Microsoft-Windows-PowerShell/Operational"
AND c.EventID = 4104
AND LOWER(c.ScriptBlockText) LIKE "%monkey.png%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 3.A.2. Obfuscated Files or Information
Procedure: Embedded PowerShell payload in monkey.png using steganography
Criteria: Evidence that a PowerShell payload was within monkey.png
Detection Type:Telemetry(None)
Query ID:0F10E1D1-EDF8-4B9F-B879-3651598D528A
End of explanation
"""
df = spark.sql(
'''
SELECT d.NewProcessName, d.CommandLine, c.ScriptBlockText
FROM apt29Host c
INNER JOIN (
SELECT NewProcessName, CommandLine, split(NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
) d
ON LOWER(hex(c.ExecutionProcessID)) = d.NewProcessId
WHERE c.Channel = "Microsoft-Windows-PowerShell/Operational"
AND c.EventID = 4104
AND LOWER(c.ScriptBlockText) LIKE "%monkey.png%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:94F9B4F2-1C52-4A47-BF47-C786513A05AA
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 13
AND LOWER(TargetObject) RLIKE '.*\\\\\\\\folder\\\\\\\\shell\\\\\\\\open\\\\\\\\command\\\\\\\\delegateexecute.*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 3.B.1. Component Object Model Hijacking
Procedure: Modified the Registry to enable COM hijacking of sdclt.exe using PowerShell
Criteria: Addition of the DelegateExecute subkey in HKCU\Software\Classes\Folder\shell\open\command
Detection Type:Telemetry(None)
Query ID:04EB334D-A304-40D9-B177-0BB6E95FC23E
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(ParentImage) LIKE "%sdclt.exe%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 3.B.2. Bypass User Account Control
Procedure: Executed elevated PowerShell payload
Criteria: High integrity powershell.exe spawning from control.exe (spawned from sdclt.exe)
Detection Type:Technique(None)
Query ID:7a4a8c7e-4238-4db3-a90d-34e9f3c6e60f
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:d52fe669-55da-49e1-a76b-89297c66fa02
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%sdclt.exe"
AND IntegrityLevel = "High"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Detection Type:Telemetry(None)
Query ID:F7E315BA-6A66-44D8-ABB3-3FBB4AA8F80A
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:6C8780E9-E6AF-4210-8EA0-72E9017CEE7D
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%sdclt.exe"
AND MandatoryLabel = "S-1-16-12288"
AND TokenElevationType = "%%1937"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:C36B49B5-DF58-4A34-9FE9-56189B9DEFEA
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:EE34D18C-0549-4AFB-8B98-01160B0C9094
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ProcessGuid = c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 3
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 3.B.3. Commonly Used Port
Procedure: Established C2 channel (192.168.0.5) via PowerShell payload over TCP port 443
Criteria: Established network channel over port 443
Detection Type:Telemetry(Correlated)
Query ID:E209D0C5-5A2B-4AEC-92B0-1510165B8EC7
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host d
INNER JOIN (
SELECT split(a.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON LOWER(hex(CAST(d.ProcessId as INT))) = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 5156
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:2E9B9ADC-2426-419F-8E6E-2D9338384F80
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host d
INNER JOIN (
SELECT b.ProcessGuid
FROM apt29Host b
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(ParentImage) RLIKE '.*[\\u200e\\u200f\\u202a-\\u202e].*'
) a
ON b.ParentProcessGuid = a.ProcessGuid
WHERE b.Channel = "Microsoft-Windows-Sysmon/Operational"
AND b.EventID = 1
) c
ON d.ProcessGuid = c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 12
AND LOWER(d.TargetObject) RLIKE '.*\\\\\\\\folder\\\\\\\\shell\\\\\\\\open\\\\\\\\command.*'
AND d.Message RLIKE '.*EventType: DeleteKey.*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 3.B.4. Standard Application Layer Protocol
Procedure: Used HTTPS to transport C2 (192.168.0.5) traffic
Criteria: Evidence that the network data sent over the C2 channel is HTTPS
Detection Type:None(None)
3.B.5. Standard Cryptographic Protocol
Procedure: Used HTTPS to encrypt C2 (192.168.0.5) traffic
Criteria: Evidence that the network data sent over the C2 channel is encrypted
Detection Type:None(None)
3.C.1. Modify Registry
Procedure: Modified the Registry to remove artifacts of COM hijacking
Criteria: Deletion of the HKCU\Software\Classes\Folder\shell\Open\command subkey
Detection Type:Telemetry(Correlated)
Query ID:22A46621-7A92-48C1-81BF-B3937EB4FDC3
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host d
INNER JOIN (
SELECT b.ProcessGuid
FROM apt29Host b
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(ParentImage) RLIKE '.*[\\u200e\\u200f\\u202a-\\u202e].*'
) a
ON b.ParentProcessGuid = a.ProcessGuid
WHERE b.Channel = "Microsoft-Windows-Sysmon/Operational"
AND b.EventID = 1
) c
ON d.ProcessGuid = c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 11
AND LOWER(d.TargetFilename) LIKE '%.zip'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.A.1. Remote File Copy
Procedure: Dropped additional tools (SysinternalsSuite.zip) to disk over C2 channel (192.168.0.5)
Criteria: powershell.exe creating the file SysinternalsSuite.zip
Detection Type:Telemetry(Correlated)
Query ID:337EA65D-55A7-4890-BB2A-6A08BB9703E2
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.A.2. PowerShell
Procedure: Spawned interactive powershell.exe
Criteria: powershell.exe spawning from powershell.exe
Detection Type:Telemetry(Correlated)
Query ID:B86F90BD-716C-4432-AE97-901174F111A8
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:FA520225-1813-4EF2-BA58-98CB59C897D7
End of explanation
"""
df = spark.sql(
'''
SELECT Payload
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4103
AND LOWER(f.Payload) LIKE "%expand-archive%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.A.3. Deobfuscate/Decode Files or Information
Procedure: Decompressed ZIP (SysinternalsSuite.zip) file using PowerShell
Criteria: powershell.exe executing Expand-Archive
Detection Type:Telemetry(Correlated)
Query ID:66B068A4-C3AB-4973-AE07-2C15AFF78104
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%expand-archive%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:09F29912-8E93-461E-9E89-3F06F6763383
End of explanation
"""
df = spark.sql(
'''
SELECT Payload
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4103
AND LOWER(f.Payload) LIKE "%expand-archive%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:B5F24262-9373-43A4-A83F-0DBB708BD2C0
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%expand-archive%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:4310F2AF-11EF-4EAC-A968-3436FE5F6140
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%get-process%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.B.1. Process Discovery
Procedure: Enumerated current running processes using PowerShell
Criteria: powershell.exe executing Get-Process
Detection Type:Telemetry(Correlated)
Query ID:CE6D61C3-C3B5-43D2-BD3C-4C1711A822DA
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%get-process%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:294DFB34-1FA8-464D-B85C-F2AE163DB4A9
End of explanation
"""
df = spark.sql(
'''
SELECT f.ProcessGuid
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId, d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE f.Channel = "Microsoft-Windows-Sysmon/Operational"
AND f.EventID = 1
AND LOWER(f.Image) LIKE '%sdelete%'
AND LOWER(f.CommandLine) LIKE '%3aka3%'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.B.2. File Deletion
Procedure: Deleted rcs.3aka3.doc on disk using SDelete
Criteria: sdelete64.exe deleting the file rcs.3aka3.doc
Detection Type:Telemetry(Correlated)
Query ID:5EED5350-0BFD-4501-8B2D-4CE4F8F9E948
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host h
INNER JOIN (
SELECT f.ProcessGuid
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId, d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE f.Channel = "Microsoft-Windows-Sysmon/Operational"
AND f.EventID = 1
AND LOWER(f.Image) LIKE '%sdelete%'
AND LOWER(f.CommandLine) LIKE '%3aka3%'
) g
ON h.ProcessGuid = g.ProcessGuid
WHERE h.Channel = "Microsoft-Windows-Sysmon/Operational"
AND h.EventID in (12,13)
AND LOWER(h.TargetObject) RLIKE '.*\\\\\\\\software\\\\\\\\sysinternals\\\\\\\\sdelete.*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:59A9AC92-124D-4C4B-A6BF-3121C98677C3
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON f.ProcessId = e.NewProcessId
WHERE LOWER(f.Channel) = "security"
AND f.EventID = 4688
AND LOWER(f.NewProcessName) LIKE '%sdelete%'
AND LOWER(f.CommandLine) LIKE '%3aka3%'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:3A1DC1C2-B640-4FCE-A71F-2F65AB060A8C
End of explanation
"""
df = spark.sql(
'''
SELECT Message, g.CommandLine
FROM apt29Host h
INNER JOIN (
SELECT f.ProcessGuid, f.CommandLine
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId, d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE f.Channel = "Microsoft-Windows-Sysmon/Operational"
AND f.EventID = 1
AND LOWER(f.Image) LIKE '%sdelete%'
AND LOWER(f.CommandLine) LIKE '%draft.zip%'
) g
ON h.ProcessGuid = g.ProcessGuid
WHERE h.Channel = "Microsoft-Windows-Sysmon/Operational"
AND h.EventID = 23
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.B.3. File Deletion
Procedure: Deleted Draft.zip on disk using SDelete
Criteria: sdelete64.exe deleting the file draft.zip
Detection Type:Telemetry(Correlated)
Query ID:02D0BBFB-4BDF-4167-B530-253779745EF7
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host h
INNER JOIN (
SELECT f.ProcessGuid
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId, d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE f.Channel = "Microsoft-Windows-Sysmon/Operational"
AND f.EventID = 1
AND LOWER(f.Image) LIKE '%sdelete%'
AND LOWER(f.CommandLine) LIKE '%draft.zip%'
) g
ON h.ProcessGuid = g.ProcessGuid
WHERE h.Channel = "Microsoft-Windows-Sysmon/Operational"
AND h.EventID in (12,13)
AND LOWER(h.TargetObject) RLIKE '.*\\\\\\\\software\\\\\\\\sysinternals\\\\\\\\sdelete.*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:719618E8-9EE7-4693-937E-1FD39228DEBC
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON f.ProcessId = e.NewProcessId
WHERE LOWER(f.Channel) = "security"
AND f.EventID = 4688
AND LOWER(f.NewProcessName) LIKE '%sdelete%'
AND LOWER(f.CommandLine) LIKE '%draft.zip%'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:5A19E46B-8328-4867-81CF-87518A3784B1
End of explanation
"""
df = spark.sql(
'''
SELECT Message, g.CommandLine
FROM apt29Host h
INNER JOIN (
SELECT f.ProcessGuid, f.CommandLine
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId, d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE f.Channel = "Microsoft-Windows-Sysmon/Operational"
AND f.EventID = 1
AND LOWER(f.Image) LIKE '%sdelete%'
AND LOWER(f.CommandLine) LIKE '%sysinternalssuite.zip%'
) g
ON h.ProcessGuid = g.ProcessGuid
WHERE h.Channel = "Microsoft-Windows-Sysmon/Operational"
AND h.EventID = 23
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.B.4. File Deletion
Procedure: Deleted SysinternalsSuite.zip on disk using SDelete
Criteria: sdelete64.exe deleting the file SysinternalsSuite.zip
Detection Type:Telemetry(Correlated)
Query ID:83D62033-105A-4A02-8B75-DAB52D8D51EC
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host h
INNER JOIN (
SELECT f.ProcessGuid
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId, d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE f.Channel = "Microsoft-Windows-Sysmon/Operational"
AND f.EventID = 1
AND LOWER(f.Image) LIKE '%sdelete%'
AND LOWER(f.CommandLine) LIKE '%sysinternalssuite.zip%'
) g
ON h.ProcessGuid = g.ProcessGuid
WHERE h.Channel = "Microsoft-Windows-Sysmon/Operational"
AND h.EventID in (12,13)
AND LOWER(h.TargetObject) RLIKE '.*\\\\\\\\software\\\\\\\\sysinternals\\\\\\\\sdelete.*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:AC2ECFF0-D817-4893-BDED-F16B837C4DBA
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON f.ProcessId = e.NewProcessId
WHERE LOWER(f.Channel) = "security"
AND f.EventID = 4688
AND LOWER(f.NewProcessName) LIKE '%sdelete%'
AND LOWER(f.CommandLine) LIKE '%sysinternalssuite.zip%'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:4D6DE690-E92C-4D60-93E6-8E5C7C4DF143
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%$env:temp%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.C.1. File and Directory Discovery
Procedure: Enumerated user's temporary directory path using PowerShell
Criteria: powershell.exe executing $env:TEMP
Detection Type:Telemetry(Correlated)
Query ID:85BFD73C-875E-4208-AD9E-1922D4D4D991
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%$env:temp%"
'''
)
df.show(100,truncate = False, vertical = True)
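The Security-log variant of the same correlation has to bridge two PID encodings: event 4688 stores NewProcessId as a hex string (e.g. `0x9f0`), while the PowerShell ETW header's ExecutionProcessID is decimal. The query strips the `0x` prefix on one side and applies `LOWER(hex(...))` on the other; the same normalization in Python, with a hypothetical PID:

```python
# Hypothetical values for one and the same process: Security 4688 vs. PowerShell 4104.
new_process_id = "0x9F0"   # hex string as logged by Security event 4688
execution_pid = 2544       # decimal ExecutionProcessID from the 4104 ETW header

# Mirror split(NewProcessId, '0x')[1] on the Security side ...
security_side = new_process_id.split("0x")[1].lower()
# ... and LOWER(hex(ExecutionProcessID)) on the PowerShell side.
powershell_side = format(execution_pid, "x")

assert security_side == powershell_side == "9f0"
```

Joining on the raw columns without this normalization silently matches nothing, since the string `0x9f0` never equals `2544`.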
"""
Explanation: Query ID:D18CF7B9-CBF0-40CE-9D07-12DC83AF3B2F
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%$env:username%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.C.2. System Owner/User Discovery
Procedure: Enumerated the current username using PowerShell
Criteria: powershell.exe executing $env:USERNAME
Detection Type:Telemetry(Correlated)
Query ID:A45F53ED-65CB-4739-A4D3-F2B0F08F86F8
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%$env:username%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:6F3D1615-69D6-41C6-90D0-39ACA14941BD
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%$env:computername%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.C.3. System Information Discovery
Procedure: Enumerated the computer hostname using PowerShell
Criteria: powershell.exe executing $env:COMPUTERNAME
Detection Type:Telemetry(Correlated)
Query ID:9B610803-2B27-4DA4-9AAC-C859F48510DA
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%$env:computername%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:1BA09833-CDF3-44BE-86D0-6F5B1C66D151
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%$env:userdomain%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.C.4. System Network Configuration Discovery
Procedure: Enumerated the current domain name using PowerShell
Criteria: powershell.exe executing $env:USERDOMAIN
Detection Type:Telemetry(Correlated)
Query ID:1418A09E-BC90-4BC5-A0BC-1ECC4283ACF4
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%$env:userdomain%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:8D215D46-CE33-4CB7-9934-FF9205971570
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%$pid%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.C.5. Process Discovery
Procedure: Enumerated the current process ID using PowerShell
Criteria: powershell.exe executing $PID
Detection Type:Telemetry(Correlated)
Query ID:2DBE08DB-BADD-40AD-A037-DEBD29E207C6
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%$pid%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:9CFC783B-2DC8-4A3D-AC7B-2DF890827E2E
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%gwmi win32_operatingsystem%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.C.6. System Information Discovery
Procedure: Enumerated the OS version using PowerShell
Criteria: powershell.exe executing Gwmi Win32_OperatingSystem
Detection Type:Telemetry(Correlated)
Query ID:5A2B7006-A887-465F-9D41-AED8F6AECBE1
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%gwmi win32_operatingsystem%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:69A3B3AC-42BE-44F6-A418-C2356894F745
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%-class antivirusproduct%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.C.7. Security Software Discovery
Procedure: Enumerated anti-virus software using PowerShell
Criteria: powershell.exe executing Get-WmiObject ... -Class AntiVirusProduct
Detection Type:Telemetry(Correlated)
Query ID:E1E0849D-1771-438B-9D8F-A67B7EC48B97
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%-class antivirusproduct%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:956D78C8-FCB5-440D-B059-6790F729D02D
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%-class firewallproduct%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.C.8. Security Software Discovery
Procedure: Enumerated firewall software using PowerShell
Criteria: powershell.exe executing Get-WmiObject ... -Class FireWallProduct
Detection Type:Telemetry(Correlated)
Query ID:9F924458-73AD-42C8-B98E-0CB4B4355B9B
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%-class firewallproduct%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:B7549913-AF53-4F9A-9C3F-4106578EA5F2
End of explanation
"""
df = spark.sql(
'''
SELECT a.EventTime, o.TargetUserName, o.IpAddress, a.Message
FROM apt29Host o
INNER JOIN (
SELECT Message, EventTime, SubjectLogonId
FROM apt29Host
WHERE lower(Channel) = "security"
AND EventID = 4661
AND ObjectType = "SAM_DOMAIN"
AND SubjectUserName NOT LIKE '%$'
AND AccessMask = '0x20094'
AND LOWER(Message) LIKE '%getlocalgroupmembership%'
) a
ON o.TargetLogonId = a.SubjectLogonId
    WHERE lower(o.Channel) = "security"
AND o.EventID = 4624
AND o.LogonType = 3
'''
)
df.show(100,truncate = False, vertical = True)
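Unlike the process-tree queries, this one pivots on the logon session: the 4661 SAM_DOMAIN handle request carries a SubjectLogonId, which is joined back to the TargetLogonId of the network logon (4624, LogonType 3) that opened the session, recovering the user and source IP. A sketch of that pivot over hypothetical records:

```python
# Hypothetical Security events; all field values are illustrative only.
logons = [
    {"EventID": 4624, "LogonType": 3, "TargetLogonId": "0x3E7A1",
     "TargetUserName": "jdoe", "IpAddress": "10.0.1.6"},
]
handle_requests = [
    {"EventID": 4661, "ObjectType": "SAM_DOMAIN",
     "SubjectLogonId": "0x3E7A1", "AccessMask": "0x20094"},
]

# Index network logons by the logon ID they created.
sessions = {l["TargetLogonId"]: l
            for l in logons if l["EventID"] == 4624 and l["LogonType"] == 3}

# Attribute each SAM_DOMAIN handle request to its originating logon session.
attributed = [
    {**h,
     "User": sessions[h["SubjectLogonId"]]["TargetUserName"],
     "SourceIp": sessions[h["SubjectLogonId"]]["IpAddress"]}
    for h in handle_requests if h["SubjectLogonId"] in sessions
]
```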
"""
Explanation: 4.C.9. Permission Groups Discovery
Procedure: Enumerated user's domain group membership via the NetUserGetGroups API
Criteria: powershell.exe executing the NetUserGetGroups API
Detection Type:Technique(Alert)
Query ID:FA458669-1C94-4150-AFFC-A3236FC6B275
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%netusergetgroups%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Detection Type:Telemetry(Correlated)
Query ID:11827B7C-8010-443C-9116-500289E0ED57
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%netusergetgroups%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:52E7DFEA-05BC-4B81-BFE9-DE6085FA8228
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ProcessGuid = e.ProcessGuid
WHERE f.Channel = "Microsoft-Windows-Sysmon/Operational"
AND f.EventID = 7
AND LOWER(f.ImageLoaded) LIKE "%netapi32.dll"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.C.10. Execution through API
Procedure: Executed API call by reflectively loading Netapi32.dll
Criteria: The NetUserGetGroups API function loaded into powershell.exe from Netapi32.dll
Detection Type:Telemetry(Correlated)
Query ID:0B50643F-98FA-4F4A-8E22-9257D85AD7C5
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%netusergetlocalgroups%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.C.11. Permission Groups Discovery
Procedure: Enumerated user's local group membership via the NetUserGetLocalGroups API
Criteria: powershell.exe executing the NetUserGetLocalGroups API
Detection Type:Telemetry(Correlated)
Query ID:1CD16ED8-C812-40B1-B968-F0DABFC79DDF
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%netusergetlocalgroups%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:F0AC46E2-63EA-4C8E-AF39-6631444451E5
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ProcessGuid = e.ProcessGuid
WHERE f.Channel = "Microsoft-Windows-Sysmon/Operational"
AND f.EventID = 7
AND LOWER(f.ImageLoaded) LIKE "%netapi32.dll"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 4.C.12. Execution through API
Procedure: Executed API call by reflectively loading Netapi32.dll
Criteria: The NetUserGetLocalGroups API function loaded into powershell.exe from Netapi32.dll
Detection Type:Telemetry(Correlated)
Query ID:53CEF026-66EF-4B26-B5C9-10D4BBA3F9E8
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID IN (12,13,14)
AND (LOWER(TargetObject) LIKE "%javamtsup%" OR LOWER(Details) LIKE "%javamtsup%")
'''
)
df.show(100,truncate = False, vertical = True)
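Service installation leaves registry artifacts under the service's key (typically HKLM\SYSTEM\CurrentControlSet\Services\<name>), so the query casts a wide net over Sysmon registry events (12 = key create/delete, 13 = value set, 14 = rename) and matches the service name in either the key path or the written value. The same predicate as a plain Python filter, over a hypothetical Sysmon 13 record:

```python
def javamtsup_registry_hit(event):
    """Sysmon registry events (12/13/14) mentioning the service name anywhere."""
    if event.get("EventID") not in (12, 13, 14):
        return False
    haystack = (event.get("TargetObject", "") + " " + event.get("Details", "")).lower()
    return "javamtsup" in haystack

# Hypothetical Sysmon 13 (value set) record for the new service's ImagePath.
sample = {
    "EventID": 13,
    "TargetObject": r"HKLM\System\CurrentControlSet\Services\javamtsup\ImagePath",
    "Details": r"C:\Windows\System32\javamtsup.exe",
}
assert javamtsup_registry_hit(sample)
```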
"""
Explanation: 5.A.1. New Service
Procedure: Created a new service (javamtsup) that executes a service binary (javamtsup.exe) at system startup
Criteria: powershell.exe creating the Javamtsup service
Detection Type:Telemetry(Correlated)
Query ID:A16CE10D-6EE3-4611-BE9B-B023F36E2DFF
End of explanation
"""
df = spark.sql(
'''
SELECT Payload
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId, d.ParentProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4103
AND LOWER(f.Payload) LIKE "%new-service%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:E76C4174-C24A-4CA3-9EA8-46C5286D3B6F
End of explanation
"""
df = spark.sql(
'''
SELECT Payload
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4103
AND LOWER(f.Payload) LIKE "%new-service%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:AA3EF640-2720-4E8A-B86D-DFCF2FDB86BD
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ProcessGuid = e.ProcessGuid
WHERE f.Channel = "Microsoft-Windows-Sysmon/Operational"
AND f.EventID = 11
AND f.TargetFilename RLIKE '.*\\\\\\\\ProgramData\\\\\\\\Microsoft\\\\\\\\Windows\\\\\\\\Start Menu\\\\\\\\Programs\\\\\\\\StartUp.*'
'''
)
df.show(100,truncate = False, vertical = True)
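The RLIKE pattern's pile of backslashes is three escaping layers deep: Python's triple-quoted string halves each 8-backslash run to four characters, Spark's default SQL string-literal escaping halves again to two, and the regex engine reads `\\` as one literal backslash. A sketch of the last two layers (the first literal below is what Spark receives after Python parsing; the path is hypothetical):

```python
import re

# After Python parses the notebook's 8-backslash runs, Spark receives 4 per separator:
sql_literal = ".*\\\\\\\\ProgramData\\\\\\\\Microsoft.*"   # 4 actual backslashes per run
# Spark's SQL string parsing halves each pair, leaving a Java-style regex:
spark_regex = sql_literal.replace("\\\\", "\\")            # 2 backslashes per run

# The regex engine then reads each remaining \\ as one literal backslash.
path = r"C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp\hostui.lnk"
assert re.search(spark_regex, path) is not None
```

Dropping one escaping layer (four source backslashes instead of eight) would hand the regex engine a lone `\P`, which either fails to compile or matches nothing.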
"""
Explanation: 5.B.1. Registry Run Keys / Startup Folder
Procedure: Created a LNK file (hostui.lnk) in the Startup folder that executes on login
Criteria: powershell.exe creating the file hostui.lnk in the Startup folder
Detection Type:Telemetry(Correlated)
Query ID:611FCA99-97D0-4873-9E51-1C1BA2DBB40D
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host h
INNER JOIN (
SELECT f.ProcessGuid
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid, d.ParentProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE f.Channel = "Microsoft-Windows-Sysmon/Operational"
AND f.EventID = 1
AND LOWER(f.Image) LIKE '%accesschk%'
) g
ON h.ProcessGuid = g.ProcessGuid
WHERE h.Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 7
AND LOWER(ImageLoaded) LIKE '%accesschk%'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 6.A.1. Credentials in Files
Procedure: Read the Chrome SQL database file to extract encrypted credentials
Criteria: accesschk.exe reading files within %APPDATALOCAL%\Google\chrome\user data\default\
Detection Type:None(None)
6.A.2. Credential Dumping
Procedure: Executed the CryptUnprotectData API call to decrypt Chrome passwords
Criteria: accesschk.exe executing the CryptUnprotectData API
Detection Type:None(None)
6.A.3. Masquerading
Procedure: Masqueraded a Chrome password dump tool as accesscheck.exe, a legitimate Sysinternals tool
Criteria: Evidence that accesschk.exe is not the legitimate Sysinternals tool
Detection Type:Telemetry(Correlated)
Query ID:0A19F9B7-5E17-47E5-8015-29E9ABC09ADC
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host h
INNER JOIN (
SELECT f.ProcessGuid
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid, d.ParentProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE f.Channel = "Microsoft-Windows-Sysmon/Operational"
AND f.EventID = 1
AND LOWER(f.Image) LIKE '%accesschk%'
) g
ON h.ProcessGuid = g.ProcessGuid
WHERE h.Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 7
AND LOWER(ImageLoaded) LIKE '%accesschk%'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Detection Type:General(Correlated)
Query ID:1FCE98FC-1FF9-41CB-9C25-0235729A2B01
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
ON d.ParentProcessGuid= c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ProcessGuid = e.ProcessGuid
WHERE f.Channel = "Microsoft-Windows-Sysmon/Operational"
AND f.EventID = 11
AND LOWER(f.TargetFilename) LIKE "%.pfx"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 6.B.1. Private Keys
Procedure: Exported a local certificate to a PFX file using PowerShell
Criteria: powershell.exe creating a certificate file exported from the system
Detection Type:Telemetry(Correlated)
Query ID:6392C9F1-D975-4F75-8A70-433DEDD7F622
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid, d.ParentProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
    ON d.ParentProcessGuid = c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.SourceProcessGuid = e.ParentProcessGuid
WHERE f.Channel = "Microsoft-Windows-Sysmon/Operational"
AND f.EventID = 8
AND f.TargetImage LIKE '%lsass.exe'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 6.C.1. Credential Dumping
Procedure: Dumped password hashes from the Windows Registry by injecting a malicious DLL into Lsass.exe
Criteria: powershell.exe injecting into lsass.exe OR lsass.exe reading Registry keys under HKLM:\SAM\SAM\Domains\Account\Users\
Detection Type:Telemetry(Correlated)
Query ID:7B2CE2A5-4386-4EED-9A03-9B7D1049C4AE
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid, d.ParentProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
    ON d.ParentProcessGuid = c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ProcessGuid = e.ProcessGuid
WHERE f.Channel = "Microsoft-Windows-Sysmon/Operational"
AND f.EventID = 7
AND LOWER(f.ImageLoaded) LIKE "%system.drawing.ni.dll"
'''
)
df.show(100,truncate = False, vertical = True)
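"""
The query above covers only the injection half of the 6.C.1 criteria. Sysmon does not record registry reads (its registry events cover key creation, deletion, and value sets), so the second branch, lsass.exe reading keys under HKLM:\SAM\SAM\Domains\Account\Users\, would need Security object-access auditing. The sketch below is hypothetical: it assumes a SACL on the SAM hive and reuses the apt29Host field names seen elsewhere in this notebook.

```python
# Hypothetical sketch for the second 6.C.1 criterion, assuming registry
# auditing is enabled (Security EventID 4663 with a SACL on the SAM hive).
# Not a verified detection for this dataset.
sam_read_query = '''
SELECT Message
FROM apt29Host
WHERE LOWER(Channel) = "security"
    AND EventID = 4663
    AND LOWER(ProcessName) LIKE "%lsass.exe"
    AND LOWER(ObjectName) LIKE "%sam%domains%account%users%"
'''
# In this notebook's environment it would be run as: spark.sql(sam_read_query)
```
"""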
"""
Explanation: 7.A.1. Screen Capture
Procedure: Captured and saved screenshots using PowerShell
Criteria: powershell.exe executing the CopyFromScreen function from System.Drawing.dll
Detection Type:Telemetry(Correlated)
Query ID:3B4E5808-3C71-406A-B181-17B0CE3178C9
End of explanation
"""
df = spark.sql(
'''
SELECT Payload
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId, d.ParentProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
    ON d.ParentProcessGuid = c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4103
AND LOWER(f.Payload) LIKE "%copyfromscreen%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Detection Type:Telemetry(Correlated)
Query ID:B374D3E7-3580-441F-8D6E-48C40CBA7922
End of explanation
"""
df = spark.sql(
'''
SELECT Payload
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4103
AND LOWER(f.Payload) LIKE "%copyfromscreen%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:2AA4D448-3893-4F31-9497-0F8E2B7E3CFD
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId, d.ParentProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
    ON d.ParentProcessGuid = c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4103
AND LOWER(f.Payload) LIKE "%get-clipboard%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 7.A.2. Clipboard Data
Procedure: Captured clipboard contents using PowerShell
Criteria: powershell.exe executing Get-Clipboard
Detection Type:Telemetry(Correlated)
Query ID:F4609F7E-C4DB-4327-91D4-59A58C962A02
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4103
AND LOWER(f.Payload) LIKE "%get-clipboard%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:6EC8D7EB-153B-459A-9333-51208449DB99
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid, d.ParentProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
    ON d.ParentProcessGuid = c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ProcessGuid = e.ProcessGuid
WHERE f.Channel = "Microsoft-Windows-Sysmon/Operational"
AND f.EventID = 11
AND LOWER(f.TargetFilename) LIKE '%officesupplies%'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 7.A.3. Input Capture
Procedure: Captured user keystrokes using the GetAsyncKeyState API
Criteria: powershell.exe executing the GetAsyncKeyState API
Detection Type:None(None)
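No query accompanies 7.A.3. As a hypothetical starting point, and assuming the keylogger's source ever reached the script block log (EventID 4104), which is not verified for this dataset, one could search for the API name following the 4104 patterns used elsewhere in this notebook:

```python
# Hypothetical 7.A.3 sketch: search script blocks for the GetAsyncKeyState
# API name. The filter string is an assumption, not a tested detection.
input_capture_query = '''
SELECT ScriptBlockText
FROM apt29Host
WHERE Channel = "Microsoft-Windows-PowerShell/Operational"
    AND EventID = 4104
    AND LOWER(ScriptBlockText) LIKE "%getasynckeystate%"
'''
# In this notebook's environment: spark.sql(input_capture_query)
```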
7.B.1. Data from Local System
Procedure: Read data in the user's Downloads directory using PowerShell
Criteria: powershell.exe reading files in C:\Users\pam\Downloads\
Detection Type:None(None)
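7.B.1 likewise has no query: Sysmon does not record file reads. If object-access auditing were enabled with a SACL on the Downloads folder, Security EventID 4663 would capture the access. The sketch below assumes that configuration and reuses the apt29Host field names seen elsewhere in this notebook:

```python
# Hypothetical 7.B.1 sketch, assuming a SACL on C:\Users\pam\Downloads.
# The path match is loosened to "%downloads%" to sidestep backslash escaping.
downloads_read_query = '''
SELECT Message
FROM apt29Host
WHERE LOWER(Channel) = "security"
    AND EventID = 4663
    AND LOWER(ProcessName) LIKE "%powershell.exe"
    AND LOWER(ObjectName) LIKE "%downloads%"
'''
```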
7.B.2. Data Compressed
Procedure: Compressed data from the user's Downloads directory into a ZIP file (OfficeSupplies.7z) using PowerShell
Criteria: powershell.exe creating the file OfficeSupplies.7z
Detection Type:Telemetry(Correlated)
Query ID:BA68938F-7506-4E20-BC06-0B44B535A0B1
End of explanation
"""
df = spark.sql(
'''
SELECT f.ScriptBlockText
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId, d.ParentProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
    ON d.ParentProcessGuid = c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%compress-7zip%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 7.B.3. Data Encrypted
Procedure: Encrypted data from the user's Downloads directory using PowerShell
Criteria: powershell.exe executing Compress-7Zip with the password argument used for encryption
Detection Type:Telemetry(Correlated)
Query ID:4C19DDB9-9763-4D1C-9B9D-788ECF193778
End of explanation
"""
df = spark.sql(
'''
SELECT f.ScriptBlockText
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%compress-7zip%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:C670DAFF-B1FD-45B2-9DEB-AC5AEC273EE7
End of explanation
"""
df = spark.sql(
'''
SELECT f.ScriptBlockText
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId, d.ParentProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
    ON d.ParentProcessGuid = c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ExecutionProcessID = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%copy-item%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 7.B.4. Exfiltration Over Alternative Protocol
Procedure: Exfiltrated collection (OfficeSupplies.7z) to a WebDAV network share using PowerShell
Criteria: powershell.exe executing Copy-Item pointing to an attacker-controlled WebDAV network share (192.168.0.4:80)
Detection Type:Telemetry(Correlated)
Query ID:7AAC6658-2B5C-4B4A-B7C9-D42D288D5218
End of explanation
"""
df = spark.sql(
'''
SELECT f.ScriptBlockText
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId
WHERE f.Channel = "Microsoft-Windows-PowerShell/Operational"
AND f.EventID = 4104
AND LOWER(f.ScriptBlockText) LIKE "%copy-item%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:B19F8E16-AA6C-45C1-8A0D-92812830C237
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND CommandLine RLIKE '.*rundll32.exe.*\\\\\\\\windows\\\\\\\\system32\\\\\\\\davclnt.dll.*DavSetCookie.*'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Detection Type:Technique(Alert)
Query ID:C10730EA-6345-4934-AA0F-B0EFCA0C4BA6
End of explanation
"""
df = spark.sql(
'''
SELECT f.Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId, d.ParentProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
    ON d.ParentProcessGuid = c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ProcessId = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-Sysmon/Operational"
AND f.EventID = 3
AND f.DestinationPort = 389
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 8.A.1. Remote System Discovery
Procedure: Enumerated remote systems using LDAP queries
Criteria: powershell.exe making LDAP queries over port 389 to the Domain Controller (10.0.0.4)
Detection Type:Telemetry(Correlated)
Query ID:C1307FC1-19B7-467B-9705-95147B492CC7
End of explanation
"""
df = spark.sql(
'''
SELECT f.Message
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(CAST(f.ProcessId as INT))) = e.NewProcessId
WHERE LOWER(f.Channel) = "security"
AND EventID = 5156
AND DestPort = 389
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:542C2E36-0BC0-450B-A34F-C600E9DC396B
End of explanation
"""
df = spark.sql(
'''
SELECT f.Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessId, d.ParentProcessId
FROM apt29Host d
INNER JOIN (
SELECT a.ProcessGuid, a.ParentProcessGuid
FROM apt29Host a
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE "%control.exe"
AND LOWER(ParentImage) LIKE "%sdclt.exe"
) b
ON a.ParentProcessGuid = b.ProcessGuid
WHERE a.Channel = "Microsoft-Windows-Sysmon/Operational"
AND a.EventID = 1
AND a.IntegrityLevel = "High"
) c
    ON d.ParentProcessGuid = c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 1
AND d.Image LIKE '%powershell.exe'
) e
ON f.ProcessId = e.ProcessId
WHERE f.Channel = "Microsoft-Windows-Sysmon/Operational"
AND f.EventID = 3
AND f.DestinationPort = 5985
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 8.A.2. Remote System Discovery
Procedure: Established WinRM connection to remote host NASHUA (10.0.1.6)
Criteria: Network connection to NASHUA (10.0.1.6) over port 5985
Detection Type:Telemetry(Correlated)
Query ID:0A5428EA-171D-4944-B27C-0EBC3D557FAD
End of explanation
"""
df = spark.sql(
'''
SELECT f.Message
FROM apt29Host f
INNER JOIN (
SELECT split(d.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host d
INNER JOIN(
SELECT a.ProcessId, a.NewProcessId
FROM apt29Host a
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE "%control.exe"
AND LOWER(ParentProcessName) LIKE "%sdclt.exe"
) b
ON a.ProcessId = b.NewProcessId
WHERE LOWER(a.Channel) = "security"
AND a.EventID = 4688
AND a.MandatoryLabel = "S-1-16-12288"
AND a.TokenElevationType = "%%1937"
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4688
AND d.NewProcessName LIKE '%powershell.exe'
) e
ON LOWER(hex(CAST(f.ProcessId as INT))) = e.NewProcessId
WHERE LOWER(f.Channel) = "security"
AND EventID = 5156
AND DestPort = 5985
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:0376E07E-3C48-4B89-A50D-B3FAAB23EDAB
End of explanation
"""
df = spark.sql(
'''
SELECT b.ScriptBlockText
FROM apt29Host b
INNER JOIN (
SELECT ProcessGuid, ProcessId
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND LOWER(Image) LIKE '%wsmprovhost.exe'
) a
ON b.ExecutionProcessID = a.ProcessId
WHERE b.Channel = "Microsoft-Windows-PowerShell/Operational"
AND b.EventID = 4104
AND LOWER(b.ScriptBlockText) LIKE "%get-process%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 8.A.3. Process Discovery
Procedure: Enumerated processes on remote host Scranton (10.0.1.4) using PowerShell
Criteria: powershell.exe executing Get-Process
Detection Type:Telemetry(Correlated)
Query ID:6C481791-2AE8-4F6B-9BFE-C1F6DE1E0BC0
End of explanation
"""
df = spark.sql(
'''
SELECT b.ScriptBlockText
FROM apt29Host b
INNER JOIN (
SELECT split(NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND LOWER(NewProcessName) LIKE '%wsmprovhost.exe'
) a
ON LOWER(hex(b.ExecutionProcessID)) = a.NewProcessId
WHERE b.Channel = "Microsoft-Windows-PowerShell/Operational"
AND b.EventID = 4104
AND LOWER(b.ScriptBlockText) LIKE "%get-process%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:088846AF-FF45-4FC4-896C-64F24517BBD7
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 5145
AND RelativeTargetName LIKE '%python.exe'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 8.B.1. Remote File Copy
Procedure: Copied python.exe payload from a WebDAV share (192.168.0.4) to remote host Scranton (10.0.1.4)
Criteria: The file python.exe created on Scranton (10.0.1.4)
Detection Type:Telemetry(None)
Query ID:97402495-2449-415F-BDAD-5CC8EFC1E1B5
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'
AND EventID = 11
AND TargetFilename LIKE '%python.exe'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:D804F2D8-C65B-42D6-A731-C13BE2BDB441
End of explanation
"""
df = spark.sql(
'''
SELECT Hostname, a.Message
FROM apt29Host b
INNER JOIN (
SELECT TargetLogonId, Message
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4624
AND LogonType = 3
AND TargetUserName NOT LIKE '%$'
) a
ON b.SubjectLogonId = a.TargetLogonId
WHERE LOWER(b.Channel) = "security"
AND b.EventID = 5145
AND b.RelativeTargetName LIKE '%python.exe'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 8.B.2. Software Packing
Procedure: python.exe payload was packed with UPX
Criteria: Evidence that the file python.exe is packed
Detection Type:None(None)
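Packing leaves no trace in the event logs, so 8.B.2 has no query; confirming UPX requires inspecting the dropped binary itself. One simple offline heuristic (UPX names its PE sections UPX0/UPX1) can be sketched as follows; this is a file-content check, not a log query, and a real run would read the collected python.exe sample from disk:

```python
def looks_upx_packed(pe_bytes: bytes) -> bool:
    # UPX renames PE sections to UPX0/UPX1, so their presence in the raw
    # bytes is a strong (though not conclusive) packing indicator.
    return b"UPX0" in pe_bytes or b"UPX1" in pe_bytes

# Example against synthetic bytes standing in for a section table.
print(looks_upx_packed(b"...UPX0...UPX1..."))  # True
```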
8.C.1. Valid Accounts
Procedure: Logged on to remote host NASHUA (10.0.1.6) using valid credentials for user Pam
Criteria: Successful logon as user Pam on NASHUA (10.0.1.6)
Detection Type:Telemetry(None)
Query ID:AF5E8E22-DEC8-40AF-98AD-84BE1AC3F34C
End of explanation
"""
df = spark.sql(
'''
SELECT EventTime, Hostname, ShareName, RelativeTargetName, SubjectUserName
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 5145
AND ShareName LIKE '%IPC%'
AND RelativeTargetName LIKE '%PSEXESVC%'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 8.C.2. Windows Admin Shares
Procedure: Established SMB session to remote host NASHUA's (10.0.1.6) IPC$ share using PsExec
Criteria: SMB session to NASHUA (10.0.1.6) over TCP port 445/135 OR evidence of usage of a Windows share
Detection Type:Telemetry(None)
Query ID:C91A4BF2-22B1-421B-B1DE-626778AD3BBB
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host b
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND ParentImage LIKE '%services.exe'
) a
ON b.ParentProcessGuid = a.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND Image LIKE '%python.exe'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 8.C.3. Service Execution
Procedure: Executed python.exe using PSExec
Criteria: python.exe spawned by PSEXESVC.exe
Detection Type:Telemetry(Correlated)
Query ID:BDE98B9B-77DD-4AD4-B755-463C3C27EE5F
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host b
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND ParentProcessName LIKE '%services.exe'
) a
ON b.ProcessId = a.NewProcessId
WHERE LOWER(Channel) = "security"
AND NewProcessName LIKE '%python.exe'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:11D81CCD-163F-4347-8F1D-072F4B4B3B26
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT b.ProcessGuid
FROM apt29Host b
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND ParentImage LIKE '%services.exe'
) a
ON b.ParentProcessGuid = a.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND Image LIKE '%python.exe'
) c
ON d.ParentProcessGuid = c.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
) e
ON f.ProcessGuid = e.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
    AND EventID = 11
    AND LOWER(f.TargetFilename) LIKE "%rar.exe"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 9.A.1. Remote File Copy
Procedure: Dropped rar.exe to disk on remote host NASHUA (10.0.1.6)
Criteria: python.exe creating the file rar.exe
Detection Type:Telemetry(Correlated)
Query ID:1C94AFAF-74A9-4578-B026-7AA6948D9DBE
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT b.ProcessGuid
FROM apt29Host b
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND ParentImage LIKE '%services.exe'
) a
ON b.ParentProcessGuid = a.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND Image LIKE '%python.exe'
) c
ON d.ParentProcessGuid = c.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
) e
ON f.ProcessGuid = e.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
    AND EventID = 11
    AND LOWER(f.TargetFilename) LIKE "%sdelete64.exe"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 9.A.2. Remote File Copy
Procedure: Dropped sdelete64.exe to disk on remote host NASHUA (10.0.1.6)
Criteria: python.exe creating the file sdelete64.exe
Detection Type:Telemetry(Correlated)
Query ID:F98D589E-94A9-4974-A142-7E75D9760118
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT b.ProcessGuid
FROM apt29Host b
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND ParentImage LIKE '%services.exe'
) a
ON b.ParentProcessGuid = a.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND Image LIKE '%python.exe'
) c
ON d.ParentProcessGuid = c.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND Image LIKE '%powershell.exe'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 9.B.1. PowerShell
Procedure: Spawned interactive powershell.exe
Criteria: powershell.exe spawning from python.exe
Detection Type:Telemetry(Correlated)
Query ID:77D403CE-2832-4927-B74A-42D965B5AF94
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host f
INNER JOIN (
SELECT d.NewProcessId
FROM apt29Host d
INNER JOIN (
SELECT b.NewProcessId
FROM apt29Host b
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND ParentProcessName LIKE '%services.exe'
) a
ON b.ProcessId = a.NewProcessId
WHERE LOWER(Channel) = "security"
AND NewProcessName LIKE '%python.exe'
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(Channel) = "security"
AND EventID = 4688
) e
ON f.ProcessId = e.NewProcessId
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND NewProcessName LIKE '%powershell.exe'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:B56C6666-EEF3-4028-85D4-6AAE01CD506C
End of explanation
"""
df = spark.sql(
'''
SELECT h.ScriptBlockText
FROM apt29Host h
INNER JOIN (
SELECT f.ProcessId
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT b.ProcessGuid
FROM apt29Host b
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND ParentImage LIKE '%services.exe'
) a
ON b.ParentProcessGuid = a.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND Image LIKE '%python.exe'
) c
ON d.ParentProcessGuid = c.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND Image LIKE '%powershell.exe'
) g
ON h.ExecutionProcessID = g.ProcessId
WHERE h.Channel = "Microsoft-Windows-PowerShell/Operational"
AND h.EventID = 4104
AND LOWER(h.ScriptBlockText) LIKE "%childitem%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 9.B.2. File and Directory Discovery
Procedure: Searched filesystem for document and media files using PowerShell
Criteria: powershell.exe executing (Get-)ChildItem
Detection Type:Telemetry(Correlated)
Query ID:3DDF2B9B-10AC-454C-BFA0-1F7BD011947E
End of explanation
"""
df = spark.sql(
'''
SELECT h.ScriptBlockText
FROM apt29Host h
INNER JOIN (
SELECT split(f.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host f
INNER JOIN (
SELECT d.NewProcessId
FROM apt29Host d
INNER JOIN (
SELECT b.NewProcessId
FROM apt29Host b
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND ParentProcessName LIKE '%services.exe'
) a
ON b.ProcessId = a.NewProcessId
WHERE LOWER(Channel) = "security"
AND NewProcessName LIKE '%python.exe'
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(Channel) = "security"
AND EventID = 4688
) e
ON f.ProcessId = e.NewProcessId
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND NewProcessName LIKE '%powershell.exe'
) g
ON LOWER(hex(h.ExecutionProcessID)) = g.NewProcessId
WHERE h.Channel = "Microsoft-Windows-PowerShell/Operational"
AND h.EventID = 4104
AND LOWER(h.ScriptBlockText) LIKE "%childitem%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:E7ED941E-F3B3-441B-B43D-1F1B194D6303
End of explanation
"""
df = spark.sql(
'''
SELECT h.ScriptBlockText
FROM apt29Host h
INNER JOIN (
SELECT f.ProcessId
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT b.ProcessGuid
FROM apt29Host b
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND ParentImage LIKE '%services.exe'
) a
ON b.ParentProcessGuid = a.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND Image LIKE '%python.exe'
) c
ON d.ParentProcessGuid = c.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND Image LIKE '%powershell.exe'
) g
ON h.ExecutionProcessID = g.ProcessId
WHERE h.Channel = "Microsoft-Windows-PowerShell/Operational"
AND h.EventID = 4104
AND LOWER(h.ScriptBlockText) LIKE "%childitem%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 9.B.3. Automated Collection
Procedure: Scripted search of filesystem for document and media files using PowerShell
Criteria: powershell.exe executing (Get-)ChildItem
Detection Type:Telemetry(Correlated)
Query ID:6AE2BDBE-48BD-4323-8572-B2214D244013
End of explanation
"""
df = spark.sql(
'''
SELECT h.ScriptBlockText
FROM apt29Host h
INNER JOIN (
SELECT split(f.NewProcessId, '0x')[1] as NewProcessId
FROM apt29Host f
INNER JOIN (
SELECT d.NewProcessId
FROM apt29Host d
INNER JOIN (
SELECT b.NewProcessId
FROM apt29Host b
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND ParentProcessName LIKE '%services.exe'
) a
ON b.ProcessId = a.NewProcessId
WHERE LOWER(Channel) = "security"
AND NewProcessName LIKE '%python.exe'
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(Channel) = "security"
AND EventID = 4688
) e
ON f.ProcessId = e.NewProcessId
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND NewProcessName LIKE '%powershell.exe'
) g
ON LOWER(hex(h.ExecutionProcessID)) = g.NewProcessId
WHERE h.Channel = "Microsoft-Windows-PowerShell/Operational"
AND h.EventID = 4104
AND LOWER(h.ScriptBlockText) LIKE "%childitem%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:6A0DF333-5329-42B5-9AF6-60AB647051CD
End of explanation
"""
df = spark.sql(
'''
SELECT h.Message
FROM apt29Host h
INNER JOIN (
SELECT f.ProcessGuid
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT b.ProcessGuid
FROM apt29Host b
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND ParentImage LIKE '%services.exe'
) a
ON b.ParentProcessGuid = a.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND Image LIKE '%python.exe'
) c
ON d.ParentProcessGuid = c.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND Image LIKE '%powershell.exe'
) g
ON h.ProcessGuid = g.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND h.EventID = 11
AND LOWER(h.TargetFilename) LIKE "%working.zip"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 9.B.4. Data from Local System
Procedure: Recursively collected files found in C:\Users\Pam\ using PowerShell
Criteria: powershell.exe reading files in C:\Users\Pam\
Detection Type:None(None)
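As with 7.B.1, the file reads themselves generate no Sysmon events. A hypothetical fallback, assuming the collection commands reached the script block log (EventID 4104), is to search for the cmdlets involved; the Get-Content filter below is an assumption, not a verified detection for this dataset:

```python
# Hypothetical 9.B.4 sketch: script blocks reading file contents during
# collection from C:\Users\Pam.
local_collection_query = '''
SELECT ScriptBlockText
FROM apt29Host
WHERE Channel = "Microsoft-Windows-PowerShell/Operational"
    AND EventID = 4104
    AND LOWER(ScriptBlockText) LIKE "%get-content%"
'''
```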
9.B.5. Data Staged
Procedure: Staged files for exfiltration into ZIP (working.zip in AppData directory) using PowerShell
Criteria: powershell.exe creating the file working.zip
Detection Type:Telemetry(Correlated)
Query ID:17B04626-D628-4CFC-9EF1-7FF9CD48FF5E
End of explanation
"""
df = spark.sql(
'''
SELECT h.Message
FROM apt29Host h
INNER JOIN (
SELECT f.ProcessGuid
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT b.ProcessGuid
FROM apt29Host b
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND ParentImage LIKE '%services.exe'
) a
ON b.ParentProcessGuid = a.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND Image LIKE '%python.exe'
) c
ON d.ParentProcessGuid = c.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND Image LIKE '%powershell.exe'
) g
ON h.ParentProcessGuid = g.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND h.EventID = 1
AND LOWER(h.CommandLine) LIKE "%rar.exe%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 9.B.6. Data Encrypted
Procedure: Encrypted staged ZIP (working.zip in AppData directory) into working.zip (on Desktop) using rar.exe
Criteria: powershell.exe executing rar.exe with the -a parameter for a password to use for encryption
Detection Type:Telemetry(Correlated)
Query ID:9EC44B89-9B82-41F2-B11E-D49392853C63
End of explanation
"""
df = spark.sql(
'''
SELECT h.Message
FROM apt29Host h
INNER JOIN (
SELECT f.NewProcessId
FROM apt29Host f
INNER JOIN (
SELECT d.NewProcessId
FROM apt29Host d
INNER JOIN (
SELECT b.NewProcessId
FROM apt29Host b
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND ParentProcessName LIKE '%services.exe'
) a
ON b.ProcessId = a.NewProcessId
WHERE LOWER(Channel) = "security"
AND NewProcessName LIKE '%python.exe'
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(Channel) = "security"
AND EventID = 4688
) e
ON f.ProcessId = e.NewProcessId
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND NewProcessName LIKE '%powershell.exe'
) g
ON h.ProcessId = g.NewProcessId
WHERE LOWER(Channel) = "security"
AND h.EventID = 4688
AND LOWER(h.CommandLine) LIKE "%rar.exe%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:579D025B-DFFB-416B-B07A-A36D9CE1EF93
End of explanation
"""
df = spark.sql(
'''
SELECT h.Message
FROM apt29Host h
INNER JOIN (
SELECT f.ProcessGuid
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT b.ProcessGuid
FROM apt29Host b
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND ParentImage LIKE '%services.exe'
) a
ON b.ParentProcessGuid = a.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND Image LIKE '%python.exe'
) c
ON d.ParentProcessGuid = c.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND Image LIKE '%powershell.exe'
) g
ON h.ParentProcessGuid = g.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND h.EventID = 1
AND LOWER(h.CommandLine) LIKE "%rar.exe%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 9.B.7. Data Compressed
Procedure: Compressed staged ZIP (working.zip in AppData directory) into working.zip (on Desktop) using rar.exe
Criteria: powershell.exe executing rar.exe
Detection Type:Telemetry(Correlated)
Query ID:FD1AE986-FD91-4B91-8BCE-42C9295949F7
End of explanation
"""
df = spark.sql(
'''
SELECT h.Message
FROM apt29Host h
INNER JOIN (
SELECT f.NewProcessId
FROM apt29Host f
INNER JOIN (
SELECT d.NewProcessId
FROM apt29Host d
INNER JOIN (
SELECT b.NewProcessId
FROM apt29Host b
INNER JOIN (
SELECT NewProcessId
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND ParentProcessName LIKE '%services.exe'
) a
ON b.ProcessId = a.NewProcessId
WHERE LOWER(Channel) = "security"
AND NewProcessName LIKE '%python.exe'
) c
ON d.ProcessId = c.NewProcessId
WHERE LOWER(Channel) = "security"
AND EventID = 4688
) e
ON f.ProcessId = e.NewProcessId
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND NewProcessName LIKE '%powershell.exe'
) g
ON h.ProcessId = g.NewProcessId
WHERE LOWER(Channel) = "security"
AND h.EventID = 4688
AND LOWER(h.CommandLine) LIKE "%rar.exe%"
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:8A865709-E762-4A26-BDEC-A762FB37947B
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host j
INNER JOIN (
SELECT h.ProcessGuid
FROM apt29Host h
INNER JOIN (
SELECT f.ProcessGuid
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT b.ProcessGuid
FROM apt29Host b
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND ParentImage LIKE '%services.exe'
) a
ON b.ParentProcessGuid = a.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND Image LIKE '%python.exe'
) c
ON d.ParentProcessGuid = c.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND Image LIKE '%cmd.exe'
) g
ON h.ParentProcessGuid = g.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND h.EventID = 1
) i
ON j.ProcessGuid = i.ProcessGuid
WHERE j.Channel = "Microsoft-Windows-Sysmon/Operational"
AND j.EventID = 23
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 9.B.8. Exfiltration Over Command and Control Channel
Procedure: Read and downloaded ZIP (working.zip on Desktop) over C2 channel (192.168.0.5 over TCP port 8443)
Criteria: python.exe reading the file working.zip while connected to the C2 channel
Detection Type:None(None)
9.C.1. File Deletion
Procedure: Deleted rar.exe on disk using SDelete
Criteria: sdelete64.exe deleting the file rar.exe
Detection Type:Telemetry(Correlated)
Query ID:C20D8999-0B0D-4A50-9CDC-2BAAC4C7B577
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host j
INNER JOIN (
SELECT h.ProcessGuid
FROM apt29Host h
INNER JOIN (
SELECT f.ProcessGuid
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT b.ProcessGuid
FROM apt29Host b
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND ParentImage LIKE '%services.exe'
) a
ON b.ParentProcessGuid = a.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND Image LIKE '%python.exe'
) c
ON d.ParentProcessGuid = c.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND Image LIKE '%cmd.exe'
) g
ON h.ParentProcessGuid = g.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND h.EventID = 1
) i
ON j.ProcessGuid = i.ProcessGuid
WHERE j.Channel = "Microsoft-Windows-Sysmon/Operational"
AND j.EventID = 23
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 9.C.2. File Deletion
Procedure: Deleted working.zip (from Desktop) on disk using SDelete
Criteria: sdelete64.exe deleting the file \Desktop\working.zip
Detection Type:Telemetry(Correlated)
Query ID:CB869916-7BCF-4F9F-8B95-C19B407B91E3
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host j
INNER JOIN (
SELECT h.ProcessGuid
FROM apt29Host h
INNER JOIN (
SELECT f.ProcessGuid
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT b.ProcessGuid
FROM apt29Host b
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND ParentImage LIKE '%services.exe'
) a
ON b.ParentProcessGuid = a.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND Image LIKE '%python.exe'
) c
ON d.ParentProcessGuid = c.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND Image LIKE '%cmd.exe'
) g
ON h.ParentProcessGuid = g.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND h.EventID = 1
) i
ON j.ProcessGuid = i.ProcessGuid
WHERE j.Channel = "Microsoft-Windows-Sysmon/Operational"
AND j.EventID = 23
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 9.C.3. File Deletion
Procedure: Deleted working.zip (from AppData directory) on disk using SDelete
Criteria: sdelete64.exe deleting the file \AppData\Roaming\working.zip
Detection Type:Telemetry(Correlated)
Query ID:59F37185-0BE4-4D81-8B81-FBFBD8055587
End of explanation
"""
df = spark.sql(
'''
SELECT h.Message
FROM apt29Host h
INNER JOIN (
SELECT f.ProcessGuid
FROM apt29Host f
INNER JOIN (
SELECT d.ProcessGuid
FROM apt29Host d
INNER JOIN (
SELECT b.ProcessGuid
FROM apt29Host b
INNER JOIN (
SELECT ProcessGuid
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND ParentImage LIKE '%services.exe'
) a
ON b.ParentProcessGuid = a.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND Image LIKE '%python.exe'
) c
ON d.ParentProcessGuid = c.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
) e
ON f.ParentProcessGuid = e.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND Image LIKE '%cmd.exe'
) g
ON h.ProcessGuid = g.ProcessGuid
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND h.EventID = 23
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 9.C.4. File Deletion
Procedure: Deleted SDelete on disk using cmd.exe del command
Criteria: cmd.exe deleting the file sdelete64.exe
Detection Type:Telemetry(Correlated)
Query ID:0FC62E32-9052-49EB-A5D5-1DF316D634AD
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND ParentImage LIKE '%services.exe'
AND Image LIKE '%javamtsup.exe'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: 10.A.1. Service Execution
Procedure: Executed persistent service (javamtsup) on system startup
Criteria: javamtsup.exe spawning from services.exe
Detection Type:Telemetry(None)
Query ID:CB9F90C0-93EA-469A-9515-7DF27DF1592A
End of explanation
"""
df = spark.sql(
'''
SELECT Message
FROM apt29Host
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND ParentProcessName LIKE '%services.exe'
AND NewProcessName LIKE '%javamtsup.exe'
'''
)
df.show(100,truncate = False, vertical = True)
"""
Explanation: Query ID:4DABE602-E648-4C1E-81B3-A2AC96F94CE0
End of explanation
"""
|
xmnlab/pywim | notebooks/presentations/scipyla2015/PyWIM.ipynb | mit | from IPython.display import display
from matplotlib import pyplot as plt
from scipy import signal
from scipy import constants
from scipy.signal import argrelextrema
from collections import defaultdict
from sklearn import metrics
import statsmodels.api as sm
import numpy as np
import pandas as pd
import numba as nb
import sqlalchemy
import os
import sys
import datetime
import peakutils
# local
sys.path.insert(
0, os.path.dirname(os.path.dirname(os.path.dirname(os.getcwd())))
)
from pywim.utils.dsp.synthetic_data.sensor_data import gen_truck_raw_data
from pywim.estimation.vehicular_classification import dww
%matplotlib inline
def plot_signals(df: pd.DataFrame, ax=None):
kwargs = {}
if ax is not None:
kwargs['ax'] = ax
df.plot(**kwargs)
plt.title("Datos de los sensores")
plt.xlabel('Segundos (s)')
plt.ylabel('Tensión (V)')
plt.grid(True)
if ax is None:
plt.show()
"""
Explanation: Table of Contents
1. Using Python to support weigh-in-motion (WIM) of heavy vehicles
2. Project description
3. Data acquisition
3.1 Using synthetic data
4. Data storage and data flow
5. Digital signal processing
5.1 Baseline correction
5.2 Signal filtering
5.3 Peak detection
5.4 Signal-curve detection for the weight calculation
6. Calculations
6.1 Speed
6.2 Axle distance
6.3 Area under the curve
6.4 Weights
7. Vehicle classification
8. Calibration of the weighing calculations
9. Automatic license plate recognition
10. Conclusion
<!--bibtex
@TechReport{tech:optimization-vehicle-classification,
Title = {Optimization Vehicle Classification},
Author = {van Boxel, DW and van Lieshout, RA},
Institution = {Ministerie van Verkeer en Waterstaat - Directoraat-Generaal Rijkswaterstaat - Dienst Weg- en Waterbouwkunde (DWW)},
Year = {2003},
Owner = {xmn},
Timestamp = {2014.10.22}
}
@Article{pattern-recogntion-of-strings,
Title = {Pattern recognition of strings with substitutions, insertions, deletions and generalized transpositions},
Author = {Oommen, B John and Loke, Richard KS},
Journal = {Pattern Recognition},
Year = {1997},
Number = {5},
Pages = {789--800},
Volume = {30},
Publisher = {Elsevier}
}
@article{vanweigh,
title={Weigh-in-Motion--Categorising vehicles},
author={van Boxel, DW and van Lieshout, RA and van Doorn, RA}
}
@misc{kistler2004installation,
title={Installation Instructions: Lineas{\textregistered} Sensors for Weigh-in-Motion Type 9195E},
author={Kistler Instrumente, AG},
year={2004},
publisher={Kistler Instrumente AG, Switzerland}
}
@article{helmus2013nmrglue,
title={Nmrglue: an open source Python package for the analysis of multidimensional NMR data},
author={Helmus, Jonathan J and Jaroniec, Christopher P},
journal={Journal of biomolecular NMR},
volume={55},
number={4},
pages={355--367},
year={2013},
publisher={Springer}
}
@article{billauer2008peakdet,
title={peakdet: Peak detection using MATLAB},
author={Billauer, Eli},
journal={Eli Billauer’s home page},
year={2008}
}
@Article{article:alpr-using-python-and-opencv,
Title = {Automatic License Plate Recognition using Python and OpenCV},
Author = {Sajjad, K.M.},
Year = {2010},
Institution = {Department of Computer Science and Engineering, MES College of Engineering, Kerala, India},
Owner = {xmn},
Timestamp = {2014.08.24}
}
@inproceedings{burnos2008auto,
title={Auto-calibration and temperature correction of WIM systems},
author={Burnos, Piotr},
booktitle={Fifth International Conference on Weigh-in-Motion (ICWIM5)},
pages={439},
year={2008}
}
@inproceedings{gajda2012analysis,
title={Analysis of the temperature influences on the metrological properties of polymer piezoelectric load sensors applied in Weigh-in-Motion systems},
author={Gajda, Janusz and Sroka, Ryszard and Stencel, Marek and Zeglen, Tadeusz and Piwowar, Piotr and Burnos, Piotr},
booktitle={Instrumentation and Measurement Technology Conference (I2MTC), 2012 IEEE International},
pages={772--775},
year={2012},
organization={IEEE}
}
-->
Using Python to support weigh-in-motion (WIM) of heavy vehicles - [Work in progress]
Many road accidents are caused, directly or indirectly, by overloaded heavy vehicles. Overloaded vehicles damage the pavement and also suffer stronger dynamic effects when cornering.
To discourage overloading it is necessary to monitor these infractions and, when needed, apply the measures established by law, such as fines and impoundments. One method under investigation in many parts of the world is weigh-in-motion. Its advantages are savings in physical space and operation, since the sensors are installed in the roadway itself, and it causes no delay to road users, because heavy vehicles can be weighed while travelling at the design speed of the road.
This work presents technologies that are useful for building a computational system to support weigh-in-motion. The experience behind it was gained in a project carried out at the transportation laboratory (LabTrans) of the Universidade Federal de Santa Catarina (UFSC). Its goal is to serve as a starting point for future researchers on the topic.
The language used here is Python, and the main libraries are: numpy, scipy, pandas, sqlalchemy, statsmodels, numba, scikit-learn, pydaqmx, bokeh.
Project description
A computational weigh-in-motion system is basically composed of:
- Signal acquisition from the weight sensors in the roadway;
- Signal segmentation (to cut out the signal corresponding to the measured truck);
- Signal processing;
- Calculations (speed, number of axles, axle groups, axle distance, gross weight, weight per axle, weight per axle group, length);
- Vehicle classification;
- Calibration;
- License plate recognition;
- Infraction detection;
The system must be fast and robust enough to process all of this information in the shortest possible time. Python is not known for high performance, so libraries and techniques are needed to boost its processing capacity.
Based on the weighing results, the classification and the recognized license plate, it is possible to know whether a vehicle committed an infraction and, if so, to link the infraction to the identification of the offending vehicle.
End of explanation
"""
df = pd.DataFrame()
sample_rate = 2000
total_seconds = 3.0
# analog channel
df = gen_truck_raw_data(
sample_rate=sample_rate,
speed=15,
vehicle_layout='-o--o---o--',
sensors_distance=[1, 1],
p_signal_noise=10
)
plot_signals(df)
"""
Explanation: Data acquisition
Data acquisition is performed by a data-acquisition board connected
to the sensor and to the computer that stores the data.
It can be done, for example, with DAQmx acquisition boards from
National Instruments (NI). To communicate with them, the PyDAQmx
library can be used, a Python wrapper around the hardware drivers
provided by the vendor. The library is a complete interface to the
NIDAQmx ANSI C drivers and imports all of the driver's functions
and predefined constants.
As a result, the library returns a numpy.array object.
After acquiring the sensor signal, the data can be kept in a
circular buffer in memory while a parallel process looks for the
complete signal of a vehicle (a segment). Segmentation can be
triggered, for example, by an inductive loop.
To start the acquisition it is necessary to define parameters such
as the acquisition type (e.g. CreateAIVoltageChan), the number of
channels, the specification of the channels accessed, the total
samples per read, the sample rate, and the data grouping mode
(e.g. DAQmx_Val_GroupByChannel)
Using synthetic data
End of explanation
"""
df_filt = df.copy()
for s in df_filt.keys():
df_filt[s] -= df_filt[s][:100].min()
# plot
plot_signals(df_filt)
"""
Explanation: Data storage
After segmentation, the raw data are stored in the database. This
makes it possible to later change the calculation methods or the
calibration parameters and to analyze the methods used.
A technology that can be very useful for storing raw data is HDF5.
It allows a storage layout to be standardized while preserving
the integrity of the data
(https://support.hdfgroup.org/HDF5/).
5. Digital signal processing
Before the calculations can be performed, the signal has to be conditioned by applying filtering and baseline correction. For the filtering, the example follows the recommendation of <a name="ref-1"/>(KistlerInstrumente, 2004), the manufacturer of the Lineas sensors: a first-order low-pass filter at 600 Hz.
5.1 Baseline correction
Baseline correction can be done with whichever method best suits the electrical characteristics of the sensor signal. The nmrglue library <a name="ref-2"/>(Helmus and Jaroniec, 2013) has a proc_bl module with many functions that can help with baseline correction. In the example below, the correction is done by subtracting from the signal the minimum value found in its first 100 points.
End of explanation
"""
order = 1
freq = 600  # Hz
lower_cut = freq / (sample_rate / 2)  # cutoff normalized to the Nyquist frequency
b, a = signal.butter(order, lower_cut)
for k in df_filt.keys():
df_filt[k] = signal.filtfilt(b, a, df_filt[k])
# plot
plot_signals(df_filt)
"""
Explanation: 5.2 Signal filtering
The filter used is a first-order low-pass filter with a 600 Hz cutoff frequency, built with the filtfilt and butterworth methods from the scipy library.
End of explanation
"""
peaks = {}
for k in df_filt.keys():
index = peakutils.indexes(df_filt[k].values)
peaks[k] = index
# plot
ax = plt.figure().gca()
plot_signals(df_filt, ax=ax)
for k in df_filt.keys():
ax.plot(df_filt.index[peaks[k]], df_filt[k].iloc[peaks[k]], 'ro')
plt.show()
"""
Explanation: 5.3 Peak detection
For peak detection, the peakutils library can be used
(https://pypi.python.org/pypi/PeakUtils)
End of explanation
"""
sensor_curve = defaultdict(dict)
for k in df_filt.keys():
# k => sensor
ax = plt.figure().gca()
for i, peak in enumerate(peaks[k]):
# i => axle
sensor_curve[k][i] = df_filt[[k]].iloc[peak-25:peak+25]
plot_signals(sensor_curve[k][i], ax=ax)
ax.set_xlim([0, 1])
plt.show()
"""
Explanation: Signal-curve detection for the weight calculation
To cut out the curve used in the weight calculation, taking
Kistler's Lineas sensors as reference, the approach described in
<a name="ref-4"/>(Kistler Instrumente, 2004) can be used.
The figure below
<a name="ref-5"/>(Kistler Instrumente, 2004)
illustrates how the cut should be made.
<figure>
<img src="https://github.com/OpenWIM/pywim/blob/master/notebooks/presentations/scipyla2015/img/kistler-cut-signal-area.png?raw=true"
alt="Cutting out the signal area"/>
<center><figcaption>Cutting out the signal area</figcaption></center>
</figure>
For the example data, a threshold of 0.2 and a $\Delta{t}$ of 20
can be adopted. To keep the example simple, the cut is made from
25 points before the peak to 25 points after the peak.
End of explanation
"""
distance_sensors = 1  # metres
vehicle_speed = defaultdict(list)
speed_index = []
for i in range(1, df_filt.shape[1]):
# i => sensor
for j in range(len(peaks['a%s' % i])):
# j => axis
time_points = peaks['a%s' % i][j]-peaks['a%s' % (i-1)][j]
d_time = time_points*(1/sample_rate)
vehicle_speed['axle_%s' % j].append(distance_sensors/d_time) # m/s
speed_index.append('speed_sensor_%s_%s' % (i-1, i))
df_speed = pd.DataFrame(
vehicle_speed, index=speed_index
)
vehicle_speed_mean = df_speed.mean().mean()
display(df_speed*3.6)  # km/h
print('Mean speed:', vehicle_speed_mean * 3.6, 'km/h')
"""
Explanation: Calculations
With the information from the signal peaks and their curves, it is possible to start the calculations that determine the axle distances, the speed and the weight. Below, these calculations are shown using the example data generated in the previous sections.
Speed
To compute the speed it is first necessary to know the distance between the sensors; this example adopts a distance of 1 metre. The speed is given by the formula: $v = \frac{\Delta{s}}{\Delta{t}}$
End of explanation
"""
axles_distance = defaultdict(dict)
for i in range(df_filt.shape[1]):
# i => sensor
for j in range(1, len(peaks['a%s' % i])):
iid = 'a%s' % i
time_points = peaks[iid][j]-peaks[iid][j-1]
d_time = time_points*(1/sample_rate)
axles_distance[iid]['axle%s-axle%s' % (j-1, j)] = (
d_time*vehicle_speed_mean
)
df_distance_axles = pd.DataFrame(axles_distance)
print(df_distance_axles)
"""
Explanation: Axle distance
Computing the distance between axles requires the speed computed earlier. The formula for the axle distance is: $\Delta{s} = v*\Delta{t}$. This example uses the mean speed, but the speed found per axle could be used as well.
End of explanation
"""
df_area = pd.DataFrame()
time_interval = 1/sample_rate
print('time interval:', time_interval)
for s in sensor_curve:
area = {}
for axle, v in sensor_curve[s].items():
# sumatorio con corrección de baseline
result = float((v-v.min()).sum()*time_interval)
area.update({'axle_%s' % axle: result})
df_area[s] = pd.Series(area)
df_area = df_area.T
print(df_area)
"""
Explanation: Area under the curve
Another piece of information required by the weighing calculations is the area under the identified curve. It is obtained by integrating the curve or, in this case, by summing the points of the curve.
End of explanation
"""
amp_sensibility = 0.15*10**-3 # 1.8 pC/N*5V/60000pC
sensors_number = df.shape[1]
C = pd.Series([1]*sensors_number)
Ls = pd.Series([0.53]*sensors_number)
V = df_speed.reset_index(drop=True)
A = df_area.reset_index(drop=True)
W = pd.DataFrame()
for i, axle in enumerate(V.keys()):
W[axle] = ((V[axle]/Ls)*A[axle]*C)/amp_sensibility/constants.g
print(W)
print('\nPromedio por eje:')
print(W.mean())
print('\nPeso Bruto Total:', W.mean().sum(), 'kg')
"""
Explanation: Weights
The vehicle weight is computed from the speed and the curve of each axle. For Kistler's Lineas sensors, the following formula should be used <a name="ref-6"/>(KistlerInstrumente, 2004):
$W = ( V / L_s ) * A * C$, where W is the weight, V is the speed, $L_s$ is the sensor width, A is the integral of the curve and C is a calibration constant. For other sensor types the formula is similar. For polymer and ceramic piezoelectric sensors it is also necessary to correct the results for temperature sensitivity <a name="ref-7"/>(Burnos, 2008), <a name="ref-8"/>(Gajda et al., 2012). For the example data, the axle weights and the total gross weight are computed using a sensor width of 0.53 metres and a calibration constant equal to 1 for all sensors.
End of explanation
"""
layout_s = dww.layout((7, 2, 0.5, 2))
layout = dww.layout_to_int(layout_s)
layout_ref_s = '-O----O-O----O--'
layout_ref = dww.layout_to_int(layout_ref_s)
z = np.zeros((len(layout), len(layout_ref)), dtype=int)
%time resultado = dww.D(layout, layout_ref, z)
print('truck layout: ', layout_s)
print('truck layout reference:', layout_ref_s)
print(resultado)
"""
Explanation: Vehicle classification
This section presents a vehicle classification method based on the work of <a name="ref-9"/>(vanBoxel and vanLieshout, 2003) and <a name="ref-10"/>(Oommen and Loke, 1997)
The method uses a set of reference layouts, defined by a set of symbols representing the design of the vehicle, as can be seen in the figure below <a name="ref-11"/>(vanBoxel and vanLieshout, 2003).
<figure>
<img src="https://github.com/OpenWIM/pywim/blob/master/notebooks/presentations/scipyla2015/img/dww-layout.png?raw=true" alt="Examples of vehicle layouts"/>
<center><figcaption>Example layouts representing heavy-vehicle classes</figcaption></center>
</figure>
To classify a vehicle, the system builds a layout for the measured vehicle, compares it with the reference layouts, and assigns the class whose reference layout is closest.
This method performs poorly in pure Python. To address that, the numba library was used, making it roughly 100 times faster. The algorithm had to be adapted: before the comparisons, the vehicle layout and the reference-class layout are converted into numbers, so the comparison function can be marked for compilation in nopython mode. The closer the result is to 0, the closer the vehicle layout is to the reference layout.
End of explanation
"""
# synthetic data
df_weight = pd.DataFrame({
'a1': np.ones(200), 'a2': np.ones(200), 'target': np.ones(200)
})
df_weight.loc[:100, ['a1', 'a2']] = 8000
df_weight.loc[100:, ['a1', 'a2']] = 10000
df_weight['a1'] += np.random.random(200)*1000
df_weight['a2'] += np.random.random(200)*1000
df_weight.loc[:100, ['target']] = 8000
df_weight.loc[100:, ['target']] = 10000
r2 = {}
c = {}
for s in ['a1', 'a2']:
    # regression through the origin: the fitted slope is used as the
    # calibration constant (sm.add_constant would be needed on the
    # predictor to also fit an intercept)
    model = sm.OLS(df_weight['target'], df_weight[s])
predict = model.fit()
r2[s] = [predict._results.rsquared]
c[s] = predict.params[s]
# plot
x = df_weight['a1']
y = df_weight['target']
x_lim_max = df_weight['a1'].max()
x_lim_max *= 1.2
x_lim_min = df_weight['a1'].min()
x_lim_min *= 0.8
line_base = np.linspace(x_lim_min, x_lim_max, 100)
for i, s in enumerate(['a1', 'a2']):
f = plt.figure()
    plt.title('Weighing values, sensor %s' % s)
    plt.xlabel('Computed value')
    plt.ylabel('Target value')
plt.plot(df_weight[s], df_weight['target'], 'ro')
plt.plot(line_base, line_base*c[s])
f.show()
print('R2', r2)
print('CC', c)
def score_95_calc(metric_score, y, y_pred):
if y.shape[0] < 1:
print('size calc 0')
return 0.0
y_true = np.array([True] * y.shape[0])
lb = y - y * 0.05
ub = y + y * 0.05
y_pred_95 = (lb < y_pred) == (y_pred < ub)
y_pred_95 = y_pred_95 == True
return metric_score(y_true, y_pred_95)
def score_95_base(metric_score, estimator, X_test, y_test):
if y_test.shape[0] < 1:
print('size base 0')
return 0.0
y_pred = estimator.predict(X_test)
return score_95_calc(metric_score, y_test, y_pred)
def score_95_accuracy(estimator, X, y):
return score_95_base(metrics.accuracy_score, estimator, X, y)
def score_95_precision(estimator, X, y):
return score_95_base(metrics.precision_score, estimator, X, y)
def score_95_recall(estimator, X, y):
return score_95_base(metrics.recall_score, estimator, X, y)
def score_95_f1_score(estimator, X, y):
return score_95_base(metrics.f1_score, estimator, X, y)
df_weight_cc = df_weight[['a1', 'a2']].copy()
for s in ['a1', 'a2']:
df_weight_cc[s] *= c[s]
df_gross_weight = df_weight_cc.mean(axis=1)
for _m_name, _metric in [
('accuracy', metrics.accuracy_score),
('precision', metrics.precision_score),
('recall', metrics.recall_score),
('f1 score', metrics.f1_score),
]:
print(
('%s:' % _m_name).ljust(22, ' '),
score_95_calc(_metric, df_weight['target'], df_gross_weight)
)
"""
Explanation: Calibration of the weighing calculations
Periodic calibration of weighing systems is very important for keeping the computed weights within a low error margin. This step can be supported by the ordinary least squares (OLS) linear regression method from the statsmodels library, which provides, among other things, the coefficient of determination (R²) of the fitted regression. The scikit-learn library can also be used here to help analyze the results. To exercise these features, noisy synthetic weighing data are used, simulating the measurement errors of 100 passes of two trucks of known weight.
End of explanation
"""
|
geography-munich/sciprog | material/sub/jrjohansson/Lecture-6B-HPC.ipynb | apache-2.0 | %matplotlib inline
import matplotlib.pyplot as plt
"""
Explanation: Lecture 6B - Tools for high-performance computing applications
J.R. Johansson (jrjohansson at gmail.com)
The latest version of this IPython notebook lecture is available at http://github.com/jrjohansson/scientific-python-lectures.
The other notebooks in this lecture series are indexed at http://jrjohansson.github.io.
End of explanation
"""
import multiprocessing
import os
import time
import numpy
def task(args):
print("PID =", os.getpid(), ", args =", args)
return os.getpid(), args
task("test")
pool = multiprocessing.Pool(processes=4)
result = pool.map(task, [1,2,3,4,5,6,7,8])
result
"""
Explanation: multiprocessing
Python has a built-in process-based library for concurrent computing, called multiprocessing.
End of explanation
"""
from IPython.parallel import Client
cli = Client()
"""
Explanation: The multiprocessing package is very useful for highly parallel tasks that do not need to communicate with each other, other than when sending the initial data to the pool of processes and when collecting the final results.
IPython parallel
IPython includes a very interesting and versatile parallel computing environment, which is very easy to use. It builds on the concept of IPython engines and controllers, which one can connect to and submit tasks to. To get started using this framework for parallel computing, one first has to start up an IPython cluster of engines. The easiest way to do this is to use the ipcluster command,
$ ipcluster start -n 4
Or, alternatively, from the "Clusters" tab on the IPython notebook dashboard page. This will start 4 IPython engines on the current host, which is useful for multicore systems. It is also possible to set up IPython clusters that span many nodes in a computing cluster. For more information about possible use cases, see the official documentation Using IPython for parallel computing.
To use the IPython cluster in our Python programs or notebooks, we start by creating an instance of IPython.parallel.Client:
End of explanation
"""
cli.ids
"""
Explanation: Using the 'ids' attribute we can retrieve a list of ids for the IPython engines in the cluster:
End of explanation
"""
def getpid():
""" return the unique ID of the current process """
import os
return os.getpid()
# first try it on the notebook process
getpid()
# run it on one of the engines
cli[0].apply_sync(getpid)
# run it on ALL of the engines at the same time
cli[:].apply_sync(getpid)
"""
Explanation: Each of these engines is ready to execute tasks. We can selectively run code on individual engines:
End of explanation
"""
dview = cli[:]
@dview.parallel(block=True)
def dummy_task(delay):
""" a dummy task that takes 'delay' seconds to finish """
import os, time
t0 = time.time()
pid = os.getpid()
time.sleep(delay)
t1 = time.time()
return [pid, t0, t1]
# generate random delay times for dummy tasks
delay_times = numpy.random.rand(4)
"""
Explanation: We can use this cluster of IPython engines to execute tasks in parallel. The easiest way to dispatch a function to different engines is to define the function with the decorator:
@view.parallel(block=True)
Here, view is the engine pool to which we want to dispatch the function (task). Once our function is defined this way, we can dispatch it to the engines using the map method of the resulting class (in Python, a decorator is a language construct which automatically wraps a function into another function or a class).
To see how all this works, let's look at an example:
End of explanation
"""
dummy_task.map(delay_times)
"""
Explanation: Now, to map the function dummy_task to the random delay time data, we use the map method in dummy_task:
End of explanation
"""
def visualize_tasks(results):
res = numpy.array(results)
fig, ax = plt.subplots(figsize=(10, res.shape[1]))
yticks = []
yticklabels = []
tmin = min(res[:,1])
for n, pid in enumerate(numpy.unique(res[:,0])):
yticks.append(n)
yticklabels.append("%d" % pid)
for m in numpy.where(res[:,0] == pid)[0]:
ax.add_patch(plt.Rectangle((res[m,1] - tmin, n-0.25),
res[m,2] - res[m,1], 0.5, color="green", alpha=0.5))
ax.set_ylim(-.5, n+.5)
ax.set_xlim(0, max(res[:,2]) - tmin + 0.)
ax.set_yticks(yticks)
ax.set_yticklabels(yticklabels)
ax.set_ylabel("PID")
ax.set_xlabel("seconds")
delay_times = numpy.random.rand(64)
result = dummy_task.map(delay_times)
visualize_tasks(result)
"""
Explanation: Let's do the same thing again with many more tasks and visualize how these tasks are executed on different IPython engines:
End of explanation
"""
lbview = cli.load_balanced_view()
@lbview.parallel(block=True)
def dummy_task_load_balanced(delay):
""" a dummy task that takes 'delay' seconds to finish """
import os, time
t0 = time.time()
pid = os.getpid()
time.sleep(delay)
t1 = time.time()
return [pid, t0, t1]
result = dummy_task_load_balanced.map(delay_times)
visualize_tasks(result)
"""
Explanation: That's a nice and easy parallelization! We can see that we utilize all four engines quite well.
But one shortcoming so far is that the tasks are not load balanced, so one engine might be idle while others still have more tasks to work on.
However, the IPython parallel environment provides a number of alternative "views" of the engine cluster, and there is a view that provides load balancing as well (above we have used the "direct view", which is why we called it "dview").
To obtain a load balanced view we simply use the load_balanced_view method in the engine cluster client instance cli:
End of explanation
"""
%%file mpitest.py
from mpi4py import MPI
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
if rank == 0:
data = [1.0, 2.0, 3.0, 4.0]
comm.send(data, dest=1, tag=11)
elif rank == 1:
data = comm.recv(source=0, tag=11)
print "rank =", rank, ", data =", data
!mpirun -n 2 python mpitest.py
"""
Explanation: In the example above we can see that the engine cluster is a bit more efficiently used, and the time to completion is shorter than in the previous example.
Further reading
There are many other ways to use the IPython parallel environment. The official documentation has a nice guide:
http://ipython.org/ipython-doc/dev/parallel/
MPI
When more communication between processes is required, sophisticated solutions such as MPI and OpenMP are often needed. MPI is a process-based parallel-processing library/protocol, and can be used in Python programs through the mpi4py package:
http://mpi4py.scipy.org/
To use the mpi4py package we include MPI from mpi4py:
from mpi4py import MPI
An MPI Python program must be started using the mpirun -n N command, where N is the number of processes that should be included in the process group.
Note that the IPython parallel environment also has support for MPI, but to begin with we will use mpi4py and mpirun in the following examples.
Example 1
End of explanation
"""
%%file mpi-numpy-array.py
from mpi4py import MPI
import numpy
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
if rank == 0:
data = numpy.random.rand(10)
comm.Send(data, dest=1, tag=13)
elif rank == 1:
data = numpy.empty(10, dtype=numpy.float64)
comm.Recv(data, source=0, tag=13)
print "rank =", rank, ", data =", data
!mpirun -n 2 python mpi-numpy-array.py
"""
Explanation: Example 2
Send a numpy array from one process to another:
End of explanation
"""
# prepare some random data
N = 16
A = numpy.random.rand(N, N)
numpy.save("random-matrix.npy", A)
x = numpy.random.rand(N)
numpy.save("random-vector.npy", x)
%%file mpi-matrix-vector.py
from mpi4py import MPI
import numpy
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
p = comm.Get_size()
def matvec(comm, A, x):
m = A.shape[0] // p  # integer number of rows per rank
y_part = numpy.dot(A[rank * m:(rank+1)*m], x)
y = numpy.zeros_like(x)
comm.Allgather([y_part, MPI.DOUBLE], [y, MPI.DOUBLE])
return y
A = numpy.load("random-matrix.npy")
x = numpy.load("random-vector.npy")
y_mpi = matvec(comm, A, x)
if rank == 0:
y = numpy.dot(A, x)
print(y_mpi)
print "sum(y - y_mpi) =", (y - y_mpi).sum()
!mpirun -n 4 python mpi-matrix-vector.py
"""
Explanation: Example 3: Matrix-vector multiplication
End of explanation
"""
# prepare some random data
N = 128
a = numpy.random.rand(N)
numpy.save("random-vector.npy", a)
%%file mpi-psum.py
from mpi4py import MPI
import numpy as np
def psum(a):
r = MPI.COMM_WORLD.Get_rank()
size = MPI.COMM_WORLD.Get_size()
m = len(a) // size  # integer chunk length per rank
locsum = np.sum(a[r*m:(r+1)*m])
rcvBuf = np.array(0.0, 'd')
MPI.COMM_WORLD.Allreduce([locsum, MPI.DOUBLE], [rcvBuf, MPI.DOUBLE], op=MPI.SUM)
return rcvBuf
a = np.load("random-vector.npy")
s = psum(a)
if MPI.COMM_WORLD.Get_rank() == 0:
print "sum =", s, ", numpy sum =", a.sum()
!mpirun -n 4 python mpi-psum.py
"""
Explanation: Example 4: Sum of the elements in a vector
End of explanation
"""
N_core = multiprocessing.cpu_count()
print("This system has %d cores" % N_core)
"""
Explanation: Further reading
http://mpi4py.scipy.org
http://mpi4py.scipy.org/docs/usrman/tutorial.html
https://computing.llnl.gov/tutorials/mpi/
OpenMP
What about OpenMP? OpenMP is a standard and widely used thread-based parallel API that unfortunately is not useful directly in Python. The reason is that the CPython implementation uses a global interpreter lock (GIL), making it impossible to run several Python threads simultaneously. Threads are therefore not useful for parallel computing in Python, unless they are only used to wrap compiled code that does the OpenMP parallelization (NumPy can do something like that).
This is clearly a limitation in the Python interpreter, and as a consequence all parallelization in Python must use processes (not threads).
However, there is a way around this that is not that painful. When calling out to compiled code the GIL is released, and it is possible to write Python-like code in Cython where we can selectively release the GIL and do OpenMP computations.
End of explanation
"""
%load_ext cythonmagic
%%cython -f -c-fopenmp --link-args=-fopenmp -c-g
cimport cython
cimport numpy
from cython.parallel import prange, parallel
cimport openmp
def cy_openmp_test():
cdef int n, N
# release GIL so that we can use OpenMP
with nogil, parallel():
N = openmp.omp_get_num_threads()
n = openmp.omp_get_thread_num()
with gil:
print("Number of threads %d: thread number %d" % (N, n))
cy_openmp_test()
"""
Explanation: Here is a simple example that shows how OpenMP can be used via cython:
End of explanation
"""
# prepare some random data
N = 4 * N_core
M = numpy.random.rand(N, N)
x = numpy.random.rand(N)
y = numpy.zeros_like(x)
"""
Explanation: Example: matrix vector multiplication
End of explanation
"""
%%cython
cimport cython
cimport numpy
import numpy
@cython.boundscheck(False)
@cython.wraparound(False)
def cy_matvec(numpy.ndarray[numpy.float64_t, ndim=2] M,
numpy.ndarray[numpy.float64_t, ndim=1] x,
numpy.ndarray[numpy.float64_t, ndim=1] y):
cdef int i, j, n = len(x)
for i from 0 <= i < n:
for j from 0 <= j < n:
y[i] += M[i, j] * x[j]
return y
# check that we get the same results
y = numpy.zeros_like(x)
cy_matvec(M, x, y)
numpy.dot(M, x) - y
%timeit numpy.dot(M, x)
%timeit cy_matvec(M, x, y)
"""
Explanation: Let's first look at a simple implementation of matrix-vector multiplication in Cython:
End of explanation
"""
%%cython -f -c-fopenmp --link-args=-fopenmp -c-g
cimport cython
cimport numpy
from cython.parallel import parallel
cimport openmp
@cython.boundscheck(False)
@cython.wraparound(False)
def cy_matvec_omp(numpy.ndarray[numpy.float64_t, ndim=2] M,
numpy.ndarray[numpy.float64_t, ndim=1] x,
numpy.ndarray[numpy.float64_t, ndim=1] y):
cdef int i, j, n = len(x), N, r, m
# release GIL, so that we can use OpenMP
with nogil, parallel():
N = openmp.omp_get_num_threads()
r = openmp.omp_get_thread_num()
m = n / N
for i from 0 <= i < m:
for j from 0 <= j < n:
y[r * m + i] += M[r * m + i, j] * x[j]
return y
# check that we get the same results
y = numpy.zeros_like(x)
cy_matvec_omp(M, x, y)
numpy.dot(M, x) - y
%timeit numpy.dot(M, x)
%timeit cy_matvec_omp(M, x, y)
"""
Explanation: The Cython implementation here is a bit slower than numpy.dot, but not by much, so if we can use multiple cores with OpenMP it should be possible to beat the performance of numpy.dot.
End of explanation
"""
N_vec = numpy.arange(25, 2000, 25) * N_core
duration_ref = numpy.zeros(len(N_vec))
duration_cy = numpy.zeros(len(N_vec))
duration_cy_omp = numpy.zeros(len(N_vec))
for idx, N in enumerate(N_vec):
M = numpy.random.rand(N, N)
x = numpy.random.rand(N)
y = numpy.zeros_like(x)
t0 = time.time()
numpy.dot(M, x)
duration_ref[idx] = time.time() - t0
t0 = time.time()
cy_matvec(M, x, y)
duration_cy[idx] = time.time() - t0
t0 = time.time()
cy_matvec_omp(M, x, y)
duration_cy_omp[idx] = time.time() - t0
fig, ax = plt.subplots(figsize=(12, 6))
ax.loglog(N_vec, duration_ref, label='numpy')
ax.loglog(N_vec, duration_cy, label='cython')
ax.loglog(N_vec, duration_cy_omp, label='cython+openmp')
ax.legend(loc=2)
ax.set_yscale("log")
ax.set_ylabel("matrix-vector multiplication duration")
ax.set_xlabel("matrix size");
"""
Explanation: Now, this implementation is much slower than numpy.dot for this problem size, because of overhead associated with OpenMP and threading, etc. But let's look at how the different implementations compare with larger matrix sizes:
End of explanation
"""
((duration_ref / duration_cy_omp)[-10:]).mean()
"""
Explanation: For large problem sizes the cython+OpenMP implementation is faster than numpy.dot.
With this simple implementation, the speedup for large problem sizes is about:
End of explanation
"""
N_core
"""
Explanation: Obviously one could do a better job with more effort, since the theoretical limit of the speed-up is:
End of explanation
"""
%%file opencl-dense-mv.py
import pyopencl as cl
import numpy
import time
# problem size
n = 10000
# platform
platform_list = cl.get_platforms()
platform = platform_list[0]
# device
device_list = platform.get_devices()
device = device_list[0]
if False:
print("Platform name:" + platform.name)
print("Platform version:" + platform.version)
print("Device name:" + device.name)
print("Device type:" + cl.device_type.to_string(device.type))
print("Device memory: " + str(device.global_mem_size//1024//1024) + ' MB')
print("Device max clock speed:" + str(device.max_clock_frequency) + ' MHz')
print("Device compute units:" + str(device.max_compute_units))
# context
ctx = cl.Context([device]) # or we can use cl.create_some_context()
# command queue
queue = cl.CommandQueue(ctx)
# kernel
KERNEL_CODE = """
//
// Matrix-vector multiplication: r = m * v
//
#define N %(mat_size)d
__kernel
void dmv_cl(__global float *m, __global float *v, __global float *r)
{
int i, gid = get_global_id(0);
r[gid] = 0;
for (i = 0; i < N; i++)
{
r[gid] += m[gid * N + i] * v[i];
}
}
"""
kernel_params = {"mat_size": n}
program = cl.Program(ctx, KERNEL_CODE % kernel_params).build()
# data
A = numpy.random.rand(n, n)
x = numpy.random.rand(n, 1)
# host buffers
h_y = numpy.empty(numpy.shape(x)).astype(numpy.float32)
h_A = numpy.real(A).astype(numpy.float32)
h_x = numpy.real(x).astype(numpy.float32)
# device buffers
mf = cl.mem_flags
d_A_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=h_A)
d_x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=h_x)
d_y_buf = cl.Buffer(ctx, mf.WRITE_ONLY, size=h_y.nbytes)
# execute OpenCL code
t0 = time.time()
event = program.dmv_cl(queue, h_y.shape, None, d_A_buf, d_x_buf, d_y_buf)
event.wait()
cl.enqueue_copy(queue, h_y, d_y_buf)
t1 = time.time()
print "opencl elapsed time =", (t1-t0)
# Same calculation with numpy
t0 = time.time()
y = numpy.dot(h_A, h_x)
t1 = time.time()
print "numpy elapsed time =", (t1-t0)
# see if the results are the same
print "max deviation =", numpy.abs(y-h_y).max()
!python opencl-dense-mv.py
"""
Explanation: Further reading
http://openmp.org
http://docs.cython.org/src/userguide/parallelism.html
OpenCL
OpenCL is an API for heterogeneous computing, for example using GPUs for numerical computations. There is a Python package called pyopencl that allows OpenCL code to be compiled, loaded and executed on the compute units completely from within Python. This is a nice way to work with OpenCL, because the time-consuming computations are done on the compute units in compiled code, and in this approach Python only serves as a control language.
End of explanation
"""
%load_ext version_information
%version_information numpy, mpi4py, Cython
"""
Explanation: Further reading
http://mathema.tician.de/software/pyopencl
Versions
End of explanation
"""
|
nre-aachen/GeMpy | Prototype Notebook/.ipynb_checkpoints/Example_1_Sandstone_Project-checkpoint.ipynb | mit | # Importing
import theano.tensor as T
import sys, os
sys.path.append("../GeMpy")
# Importing GeMpy modules
import GeMpy_core
import Visualization
# Reloading (only for development purposes)
import importlib
importlib.reload(GeMpy_core)
importlib.reload(Visualization)
# Useful packages
import numpy as np
import pandas as pn
import matplotlib.pyplot as plt
# This was to choose the gpu
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
# Default printing options
np.set_printoptions(precision = 6, linewidth= 130, suppress = True)
%matplotlib inline
#%matplotlib notebook
"""
Explanation: Example 1: Sandstone Model
End of explanation
"""
# Setting extent, grid and compile
# Setting the extent
sandstone = GeMpy_core.GeMpy()
# Create Data class with raw data
sandstone.import_data( [696000,747000,6863000,6950000,-20000, 2000],[ 40, 40, 80],
path_f = os.pardir+"/input_data/a_Foliations.csv",
path_i = os.pardir+"/input_data/a_Points.csv")
"""
Explanation: First we make a GeMpy instance with most of the parameters left at their defaults (except range, which is given by the project). Then we also fix the extent and the resolution of the domain we want to interpolate. Finally we compile the function, which is only needed once every time we open the project (the Theano developers are working on allowing compiled functions to be loaded, although in our case it is not a big deal).
General note. So far the rescaling factor is calculated for all series at the same time. GeoModeller does it individually for every potential field. I still have to look more closely at what this parameter exactly means.
End of explanation
"""
sandstone.Data.Foliations.head()
"""
Explanation: All input data is stored in pandas dataframes under self.Data.Interfaces and self.Data.Foliations:
End of explanation
"""
sandstone.Data.set_series({"EarlyGranite_Series":sandstone.Data.formations[-1],
"BIF_Series":(sandstone.Data.formations[0], sandstone.Data.formations[1]),
"SimpleMafic_Series":sandstone.Data.formations[2]},
order = ["EarlyGranite_Series",
"BIF_Series",
"SimpleMafic_Series"])
"""
Explanation: In case of disconformities, we can define which formation belongs to which series using a dictionary. Before Python 3.6 it is important to specify the order of the series explicitly, otherwise it is random.
End of explanation
"""
sandstone.Data.Foliations.head()
"""
Explanation: Now in the data frame we should have the series column too
End of explanation
"""
# Create a class Grid so far just regular grid
sandstone.create_grid()
sandstone.Grid.grid
"""
Explanation: The next step is the creation of a grid; so far only regular grids are supported. By default it takes the extent and the resolution given in the import_data method.
End of explanation
"""
sandstone.Plot.plot_data(series = sandstone.Data.series.columns.values[1])
"""
Explanation: Plotting raw data
The object Plot is created automatically as we call the methods above. This object contains some methods to plot the data and the results.
It is possible to plot a 2D projection of the data in a specific direction using the following method. It is also possible to choose the series you want to plot. Additionally, all the keyword arguments of seaborn's lmplot can be used.
End of explanation
"""
sandstone.set_interpolator()
"""
Explanation: Class Interpolator
This class will take the data from the class Data and calculate the potential fields and the block model. We can pass all the interpolation variables as keyword arguments; I recommend not touching them if you do not know what you are doing, since the default values should be good enough. Also, the first time we execute the method the Theano function is compiled, so it can take a bit of time.
End of explanation
"""
sandstone.Plot.plot_potential_field(10, n_pf=0)
"""
Explanation: Now we can visualize the individual potential fields as follows:
Early granite
End of explanation
"""
sandstone.Plot.plot_potential_field(13, n_pf=1, cmap = "magma", plot_data = True,
verbose = 5 )
"""
Explanation: BIF Series
End of explanation
"""
sandstone.Plot.plot_potential_field(10, n_pf=2)
"""
Explanation: Simple mafic
End of explanation
"""
# Reset the block
sandstone.Interpolator.block.set_value(np.zeros_like(sandstone.Grid.grid[:,0]))
# Compute the block
sandstone.Interpolator.compute_block_model([0,1,2], verbose = 0)
sandstone.Interpolator.block.get_value(), np.unique(sandstone.Interpolator.block.get_value())
"""
Explanation: Optimizing the export of lithologies
But usually the final result we want is the final block model. The method compute_block_model will compute the block model, updating the attribute block. This attribute is a Theano shared variable whose (raveled) 3D array can be retrieved with the method get_value().
End of explanation
"""
sandstone.Plot.plot_block_section(13, interpolation = 'nearest', direction='y')
plt.savefig("sandstone_example.png")
"""
Explanation: And again, after computing the model, we can use the Plot object's method plot_block_section to see a 2D section of the model.
End of explanation
"""
"""Export model to VTK
Export the geology blocks to VTK for visualisation of the entire 3-D model in an
external VTK viewer, e.g. Paraview.
..Note:: Requires pyevtk, available for free on: https://github.com/firedrakeproject/firedrake/tree/master/python/evtk
**Optional keywords**:
- *vtk_filename* = string : filename of VTK file (default: output_name)
- *data* = np.array : data array to export to VKT (default: entire block model)
"""
vtk_filename = "noddyFunct2"
extent_x = 10
extent_y = 10
extent_z = 10
delx = 0.2
dely = 0.2
delz = 0.2
from pyevtk.hl import gridToVTK
# Coordinates
x = np.arange(0, extent_x + 0.1*delx, delx, dtype='float64')
y = np.arange(0, extent_y + 0.1*dely, dely, dtype='float64')
z = np.arange(0, extent_z + 0.1*delz, delz, dtype='float64')
# self.block = np.swapaxes(self.block, 0, 2)
gridToVTK(vtk_filename, x, y, z, cellData = {"geology" : sol})
"""
Explanation: Export to vtk. (Under development)
End of explanation
"""
%%timeit
# Reset the block
sandstone.Interpolator.block.set_value(np.zeros_like(sandstone.Grid.grid[:,0]))
# Compute the block
sandstone.Interpolator.compute_block_model([0,1,2], verbose = 0)
"""
Explanation: Performance Analysis
One of the advantages of Theano is the possibility to create a full profile of the function. This has to be enabled at the time the function is created. At the moment it should be active (the downside is a longer compilation time and, I think, a small computational overhead, so be careful if you need a fast call).
CPU
The following profile is with a 2 core laptop. Nothing spectacular.
End of explanation
"""
sandstone.Interpolator._interpolate.profile.summary()
"""
Explanation: Looking at the profile we can see that most of the time is spent in the pow operation (the exponential). This is probably because the extent is huge and we are computing it with too much precision. I am working on it.
End of explanation
"""
%%timeit
# Reset the block
sandstone.block.set_value(np.zeros_like(sandstone.grid[:,0]))
# Compute the block
sandstone.compute_block_model([0,1,2], verbose = 0)
sandstone.block_export.profile.summary()
"""
Explanation: GPU
End of explanation
"""
|
AshleySetter/datahandling | SDE_Solution_Derivation.ipynb | mit | def a_q(t, v, q):
return v
def a_v(t, v, q):
return -(Gamma0 - Omega0*eta*q**2)*v - Omega0**2*q
def b_v(t, v, q):
return np.sqrt(2*Gamma0*k_b*T_0/m)
"""
Explanation: Equation of motion - SDE to be solved
$\ddot{q}(t) + \Gamma_0\dot{q}(t) + \Omega_0^2 q(t) - \dfrac{1}{m} F(t) = 0 $
where q = x, y or z
Where $F(t) = \mathcal{F}_{fluct}(t) + F_{feedback}(t)$
Taken from page 46 of 'Dynamics of optically levitated nanoparticles in high vacuum' - Thesis by Jan Gieseler
Using $\mathcal{F}_{fluct}(t) = \sqrt{2m \Gamma_0 k_B T_0}\dfrac{dW(t)}{dt}$
and $F_{feedback}(t) = \Omega_0 \eta q^2 \dot{q}$
Taken from page 49 of 'Dynamics of optically levitated nanoparticles in high vacuum' - Thesis by Jan Gieseler
we get the following SDE:
$\dfrac{d^2q(t)}{dt^2} + (\Gamma_0 - \Omega_0 \eta q(t)^2)\dfrac{dq(t)}{dt} + \Omega_0^2 q(t) - \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} \dfrac{dW(t)}{dt} = 0$
Split into two first-order ODEs/SDEs,
letting $v = \dfrac{dq}{dt}$
$\dfrac{dv(t)}{dt} + (\Gamma_0 - \Omega_0 \eta q(t)^2)v + \Omega_0^2 q(t) - \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} \dfrac{dW(t)}{dt} = 0$
therefore
$\dfrac{dv(t)}{dt} = -(\Gamma_0 - \Omega_0 \eta q(t)^2)v - \Omega_0^2 q(t) + \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} \dfrac{dW(t)}{dt} $
$v = \dfrac{dq}{dt}$ therefore $dq = v~dt$
\begin{align}
dq&=v\,dt\\
dv&=[-(\Gamma_0-\Omega_0 \eta q(t)^2)v(t) - \Omega_0^2 q(t)]\,dt + \sqrt{\frac{2\Gamma_0 k_B T_0}m}\,dW
\end{align}
Apply Milstein Method to solve
Consider the autonomous Itō stochastic differential equation
${\mathrm {d}}X_{t}=a(X_{t})\,{\mathrm {d}}t+b(X_{t})\,{\mathrm {d}}W_{t}$
Taking $X_t = q_t$ for the 1st equation above (i.e. $dq = v~dt$) we get:
$$ a(q_t) = v $$
$$ b(q_t) = 0 $$
Taking $X_t = v_t$ for the 2nd equation above (i.e. $dv = ...$) we get:
$$a(v_t) = -(\Gamma_0-\Omega_0\eta q(t)^2)v - \Omega_0^2 q(t)$$
$$b(v_t) = \sqrt{\dfrac{2\Gamma_0 k_B T_0}m}$$
Since $b'(v_{t})=0$, the diffusion term does not depend on $v_{t}$, and Milstein's method in this case is therefore equivalent to the Euler–Maruyama method.
We then construct these functions in python:
End of explanation
"""
Gamma0 = 4000 # radians/second
Omega0 = 75e3*2*np.pi # radians/second
eta = 0.5e7
T_0 = 300 # K
k_b = scipy.constants.Boltzmann # J/K
m = 3.1e-19 # KG
"""
Explanation: Using values obtained from fitting to data from a real particle we set the following constant values describing the system. Cooling can be switched off by setting $\eta = 0$.
End of explanation
"""
dt = 1e-10
tArray = np.arange(0, 100e-6, dt)
print("{} Hz".format(1/dt))
"""
Explanation: partition the interval [0, T] into N equal subintervals of width $\Delta t>0$:
$0=\tau_{0}<\tau_{1}<\dots<\tau_{N}=T \text{ with } \tau_{n}:=n\Delta t \text{ and } \Delta t=\frac{T}{N}$
End of explanation
"""
q0 = 0
v0 = 0
q = np.zeros_like(tArray)
v = np.zeros_like(tArray)
q[0] = q0
v[0] = v0
"""
Explanation: set $Y_{0}=x_{0}$
End of explanation
"""
np.random.seed(88)
dwArray = np.random.normal(0, np.sqrt(dt), len(tArray)) # independent and identically distributed normal random variables with expected value 0 and variance dt
"""
Explanation: Generate independent and identically distributed normal random variables with expected value 0 and variance dt
End of explanation
"""
#%%timeit
for n, t in enumerate(tArray[:-1]):
dw = dwArray[n]
v[n+1] = v[n] + a_v(t, v[n], q[n])*dt + b_v(t, v[n], q[n])*dw + 0
q[n+1] = q[n] + a_q(t, v[n], q[n])*dt + 0
"""
Explanation: Apply Milstein's method (Euler Maruyama if $b'(Y_{n}) = 0$ as is the case here):
recursively define $Y_{n}$ for $ 1\leq n\leq N $ by
$Y_{n+1}=Y_{n}+a(Y_{n})\Delta t+b(Y_{n})\Delta W_{n}+\frac{1}{2}b(Y_{n})b'(Y_{n})\left((\Delta W_{n})^{2}-\Delta t\right)$
Perform this for the 2 first order differential equations:
End of explanation
"""
plt.plot(tArray*1e6, v)
plt.xlabel("t (us)")
plt.ylabel("v")
plt.plot(tArray*1e6, q)
plt.xlabel("t (us)")
plt.ylabel("q")
"""
Explanation: We now have an array of positions, $q$, and velocities, $v$, over time $t$.
End of explanation
"""
q0 = 0
v0 = 0
X = np.zeros([len(tArray), 2])
X[0, 0] = q0
X[0, 1] = v0
def a(t, X):
q, v = X
return np.array([v, -(Gamma0 - Omega0*eta*q**2)*v - Omega0**2*q])
def b(t, X):
q, v = X
return np.array([0, np.sqrt(2*Gamma0*k_b*T_0/m)])
%%timeit
S = np.array([-1,1])
for n, t in enumerate(tArray[:-1]):
dw = dwArray[n]
K1 = a(t, X[n])*dt + b(t, X[n])*(dw - S*np.sqrt(dt))
Xh = X[n] + K1
K2 = a(t, Xh)*dt + b(t, Xh)*(dw + S*np.sqrt(dt))
X[n+1] = X[n] + 0.5 * (K1+K2)
q = X[:, 0]
v = X[:, 1]
plt.plot(tArray*1e6, v)
plt.xlabel("t (us)")
plt.ylabel("v")
plt.plot(tArray*1e6, q)
plt.xlabel("t (us)")
plt.ylabel("q")
"""
Explanation: Alternatively we can use a derivative-free version of Milstein's method, a two-stage Runge-Kutta-like scheme, documented on Wikipedia (https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_method_%28SDE%29) or in the original paper at https://arxiv.org/pdf/1210.0933.pdf.
End of explanation
"""
def a_q(t, v, q):
return v
def a_v(t, v, q):
return -(Gamma0 + deltaGamma)*v - Omega0**2*q
def b_v(t, v, q):
return np.sqrt(2*Gamma0*k_b*T_0/m)
"""
Explanation: The form of $F_{feedback}(t)$ is still questionable
On page 49 of 'Dynamics of optically levitated nanoparticles in high vacuum' - Thesis by Jan Gieseler he uses the form: $F_{feedback}(t) = \Omega_0 \eta q^2 \dot{q}$
On page 2 of 'Parametric feedback cooling of levitated optomechanics in a parabolic mirror trap', a paper by Jamie and Muddassar, they use the form: $F_{feedback}(t) = \dfrac{\Omega_0 \eta q^2 \dot{q}}{q_0^2}$ where $q_0$ is the amplitude of the motion: $q(t) = q_0\sin(\omega_0 t)$
However it always shows up as a term $\delta \Gamma$ like so:
$\dfrac{d^2q(t)}{dt^2} + (\Gamma_0 + \delta \Gamma)\dfrac{dq(t)}{dt} + \Omega_0^2 q(t) - \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} \dfrac{dW(t)}{dt} = 0$
By fitting to data we extract the following 3 parameters:
1) $A = \gamma^2 \dfrac{k_B T_0}{\pi m}\Gamma_0 $
Where:
$\gamma$ is the conversion factor between Volts and nanometres. This parameterises the amount of light / number of photons collected from the nanoparticle. With unchanged alignment and the same particle this should remain constant with changes in pressure.
$m$ is the mass of the particle, a constant
$T_0$ is the temperature of the environment
$\Gamma_0$ the damping due to the environment only
2) $\Omega_0$ - the natural frequency at this trapping power
3) $\Gamma$ - the total damping on the system including environment and feedback etc...
By taking a reference save with no cooling we have $\Gamma = \Gamma_0$ and therefore we can extract $A' = \gamma^2 \dfrac{k_B T_0}{\pi m}$. Since $A'$ should be constant with pressure, we can extract $\Gamma_0$ at any pressure (if we have a reference save and therefore a value of $A'$) and hence $\delta \Gamma$, the damping due to cooling. We can then plug this into our SDE in order to include cooling in the model.
For any dataset at any pressure we can do:
$\Gamma_0 = \dfrac{A}{A'}$
And then $\delta \Gamma = \Gamma - \Gamma_0$
Using this form and the same derivation as above we arrive at the following form of the 2 1st order differential equations:
\begin{align}
dq&=v\,dt\\
dv&=[-(\Gamma_0 + \delta \Gamma)v(t) - \Omega_0^2 q(t)]\,dt + \sqrt{\frac{2\Gamma_0 k_B T_0}m}\,dW
\end{align}
End of explanation
"""
Gamma0 = 15 # radians/second
deltaGamma = 2200
Omega0 = 75e3*2*np.pi # radians/second
eta = 0.5e7
T_0 = 300 # K
k_b = scipy.constants.Boltzmann # J/K
m = 3.1e-19 # KG
dt = 1e-10
tArray = np.arange(0, 100e-6, dt)
q0 = 0
v0 = 0
q = np.zeros_like(tArray)
v = np.zeros_like(tArray)
q[0] = q0
v[0] = v0
np.random.seed(88)
dwArray = np.random.normal(0, np.sqrt(dt), len(tArray)) # independent and identically distributed normal random variables with expected value 0 and variance dt
for n, t in enumerate(tArray[:-1]):
dw = dwArray[n]
v[n+1] = v[n] + a_v(t, v[n], q[n])*dt + b_v(t, v[n], q[n])*dw + 0
q[n+1] = q[n] + a_q(t, v[n], q[n])*dt + 0
plt.plot(tArray*1e6, v)
plt.xlabel("t (us)")
plt.ylabel("v")
plt.plot(tArray*1e6, q)
plt.xlabel("t (us)")
plt.ylabel("q")
"""
Explanation: Values below are taken from a ~1e-2 mbar cooled save.
End of explanation
"""
|
davicsilva/dsintensive | notebooks/capstone-flightDelay.ipynb | apache-2.0 | from datetime import datetime
# Pandas and NumPy
import pandas as pd
import numpy as np
# Matplotlib for additional customization
from matplotlib import pyplot as plt
%matplotlib inline
# Seaborn for plotting and styling
import seaborn as sns
# 1. Flight delay: any flight with (real_departure - planned_departure >= 15 minutes)
# 2. The Brazilian Federal Agency for Civil Aviation (ANAC) does not define exactly what is a "flight delay" (in minutes)
# 3. Anyway, the ANAC has a resolution for this subject: https://goo.gl/YBwbMy (last access: nov, 15th, 2017)
# ---
# DELAY, for this analysis, is defined as greater than 15 minutes (local flights only)
DELAY = 15
"""
Explanation: Capstone Project - Flight Delays
Do weather events impact flight delays in Brazil?
See this notebook for the step-by-step dataset cleaning process:
https://github.com/davicsilva/dsintensive/blob/master/notebooks/flightDelayPrepData_v2.ipynb
End of explanation
"""
#[flights] dataset_01 => all "Active Regular Flights" from 2017, from january to september
#source: http://www.anac.gov.br/assuntos/dados-e-estatisticas/historico-de-voos
#Last access this website: nov, 14th, 2017
flights = pd.read_csv('data/arf2017ISO.csv', sep = ';', dtype = str)
# Convert dates "DD/MM/YYYY HH:MM" to ISO-8601-style "YYYY/MM/DD HH:MM:SS"
date_pattern = r"(?P<day>\d{2})/(?P<month>\d{2})/(?P<year>\d{4}) (?P<HOUR>\d{2}):(?P<MIN>\d{2})"
iso_repl = r"\g<year>/\g<month>/\g<day> \g<HOUR>:\g<MIN>:00"
for col in ['departure-est', 'departure-real', 'arrival-est', 'arrival-real']:
    flights[col] = flights[col].str.replace(date_pattern, iso_repl)
# Departure and Arrival columns: from 'object' to 'date' format
flights['departure-est'] = pd.to_datetime(flights['departure-est'], errors='ignore')
flights['departure-real'] = pd.to_datetime(flights['departure-real'], errors='ignore')
flights['arrival-est'] = pd.to_datetime(flights['arrival-est'], errors='ignore')
flights['arrival-real'] = pd.to_datetime(flights['arrival-real'], errors='ignore')
# translate the flight status from portuguese to english
flights['flight-status'] = flights[['flight-status']].apply(lambda row: row.str.replace("REALIZADO", "ACCOMPLISHED"), axis=1)
flights['flight-status'] = flights[['flight-status']].apply(lambda row: row.str.replace("CANCELADO", "CANCELED"), axis=1)
flights.head()
flights.size
flights.to_csv("flights_csv.csv")
"""
Explanation: 1 - Local flights dataset. For now, only flights from January to September, 2017
A note about date columns on this dataset
* In the original dataset (CSV file from ANAC), the date was not in ISO8601 format (e.g. '2017-10-31 09:03:00')
* To fix this, I used a regex (regular expression) to transform these columns directly in the CSV file
* The original date format was "31/10/2017 09:03" (October 31, 2017, 09:03)
End of explanation
"""
# See: https://stackoverflow.com/questions/37287938/sort-pandas-dataframe-by-value
#
df_departures = flights.groupby(['airport-A']).size().reset_index(name='number_departures')
df_departures.sort_values(by=['number_departures'], ascending=False, inplace=True)
df_departures
"""
Explanation: Some EDA tasks
End of explanation
"""
# Airports dataset: all brazilian public airports (updated until october, 2017)
airports = pd.read_csv('data/brazilianPublicAirports-out2017.csv', sep = ';', dtype= str)
airports.head()
# Merge "flights" dataset with "airports" in order to identify
# local flights (origin and destination are in Brazil)
flights = pd.merge(flights, airports, left_on="airport-A", right_on="airport", how='left')
flights = pd.merge(flights, airports, left_on="airport-B", right_on="airport", how='left')
flights.tail()
"""
Explanation: 2 - Local airports (list with all the ~600 brazilian public airports)
Source: https://goo.gl/mNFuPt (a XLS spreadsheet in portuguese; last access on nov, 15th, 2017)
End of explanation
"""
# ------------------------------------------------------------------
# List of codes (two letters) used to justify a delay on the flight
# - delayCodesShortlist.csv: list with YYY codes
# - delayCodesLongList.csv: list with XXX codes
# ------------------------------------------------------------------
delaycodes = pd.read_csv('data/delayCodesShortlist.csv', sep = ';', dtype = str)
delaycodesLongList = pd.read_csv('data/delayCodesLonglist.csv', sep = ';', dtype = str)
delaycodes.head()
"""
Explanation: 3 - List of codes (two letters) used when there was a flight delay (departure)
I have found two lists that define two-letter codes used by the aircraft crew to justify the delay of the flights: a short and a long one.
Source: https://goo.gl/vUC8BX (last access: nov, 15th, 2017)
End of explanation
"""
# Weather sample: load the CSV with weather historical data (from Campinas, SP, Brazil, 2017)
weather = pd.read_csv('data/DataScience-Intensive-weatherAtCampinasAirport-2017-Campinas_Airport_2017Weather.csv', \
sep = ',', dtype = str)
weather["date"] = weather["year"].map(str) + "-" + weather["month"].map(str) + "-" + weather["day"].map(str)
weather["date"] = pd.to_datetime(weather['date'],errors='ignore')
weather.head()
"""
Explanation: 4 - The Weather data from https://www.wunderground.com/history
From this website I captured a sample data from local airport (Campinas, SP, Brazil): January to September, 2017.
The website presents data like this (see https://goo.gl/oKwzyH):
End of explanation
"""
|
darkomen/TFG | ipython_notebooks/07_conclusiones/conclusiones.ipynb | cc0-1.0 | %pylab inline
# Import the libraries used
import numpy as np
import pandas as pd
import seaborn as sns
# Show the versions of each library used
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
# Open the data files
conclusiones = pd.read_csv('Conclusiones.csv')
columns=['bq','formfutura','filastruder']
# Show a summary of the data obtained
conclusiones[columns].describe()
"""
Explanation: Analysis of the obtained data
Comparison of three different filaments
BQ filament
formfutura filament
filastruder filament
End of explanation
"""
graf=conclusiones[columns].plot(figsize=(16,10),ylim=(0.5,2.6))
graf.axhspan(1.65,1.85, alpha=0.2)
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
graf = conclusiones[columns].boxplot(return_type='axes')
graf.axhspan(1.65,1.85, alpha=0.2)
"""
Explanation: We plot both diameters and the puller (tractora) speed on the same graph
End of explanation
"""
|
DLR-SC/tigl | examples/python/notebooks/geometry_wing.ipynb | apache-2.0 | import tigl3.curve_factories
import tigl3.surface_factories
from OCC.gp import gp_Pnt
from OCC.Display.SimpleGui import init_display
import numpy as np
"""
Explanation: Wing modelling example
In this example, we demonstrate how to build up a wing surface starting from a list of curves. These curves are then interpolated using a B-spline surface interpolation
Importing modules
Again, all low-level geometry functions can be found in the tigl3.geometry module. For more convenient use,
the module tigl3.surface_factories offers functions to create surfaces. Let's use this!
End of explanation
"""
# list of points on NACA2412 profile
px = [1.000084, 0.975825, 0.905287, 0.795069, 0.655665, 0.500588, 0.34468, 0.203313, 0.091996, 0.022051, 0.0, 0.026892, 0.098987, 0.208902, 0.346303, 0.499412, 0.653352, 0.792716, 0.90373, 0.975232, 0.999916]
py = [0.001257, 0.006231, 0.019752, 0.03826, 0.057302, 0.072381, 0.079198, 0.072947, 0.054325, 0.028152, 0.0, -0.023408, -0.037507, -0.042346, -0.039941, -0.033493, -0.0245, -0.015499, -0.008033, -0.003035, -0.001257]
points_c1 = np.array([pnt for pnt in zip(px, [0.]*len(px), py)]) * 2.
points_c2 = np.array([pnt for pnt in zip(px, [0]*len(px), py)])
points_c3 = np.array([pnt for pnt in zip(px, py, [0.]*len(px))]) * 0.2
# shift sections to their correct position
# second curve at y = 7
points_c2 += np.array([1.0, 7, 0])
# third curve (winglet tip) at y = 7.8
points_c3[:, 1] *= -1
points_c3 += np.array([1.7, 7.8, 1.0])
"""
Explanation: Create profile points
Now, we want to create 3 profiles that are the input for the profile curves. The wing should have one curve at its root, one at its outer end and one at the tip of a winglet.
End of explanation
"""
curve1 = tigl3.curve_factories.interpolate_points(points_c1)
curve2 = tigl3.curve_factories.interpolate_points(points_c2)
curve3 = tigl3.curve_factories.interpolate_points(points_c3)
"""
Explanation: Build profiles curves
Now, let's build the profile curves using tigl3.curve_factories.interpolate_points as done in the Airfoil example.
End of explanation
"""
surface = tigl3.surface_factories.interpolate_curves([curve1, curve2, curve3])
# surface = tigl3.surface_factories.interpolate_curves([curve1, curve2, curve3], [0., 0.7, 1.])
# surface = tigl3.surface_factories.interpolate_curves([curve1, curve2, curve3], degree=1)
"""
Explanation: Create the surface
The final surface is created with the B-spline interpolation from the tigl3.surface_factories package.
If you want, uncomment one of the alternative lines and play around with the curve parameters, especially the second value. What influence do they have on the final shape?
End of explanation
"""
tigl3.surface_factories.interpolate_curves?
"""
Explanation: The function tigl3.surface_factories.interpolate_curves has many more parameters that influence the resulting shape. Let's have a look:
End of explanation
"""
# start up the gui
display, start_display, add_menu, add_function_to_menu = init_display()
# make tesselation more accurate
display.Context.SetDeviationCoefficient(0.0001)
# draw the curve
display.DisplayShape(curve1)
display.DisplayShape(curve2)
display.DisplayShape(curve3)
display.DisplayShape(surface)
# match content to screen and start the event loop
display.FitAll()
start_display()
"""
Explanation: Visualize the result
Now, let's draw our wing. What does it look like? What can be improved?
Note: a separate window with the 3D Viewer is opening!
End of explanation
"""
|
susantabiswas/Natural-Language-Processing | Notebooks/Word_Prediction_using_Pentagrams_Memory_Efficient.ipynb | mit | #%%timeit
from nltk.util import ngrams
from collections import defaultdict
import nltk
import string
"""
Explanation: Word prediction based on Pentagram
This program reads the corpus line by line, so it is slower than the version that reads the whole corpus in one go, but it only holds one line in memory at a time.
Import corpus
End of explanation
"""
quad_dict = defaultdict(int)   # counts of four-word sequences (quadgrams)
penta_dict = defaultdict(int)  # counts of five-word sequences (pentagrams)
w1 = ''  # 4th-last word of the previous line, carried over to the next token set
w2 = ''  # 3rd-last word carried over
w3 = ''  # 2nd-last word carried over
w4 = ''  # last word carried over
vocab_dict = defaultdict(int) #for storing the different words with their frequencies
#word_len = 0
#Data/Tokenization/Chat1.txt
with open('mycorpus.txt','r') as file:
for line in file:
token = line.split()
i = 0
for word in token :
for l in word :
if l in string.punctuation:
word = word.replace(l," ")
#token[i] = "".join(l for l in word if l not in string.punctuation)
#token[i] = word.replace('.','').replace(' ','').replace(',','').replace(':','').replace(';','').replace('!','').replace('?','').replace('(','').replace(')','')
token[i] = word.lower()
i=i+1
content = " ".join(token)
token = content.split()
#word_len = word_len + len(token)
if not token:
continue
#first add the previous words
if w2!= '':
token.insert(0,w2)
if w3!= '':
token.insert(1,w3)
if w4!= '':
token.insert(2,w4)
#tokens for quadgrams
temp1 = list(ngrams(token,4))
if w1!= '':
token.insert(0,w1)
#add new unique words to the vocaulary set
for word in token:
if word not in vocab_dict:
vocab_dict[word] = 1
else:
vocab_dict[word]+= 1
#tokens for pentagrams
temp2 = list(ngrams(token,5))
#uni_trigrams = set(trigrams)
#count the frequency of the quadgram sentences
for t in temp1:
sen = ' '.join(t)
quad_dict[sen] += 1
#count the frequency of the pentagram sentences
for t in temp2:
sen = ' '.join(t)
penta_dict[sen] += 1
#then take out the last 4 words
n = len(token)
w1 = token[n -4]
w2 = token[n -3]
w3 = token[n -2]
w4 = token[n -1]
#print(word_len)
#print(len(quad_dict))
#print(len(tri_dict))
"""
Explanation: Do preprocessing:
Tokenize the corpus data
Remove the punctuations and lowercase the tokens
End of explanation
"""
def findprobability(s,w):
c1 = 0 # for count of sentence 's' with word 'w'
c2 = 0 # for count of sentence 's'
s1 = s + ' ' + w
if s1 in penta_dict:
c1 = penta_dict[s1]
if s in quad_dict:
c2 = quad_dict[s]
if c2 == 0:
return 0
return c1/c2
"""
Explanation: Find the probability
End of explanation
"""
#%%timeit
del token[:]
def doPrediction(sen):
#remove punctuations and make it lowercase
temp_l = sen.split()
i = 0
for word in temp_l :
for l in word :
if l in string.punctuation:
word = word.replace(l," ")
#token[i] = "".join(l for l in word if l not in string.punctuation)
#token[i] = word.replace('.','').replace(' ','').replace(',','').replace(':','').replace(';','').replace('!','').replace('?','').replace('(','').replace(')','')
temp_l[i] = word.lower()
i=i+1
content = " ".join(temp_l)
temp_l = content.split()
#print(temp_l)
sen = ' '.join(temp_l)
#print(sen)
max_prob = 0
# fallback prediction, used when no candidate word has nonzero probability
right_word = 'apple'
for word in vocab_dict:
prob = findprobability(sen,word)
if prob > max_prob:
max_prob = prob
right_word = word
print('Word Prediction is :',right_word)
#print('Probability:',max_prob)
#print(len(token),',',len(vocab))
#print(len(vocab))
sen = input('Enter four words\n')
doPrediction(sen)
"""
Explanation: Driver function for doing the prediction
End of explanation
"""
|
risantos/schoolwork | Física Computacional/Ficha 2.ipynb | mit | import numpy as np
%matplotlib inline
"""
Explanation: Departamento de Física - Faculdade de Ciências e Tecnologia da Universidade de Coimbra
Física Computacional - Ficha 2 - Zeros de Funções
Rafael Isaque Santos - 2012144694 - Licenciatura em Física
End of explanation
"""
f = lambda x: np.sin(x)
df = lambda x: np.cos(x)
my_stop = 1.e-4
my_nitmax = 100000
my_cdif = 1.e-6
"""
Explanation: We define our function:
$f(x) = \sin (x)$
and its derivative:
$f'(x) = \cos (x)$
End of explanation
"""
def bi(a, b, fun, eps, nitmax):
c = (a + b) / 2
it = 1
while np.abs(fun(c)) > eps and it < nitmax:
if fun(a)*fun(c) < 0: b = c
else: a = c
c = (a + b) / 2
it += 1
return it, c, fun(c)
bi(2, 4, f, my_stop, my_nitmax)
"""
Explanation: Bisection method
End of explanation
"""
def regfalsi(a, b, fun, eps, nitmax):
c = (a * fun(b) - b * fun(a)) / (fun(b) - fun(a))
it = 1
while np.abs(fun(c)) > eps and it < nitmax:
if fun(a) * fun(c) < 0: b = c
else: a = c
c = (a * fun(b) - b * fun(a)) / (fun(b) - fun(a))
it += 1
return it, c, fun(c)
regfalsi(2, 4, f, my_stop, my_nitmax)
"""
Explanation: False-position method (regula falsi):
End of explanation
"""
def newtraph(c0, fun, dfun, eps, nitmax):
c = c0
it = 1
while np.abs(fun(c)) > eps and it < nitmax:
c = c - fun(c) / dfun(c)
it += 1
return it, c, fun(c)
newtraph(2, f, df, my_stop, my_nitmax)
"""
Explanation: Newton-Raphson method:
End of explanation
"""
def secant(a, b, fun, eps, nitmax):
c = (a * fun(b) - b * fun(a)) / (fun(b) - fun(a))
it = 1
while np.abs(fun(c)) > eps and it < nitmax:
a = b
b = c
c = (a * fun(b) - b * fun(a)) / (fun(b) - fun(a))
it += 1
return it, c, fun(c)
secant(2, 4, f, my_stop, my_nitmax)
def bi2(a, b, fun, eps, nitmax, cdif):
c = (a + b) / 2
c_prev = a
it = 1
while not((np.abs(fun(c)) < eps and np.abs(c - c_prev) < cdif) or it > nitmax):
if fun(a)*fun(c) < 0: b = c
else: a = c
c_prev = c
c = (a + b) / 2
it += 1
return it, c, fun(c), c_prev, fun(c_prev), (c - c_prev)
bi2(2, 4, f, my_stop, my_nitmax, my_cdif)
def regfalsi2(a, b, fun, eps, nitmax, cdif):
c = (a * fun(b) - b * fun(a)) / (fun(b) - fun(a))
c_prev = c + cdif/2
it = 1
while not((np.abs(fun(c)) < eps and np.abs(c - c_prev) < cdif) or it > nitmax):
if fun(a) * fun(c) < 0: b = c
else: a = c
c_prev = c
c = (a * fun(b) - b * fun(a)) / (fun(b) - fun(a))
it += 1
return it, c, fun(c), c_prev, fun(c_prev), (c - c_prev)
regfalsi2(2, 4, f, my_stop, my_nitmax, my_cdif)
def newtraph2(c0, fun, dfun, eps, nitmax, cdif):
c = c0
c_prev = c + cdif/2
it = 1
while not((np.abs(fun(c)) < eps and np.abs(c - c_prev) < cdif) or it > nitmax):
c_prev = c
c = c - fun(c) / dfun(c)
it += 1
return it, c, fun(c), c_prev, fun(c_prev), (c - c_prev)
newtraph2(2, f, df, my_stop, my_nitmax, my_cdif)
def secant2(a, b, fun, eps, nitmax, cdif):
c = (a * fun(b) - b * fun(a)) / (fun(b) - fun(a))
c_prev = c + cdif/2
it = 1
while not((np.abs(fun(c)) < eps and np.abs(c - c_prev) < cdif) or it > nitmax):
a = b
b = c
c_prev = c
c = (a * fun(b) - b * fun(a)) / (fun(b) - fun(a))
it += 1
return it, c, fun(c), c_prev, fun(c_prev), (c - c_prev)
secant2(2, 4, f, my_stop, my_nitmax, my_cdif)
"""
Explanation: Secant method
End of explanation
"""
from scipy.misc import derivative
def newtraphd(c0, fun, eps, nitmax):
c = c0
dfun = lambda x: derivative(fun, x, 0.0001)
it = 1
while np.abs(fun(c)) > eps and it < nitmax:
c = c - (fun(c) / dfun(c))
it += 1
return it, c, fun(c), dfun(c)
f2 = lambda x, k: x + np.e ** (-k * x**2) * np.cos(x)
f2_k1 = lambda x: f2(x, 1)
df2_k1 = lambda x: derivative(f2_k1, x, 1e-4)  # df2_k1 was undefined; build it with a numerical derivative
newtraph(0, f2_k1, df2_k1, 1e-4, my_nitmax)
f2_k50 = lambda x: f2(x, 50)
for i in range(1, 10+1): print(newtraphd(0, f2_k50, 1e-4, i))
for i in range(1, 10+1): print(newtraphd(-0.1, f2_k50, 1e-4, i))
"""
Explanation: Exercise 2
$f(x) = x + e^{-k x^{2}} \cos(x)$
End of explanation
"""
|
MontrealCorpusTools/PolyglotDB | examples/tutorial/tutorial_1_first_steps.ipynb | mit | from polyglotdb import CorpusContext
import polyglotdb.io as pgio
corpus_root = '/mnt/e/Data/pg_tutorial'
"""
Explanation: Tutorial 1: First steps
Downloading the tutorial corpus
The tutorial corpus used here is a version of the LibriSpeech test-clean subset, force-aligned with the
Montreal Forced Aligner (tutorial corpus download link). Extract the files to somewhere on your local machine.
Importing the tutorial corpus
We begin by importing the necessary classes and functions from polyglotdb as well as defining variables. Change the path to reflect where the tutorial corpus was extracted to on your local machine.
End of explanation
"""
parser = pgio.inspect_mfa(corpus_root)
parser.call_back = print # To show progress output
with CorpusContext('pg_tutorial') as c:
c.load(parser, corpus_root)
"""
Explanation: The import statements get the necessary classes and functions for importing, namely the CorpusContext class and
the polyglot IO module. CorpusContext objects are how all interactions with the database are handled. The CorpusContext is
created as a context manager in Python (the with ... as ... pattern), so that clean up and closing of connections are
automatically handled both on successful completion of the code as well as if errors are encountered.
The IO module handles all import and export functionality in polyglotdb. The principle functions that a user will encounter
are the inspect_X functions that generate parsers for corpus formats. In the above code, the MFA parser is used because
the tutorial corpus was aligned using the MFA. See Importing corpora for more information on the inspect functions and parser
objects they generate for various formats.
Once the proper path to the tutorial corpus is set, it can be imported via the following code:
End of explanation
"""
with CorpusContext('pg_tutorial') as c:
c.reset()
"""
Explanation: Important
If during the running of the import code, a neo4j.exceptions.ServiceUnavailable error is raised, then double check
that the pgdb database is running. Once polyglotdb is installed, simply call pgdb start, assuming pgdb install
has already been called. See the relevant documentation for more information.
Resetting the corpus
If at any point there's some error or interruption in import or other stages of the tutorial, the corpus can be reset to a
fresh state via the following code:
End of explanation
"""
with CorpusContext('pg_tutorial') as c:
print('Speakers:', c.speakers)
print('Discourses:', c.discourses)
q = c.query_lexicon(c.lexicon_phone)
q = q.order_by(c.lexicon_phone.label)
q = q.columns(c.lexicon_phone.label.column_name('phone'))
results = q.all()
print(results)
"""
Explanation: Warning
Be careful when running this code as it will delete any and all information in the corpus. For smaller corpora such
as the one presented here, the time to set up is not huge, but for larger corpora this can result in several hours'
worth of time to re-import and re-enrich the corpus.
Testing some simple queries
To ensure that data import completed successfully, we can print the list of speakers, discourses, and phone types in the corpus, via:
End of explanation
"""
from polyglotdb.query.base.func import Count, Average
with CorpusContext('pg_tutorial') as c:
q = c.query_graph(c.phone).group_by(c.phone.label.column_name('phone'))
results = q.aggregate(Count().column_name('count'), Average(c.phone.duration).column_name('average_duration'))
for r in results:
print('The phone {} had {} occurrences and an average duration of {}.'.format(r['phone'], r['count'], r['average_duration']))
"""
Explanation: A more interesting summary query is perhaps looking at the count and average duration of different phone types across the corpus, via:
End of explanation
"""
|
ivotron/torpor-popper | experiments/redis/results/visualize.ipynb | bsd-3-clause | sns.barplot(x='machine', y='mbps', data=df.query('limits == "no" and op == "SET"'))
plt.xticks(rotation=30)
"""
Explanation: We run the redis benchmark (showing results for the SET operation) across multiple machines.
End of explanation
"""
for b in df['op'].unique():
if b == 'raw':
continue
sns.barplot(x='machine', y='slowdown', data=df.query('limits == "no" and op == "' + b + '"'))
plt.xticks(rotation=30)
sns.plt.title(b)
plt.show()
"""
Explanation: The problem with the above is that these are absolute numbers, and therefore they are missing a context. One way of providing one is to obtain raw memory bandwidth throughput and use it as a baseline (normalize the above w.r.t. raw bandwidth).
End of explanation
"""
for b in df['op'].unique():
if b == 'raw':
continue
sns.barplot(x='machine', y='slowdown', hue='limits', data=df.query('op == "' + b + '"'))
plt.xticks(rotation=30)
sns.plt.title(b)
plt.show()
"""
Explanation: The above shows the overhead (slowdown) of redis w.r.t. the raw memory bandwidth. This makes much more sense: in the first graph we were comparing the same workload on distinct machines, i.e. we were comparing machines. But this hypothetical experiment was evaluating the performance of the KV store!
So, from the first graph, we could conclude that "redis is significantly slower on issdm-0". After normalizing, this is no longer the case; in fact, the overhead of redis on issdm-0 is the lowest!
Now, would throttling help in this case? Let's see
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/snu/cmip6/models/sandbox-1/toplevel.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'snu', 'sandbox-1', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: SNU
Source ID: SANDBOX-1
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:38
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
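The provision properties in this section are ENUMs with Cardinality 1.N: at least one of the listed codes must be supplied, and several may be combined. A minimal standalone sketch of that validation rule (the helper below is hypothetical and independent of the DOC API used in these cells; the codes are those listed as valid choices above):

```python
# Codes listed as valid choices for the provision ENUMs in this section.
VALID_PROVISIONS = {"N/A", "M", "Y", "E", "ES", "C"}

def check_provision(values):
    """Validate an ENUM value set with cardinality 1.N:
    at least one value, each drawn from the valid choices
    (or the free-text "Other: ..." escape hatch)."""
    if not values:
        raise ValueError("cardinality 1.N requires at least one value")
    bad = [v for v in values
           if v not in VALID_PROVISIONS and not v.startswith("Other:")]
    if bad:
        raise ValueError("invalid provision code(s): %s" % bad)
    return True

check_provision(["M"])        # a single code is fine
check_provision(["E", "ES"])  # multiple provisions may be combined
```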
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are radiative effects of aerosols on ice clouds represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are radiative effects of aerosols on ice clouds represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is radiative forcing from aerosol-cloud interactions computed from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
|
Justin-YueLiu/CarND-Projects | CarND-LaneLines-P1/.ipynb_checkpoints/P1-checkpoint.ipynb | mit | #importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
"""
Explanation: Self-Driving Car Engineer Nanodegree
Project: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a writeup template that can be used to guide the writing process. Completing both the code in the IPython notebook and the writeup template will cover all of the rubric points for this project.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
Run the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, see this forum post for more troubleshooting tips.
Import Packages
End of explanation
"""
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
"""
Explanation: Read in an Image
End of explanation
"""
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, λ)
"""
Explanation: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
"""
import os
os.listdir("test_images/")
"""
Explanation: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
End of explanation
"""
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images directory.
"""
Explanation: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
End of explanation
"""
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
return result
"""
Explanation: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, check out this forum post for more troubleshooting tips.
If you get an error that looks like this:
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
Follow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems.
End of explanation
"""
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
"""
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
"""
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
"""
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
"""
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
"""
Explanation: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
"""
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
"""
Explanation: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation
"""
|
GoogleCloudPlatform/tf-estimator-tutorials | 01_Regression/04.0 - TF Regression Model - Dataset Input.ipynb | apache-2.0 | MODEL_NAME = 'reg-model-03'
TRAIN_DATA_FILES_PATTERN = 'data/train-*.csv'
VALID_DATA_FILES_PATTERN = 'data/valid-*.csv'
TEST_DATA_FILES_PATTERN = 'data/test-*.csv'
RESUME_TRAINING = False
PROCESS_FEATURES = True
EXTEND_FEATURE_COLUMNS = True
MULTI_THREADING = True
"""
Explanation: Steps to use the TF Experiment APIs
Define dataset metadata
Define data input function to read the data from csv files + feature processing
Create TF feature columns based on metadata + extended feature columns
Define an estimator (DNNRegressor) creation function with the required feature columns & parameters
Define a serving function to export the model
Run an Experiment with learn_runner to train, evaluate, and export the model
Evaluate the model using test data
Perform predictions
End of explanation
"""
HEADER = ['key','x','y','alpha','beta','target']
HEADER_DEFAULTS = [[0], [0.0], [0.0], ['NA'], ['NA'], [0.0]]
NUMERIC_FEATURE_NAMES = ['x', 'y']
CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY = {'alpha':['ax01', 'ax02'], 'beta':['bx01', 'bx02']}
CATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.keys())
FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES
TARGET_NAME = 'target'
UNUSED_FEATURE_NAMES = list(set(HEADER) - set(FEATURE_NAMES) - {TARGET_NAME})
print("Header: {}".format(HEADER))
print("Numeric Features: {}".format(NUMERIC_FEATURE_NAMES))
print("Categorical Features: {}".format(CATEGORICAL_FEATURE_NAMES))
print("Target: {}".format(TARGET_NAME))
print("Unused Features: {}".format(UNUSED_FEATURE_NAMES))
"""
Explanation: 1. Define Dataset Metadata
CSV file header and defaults
Numeric and categorical feature names
Target feature name
Unused columns
End of explanation
"""
def parse_csv_row(csv_row):
columns = tf.decode_csv(csv_row, record_defaults=HEADER_DEFAULTS)
features = dict(zip(HEADER, columns))
for column in UNUSED_FEATURE_NAMES:
features.pop(column)
target = features.pop(TARGET_NAME)
return features, target
def process_features(features):
features["x_2"] = tf.square(features['x'])
features["y_2"] = tf.square(features['y'])
features["xy"] = tf.multiply(features['x'], features['y']) # features['x'] * features['y']
features['dist_xy'] = tf.sqrt(tf.squared_difference(features['x'],features['y']))
return features
"""
Explanation: 2. Define Data Input Function
Input csv files name pattern
Use TF Dataset APIs to read and process the data
Parse CSV lines to feature tensors
Apply feature processing
Return (features, target) tensors
a. parsing and preprocessing logic
End of explanation
"""
def csv_input_fn(files_name_pattern, mode=tf.estimator.ModeKeys.EVAL,
skip_header_lines=0,
num_epochs=None,
batch_size=200):
shuffle = True if mode == tf.estimator.ModeKeys.TRAIN else False
print("")
print("* data input_fn:")
print("================")
print("Input file(s): {}".format(files_name_pattern))
print("Batch size: {}".format(batch_size))
print("Epoch Count: {}".format(num_epochs))
print("Mode: {}".format(mode))
print("Shuffle: {}".format(shuffle))
print("================")
print("")
file_names = tf.matching_files(files_name_pattern)
dataset = data.TextLineDataset(filenames=file_names)
dataset = dataset.skip(skip_header_lines)
if shuffle:
dataset = dataset.shuffle(buffer_size=2 * batch_size + 1)
    #useful for distributed training when training on 1 data file, so it can be sharded
#dataset = dataset.shard(num_workers, worker_index)
dataset = dataset.batch(batch_size)
dataset = dataset.map(lambda csv_row: parse_csv_row(csv_row))
if PROCESS_FEATURES:
dataset = dataset.map(lambda features, target: (process_features(features), target))
#dataset = dataset.batch(batch_size) #??? very long time
dataset = dataset.repeat(num_epochs)
iterator = dataset.make_one_shot_iterator()
features, target = iterator.get_next()
return features, target
features, target = csv_input_fn(files_name_pattern="")
print("Feature read from CSV: {}".format(list(features.keys())))
print("Target read from CSV: {}".format(target))
"""
Explanation: b. data pipeline input function
End of explanation
"""
def extend_feature_columns(feature_columns):
# crossing, bucketizing, and embedding can be applied here
feature_columns['alpha_X_beta'] = tf.feature_column.crossed_column(
[feature_columns['alpha'], feature_columns['beta']], 4)
return feature_columns
def get_feature_columns():
CONSTRUCTED_NUMERIC_FEATURES_NAMES = ['x_2', 'y_2', 'xy', 'dist_xy']
all_numeric_feature_names = NUMERIC_FEATURE_NAMES.copy()
if PROCESS_FEATURES:
all_numeric_feature_names += CONSTRUCTED_NUMERIC_FEATURES_NAMES
numeric_columns = {feature_name: tf.feature_column.numeric_column(feature_name)
for feature_name in all_numeric_feature_names}
categorical_column_with_vocabulary = \
{item[0]: tf.feature_column.categorical_column_with_vocabulary_list(item[0], item[1])
for item in CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.items()}
feature_columns = {}
if numeric_columns is not None:
feature_columns.update(numeric_columns)
if categorical_column_with_vocabulary is not None:
feature_columns.update(categorical_column_with_vocabulary)
if EXTEND_FEATURE_COLUMNS:
feature_columns = extend_feature_columns(feature_columns)
return feature_columns
feature_columns = get_feature_columns()
print("Feature Columns: {}".format(feature_columns))
"""
Explanation: 3. Define Feature Columns
The input numeric columns are assumed to be normalized (or have the same scale). Otherise, a normlizer_fn, along with the normlisation params (mean, stdv) should be passed to tf.feature_column.numeric_column() constructor.
End of explanation
"""
def create_estimator(run_config, hparams):
feature_columns = list(get_feature_columns().values())
dense_columns = list(
filter(lambda column: isinstance(column, feature_column._NumericColumn),
feature_columns
)
)
categorical_columns = list(
filter(lambda column: isinstance(column, feature_column._VocabularyListCategoricalColumn) |
isinstance(column, feature_column._BucketizedColumn),
feature_columns)
)
indicator_columns = list(
map(lambda column: tf.feature_column.indicator_column(column),
categorical_columns)
)
estimator = tf.estimator.DNNRegressor(
feature_columns= dense_columns + indicator_columns ,
hidden_units= hparams.hidden_units,
optimizer= tf.train.AdamOptimizer(),
activation_fn= tf.nn.elu,
dropout= hparams.dropout_prob,
config= run_config
)
print("")
print("Estimator Type: {}".format(type(estimator)))
print("")
return estimator
"""
Explanation: 4. Define an Estimator Creation Function
Get dense (numeric) columns from the feature columns
Convert categorical columns to indicator columns
Instantiate a DNNRegressor estimator given dense + indicator feature columns + params
End of explanation
"""
def csv_serving_input_fn():
SERVING_HEADER = ['x','y','alpha','beta']
SERVING_HEADER_DEFAULTS = [[0.0], [0.0], ['NA'], ['NA']]
rows_string_tensor = tf.placeholder(dtype=tf.string,
shape=[None],
name='csv_rows')
receiver_tensor = {'csv_rows': rows_string_tensor}
row_columns = tf.expand_dims(rows_string_tensor, -1)
columns = tf.decode_csv(row_columns, record_defaults=SERVING_HEADER_DEFAULTS)
features = dict(zip(SERVING_HEADER, columns))
return tf.estimator.export.ServingInputReceiver(
process_features(features), receiver_tensor)
"""
Explanation: 5. Define Serving Function
End of explanation
"""
def generate_experiment_fn(**experiment_args):
def _experiment_fn(run_config, hparams):
train_input_fn = lambda: csv_input_fn(
files_name_pattern=TRAIN_DATA_FILES_PATTERN,
mode = tf.contrib.learn.ModeKeys.TRAIN,
num_epochs=hparams.num_epochs,
batch_size=hparams.batch_size
)
eval_input_fn = lambda: csv_input_fn(
files_name_pattern=VALID_DATA_FILES_PATTERN,
mode=tf.contrib.learn.ModeKeys.EVAL,
num_epochs=1,
batch_size=hparams.batch_size
)
estimator = create_estimator(run_config, hparams)
return tf.contrib.learn.Experiment(
estimator,
train_input_fn=train_input_fn,
eval_input_fn=eval_input_fn,
eval_steps=None,
**experiment_args
)
return _experiment_fn
"""
Explanation: 6. Run Experiment
a. Define Experiment Function
End of explanation
"""
TRAIN_SIZE = 12000
NUM_EPOCHS = 1000
BATCH_SIZE = 500
NUM_EVAL = 10
CHECKPOINT_STEPS = int((TRAIN_SIZE/BATCH_SIZE) * (NUM_EPOCHS/NUM_EVAL))
hparams = tf.contrib.training.HParams(
num_epochs = NUM_EPOCHS,
batch_size = BATCH_SIZE,
hidden_units=[8, 4],
dropout_prob = 0.0)
model_dir = 'trained_models/{}'.format(MODEL_NAME)
run_config = tf.contrib.learn.RunConfig(
save_checkpoints_steps=CHECKPOINT_STEPS,
tf_random_seed=19830610,
model_dir=model_dir
)
print(hparams)
print("Model Directory:", run_config.model_dir)
print("")
print("Dataset Size:", TRAIN_SIZE)
print("Batch Size:", BATCH_SIZE)
print("Steps per Epoch:",TRAIN_SIZE/BATCH_SIZE)
print("Total Steps:", (TRAIN_SIZE/BATCH_SIZE)*NUM_EPOCHS)
print("Required Evaluation Steps:", NUM_EVAL)
print("That is 1 evaluation step after each",NUM_EPOCHS/NUM_EVAL," epochs")
print("Save Checkpoint After",CHECKPOINT_STEPS,"steps")
"""
Explanation: b. Set HParam and RunConfig
End of explanation
"""
if not RESUME_TRAINING:
print("Removing previous artifacts...")
shutil.rmtree(model_dir, ignore_errors=True)
else:
print("Resuming training...")
tf.logging.set_verbosity(tf.logging.INFO)
time_start = datetime.utcnow()
print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
print(".......................................")
learn_runner.run(
experiment_fn=generate_experiment_fn(
export_strategies=[make_export_strategy(
csv_serving_input_fn,
exports_to_keep=1
)]
),
run_config=run_config,
schedule="train_and_evaluate",
hparams=hparams
)
time_end = datetime.utcnow()
print(".......................................")
print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
print("")
time_elapsed = time_end - time_start
print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
"""
Explanation: c. Run Experiment via learn_runner
End of explanation
"""
TRAIN_SIZE = 12000
VALID_SIZE = 3000
TEST_SIZE = 5000
train_input_fn = lambda: csv_input_fn(files_name_pattern= TRAIN_DATA_FILES_PATTERN,
mode= tf.estimator.ModeKeys.EVAL,
batch_size= TRAIN_SIZE)
valid_input_fn = lambda: csv_input_fn(files_name_pattern= VALID_DATA_FILES_PATTERN,
mode= tf.estimator.ModeKeys.EVAL,
batch_size= VALID_SIZE)
test_input_fn = lambda: csv_input_fn(files_name_pattern= TEST_DATA_FILES_PATTERN,
mode= tf.estimator.ModeKeys.EVAL,
batch_size= TEST_SIZE)
estimator = create_estimator(run_config, hparams)
train_results = estimator.evaluate(input_fn=train_input_fn, steps=1)
train_rmse = round(math.sqrt(train_results["average_loss"]),5)
print()
print("############################################################################################")
print("# Train RMSE: {} - {}".format(train_rmse, train_results))
print("############################################################################################")
valid_results = estimator.evaluate(input_fn=valid_input_fn, steps=1)
valid_rmse = round(math.sqrt(valid_results["average_loss"]),5)
print()
print("############################################################################################")
print("# Valid RMSE: {} - {}".format(valid_rmse,valid_results))
print("############################################################################################")
test_results = estimator.evaluate(input_fn=test_input_fn, steps=1)
test_rmse = round(math.sqrt(test_results["average_loss"]),5)
print()
print("############################################################################################")
print("# Test RMSE: {} - {}".format(test_rmse, test_results))
print("############################################################################################")
"""
Explanation: 7. Evaluate the Model
End of explanation
"""
import itertools
predict_input_fn = lambda: csv_input_fn(files_name_pattern=TEST_DATA_FILES_PATTERN,
mode= tf.estimator.ModeKeys.PREDICT,
batch_size= 5)
predictions = estimator.predict(input_fn=predict_input_fn)
values = list(map(lambda item: item["predictions"][0],list(itertools.islice(predictions, 5))))
print()
print("Predicted Values: {}".format(values))
"""
Explanation: 8. Prediction
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.2/examples/sun.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.2,<2.3"
"""
Explanation: Sun (single rotating star)
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_star(starA='sun')
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
print(b['sun'])
"""
Explanation: Setting Parameters
End of explanation
"""
b.set_value('teff', 1.0*u.solTeff)
b.set_value('requiv', 1.0*u.solRad)
b.set_value('mass', 1.0*u.solMass)
b.set_value('period', 24.47*u.d)
"""
Explanation: Let's set all the values of the sun based on the nominal solar values provided in the units package.
End of explanation
"""
b.set_value('incl', 23.5*u.deg)
b.set_value('distance', 1.0*u.AU)
"""
Explanation: And so that we can compare with measured/expected values, we'll observe the sun from the earth - with an inclination of 23.5 degrees and at a distance of 1 AU.
End of explanation
"""
print(b.get_quantity('teff'))
print(b.get_quantity('requiv'))
print(b.get_quantity('mass'))
print(b.get_quantity('period'))
print(b.get_quantity('incl'))
print(b.get_quantity('distance'))
"""
Explanation: Checking on the set values, we can see the values were converted correctly to PHOEBE's internal units.
End of explanation
"""
b.add_dataset('lc', times=[0.], pblum=1*u.solLum)
b.add_dataset('mesh', compute_times=[0.], columns=['teffs', 'loggs', 'rs'])
b.run_compute(irrad_method='none', distortion_method='rotstar')
"""
Explanation: Running Compute
Let's add a light curve so that we can compute the flux at a single time and compare it to the expected value. We'll set the passband luminosity to be the nominal value for the sun. We'll also add a mesh dataset so that we can plot the temperature distributions and test the size of the sun verse known values.
End of explanation
"""
afig, mplfig = b['mesh'].plot(fc='teffs', x='xs', y='ys', show=True)
afig, mplfig = b['mesh'].plot(fc='teffs', x='us', y='vs', show=True)
print("teff: {} ({})".format(b.get_value('teffs').mean(),
b.get_value('teff', context='component')))
"""
Explanation: Comparing to Expected Values
End of explanation
"""
print("rmin (pole): {} ({})".format(b.get_value('rs').min(),
b.get_value('requiv', context='component')))
print("rmax (equator): {} (>{})".format(b.get_value('rs').max(),
b.get_value('requiv', context='component')))
print("logg: {}".format(b.get_value('loggs').mean()))
print("flux: {}".format(b.get_quantity('fluxes@model')[0]))
"""
Explanation: For a rotating sphere, the minimum radius should occur at the pole and the maximum should occur at the equator.
End of explanation
"""
|
GoogleCloudPlatform/data-science-on-gcp | 11_realtime/evaluation.ipynb | apache-2.0 | import matplotlib
import matplotlib.pyplot as plt
import seaborn
matplotlib.rcParams.update({'font.size': 22})
"""
Explanation: Evaluating 2015-2018 Model on 2019 data
End of explanation
"""
%%bigquery
SELECT
SQRT(SUM(
(CAST(ontime AS FLOAT64) - predicted_ontime.scores[OFFSET(0)])*
(CAST(ontime AS FLOAT64) - predicted_ontime.scores[OFFSET(0)])
)/COUNT(*)) AS rmse
FROM dsongcp.ch10_automl_evaluated
"""
Explanation: Overall RMSE
The RMSE (to 3 decimal places) is 0.200, slightly worse than the 0.198 we get when we train and evaluate on subsets of the 2015 data.
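The same metric can be cross-checked outside BigQuery; below is a minimal NumPy sketch, assuming the labels and predicted scores have been pulled down as arrays (the names here are illustrative, not the notebook's own):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-squared error between 0/1 labels and predicted scores."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

print(rmse([1.0, 0.0, 1.0, 0.0], [0.9, 0.2, 0.8, 0.1]))  # ~0.1581
```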
End of explanation
"""
%%bigquery
SELECT
*,
num_pred_ontime / num_ontime AS frac_1_as_1,
num_pred_late / num_ontime AS frac_1_as_0,
num_pred_ontime / num_late AS frac_0_as_1,
num_pred_late / num_late AS frac_0_as_0
FROM (
SELECT
0.7 AS thresh,
SUM(IF(CAST(ontime AS FLOAT64) > 0.5, 1, 0)) AS num_ontime,
SUM(IF(CAST(ontime AS FLOAT64) <= 0.5, 1, 0)) AS num_late,
SUM(IF(predicted_ontime.scores[OFFSET(0)] > 0.7, 1, 0)) AS num_pred_ontime,
SUM(IF(predicted_ontime.scores[OFFSET(0)] <= 0.7, 1, 0)) AS num_pred_late,
FROM dsongcp.ch10_automl_evaluated
)
%%bigquery
WITH counts AS (
SELECT
thresh,
COUNTIF(CAST(ontime AS FLOAT64) > 0.5 AND predicted_ontime.scores[OFFSET(0)] > thresh) AS num_1_as_1,
COUNTIF(CAST(ontime AS FLOAT64) > 0.5 AND predicted_ontime.scores[OFFSET(0)] <= thresh) AS num_1_as_0,
COUNTIF(CAST(ontime AS FLOAT64) <= 0.5 AND predicted_ontime.scores[OFFSET(0)] > thresh) AS num_0_as_1,
COUNTIF(CAST(ontime AS FLOAT64) <= 0.5 AND predicted_ontime.scores[OFFSET(0)] <= thresh) AS num_0_as_0
FROM UNNEST([0.5, 0.7, 0.8]) AS thresh, dsongcp.ch10_automl_evaluated
GROUP BY thresh
)
SELECT
*,
ROUND(num_1_as_1 / (num_1_as_1 + num_1_as_0), 2) AS frac_1_as_1,
ROUND(num_1_as_0 / (num_1_as_1 + num_1_as_0), 2) AS frac_1_as_0,
ROUND(num_0_as_1 / (num_0_as_1 + num_0_as_0), 2) AS frac_0_as_1,
ROUND(num_0_as_0 / (num_0_as_1 + num_0_as_0), 2) AS frac_0_as_0
FROM counts
ORDER BY thresh ASC
%%bigquery df
WITH counts AS (
SELECT
thresh,
COUNTIF(CAST(ontime AS FLOAT64) > 0.5 AND predicted_ontime.scores[OFFSET(0)] > thresh) AS num_1_as_1,
COUNTIF(CAST(ontime AS FLOAT64) > 0.5 AND predicted_ontime.scores[OFFSET(0)] <= thresh) AS num_1_as_0,
COUNTIF(CAST(ontime AS FLOAT64) <= 0.5 AND predicted_ontime.scores[OFFSET(0)] > thresh) AS num_0_as_1,
COUNTIF(CAST(ontime AS FLOAT64) <= 0.5 AND predicted_ontime.scores[OFFSET(0)] <= thresh) AS num_0_as_0
FROM UNNEST(GENERATE_ARRAY(0.0, 1.0, 0.01)) AS thresh, dsongcp.ch10_automl_evaluated
GROUP BY thresh
)
SELECT
*,
ROUND(num_1_as_1 / (num_1_as_1 + num_1_as_0), 2) AS frac_1_as_1,
ROUND(num_1_as_0 / (num_1_as_1 + num_1_as_0), 2) AS frac_1_as_0,
ROUND(num_0_as_1 / (num_0_as_1 + num_0_as_0), 2) AS frac_0_as_1,
ROUND(num_0_as_0 / (num_0_as_1 + num_0_as_0), 2) AS frac_0_as_0
FROM counts
ORDER BY thresh ASC
df.head()
ax = df.plot(x='thresh', y='frac_1_as_1', label='on-time', ylabel='fraction correct', style='r--');
df.plot(x='thresh', y='frac_0_as_0', label='late', ax=ax);
"""
Explanation: Confusion matrix
Let's find the fraction of true on-time flights at some threshold:
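As a cross-check, the same fractions can be sketched in plain Python; a minimal version, assuming the labels and scores are available as arrays (illustrative names, not the notebook's own):

```python
import numpy as np

def confusion_fractions(ontime, scores, thresh):
    """Fraction of each true class predicted as on-time at a given threshold."""
    ontime = np.asarray(ontime) > 0.5   # true class: 1 = on time
    pred = np.asarray(scores) > thresh  # predicted class at this threshold
    frac_1_as_1 = pred[ontime].mean()   # true on-time predicted as on-time
    frac_0_as_1 = pred[~ontime].mean()  # true late predicted as on-time
    return float(frac_1_as_1), float(frac_0_as_1)

print(confusion_fractions([1, 1, 0, 0], [0.9, 0.6, 0.8, 0.1], 0.7))  # (0.5, 0.5)
```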
End of explanation
"""
%%bigquery df
SELECT
ROUND(predicted_ontime.scores[OFFSET(0)], 2) AS prob_ontime,
AVG(CAST(dep_delay AS FLOAT64)) AS dep_delay,
STDDEV(CAST(dep_delay AS FLOAT64)) AS std_dep_delay,
AVG(CAST(taxi_out AS FLOAT64)) AS taxi_out,
STDDEV(CAST(taxi_out AS FLOAT64)) AS std_taxi_out
FROM dsongcp.ch10_automl_evaluated
GROUP BY prob_ontime
ORDER BY prob_ontime ASC
df.plot(x='prob_ontime', y='dep_delay', ylabel='seconds');
%%bigquery df2
SELECT
ROUND(CAST(dep_delay AS FLOAT64), 0) AS dep_delay,
AVG(predicted_ontime.scores[OFFSET(0)]) AS prob_ontime,
FROM dsongcp.ch10_automl_evaluated
GROUP BY dep_delay
ORDER BY dep_delay ASC
df2.plot(x='dep_delay', y='prob_ontime', xlim=[-10,100]);
df.plot(x='prob_ontime', y='dep_delay', yerr='std_dep_delay', ylabel='seconds');
df.plot(x='prob_ontime', y='taxi_out', yerr='std_taxi_out', ylabel='seconds');
"""
Explanation: Impact of different variables
Let's see how the model behaves with respect to specific feature values
End of explanation
"""
%%bigquery df
WITH preds AS (
SELECT
CAST(ontime AS FLOAT64) AS ontime,
ROUND(predicted_ontime.scores[OFFSET(0)], 2) AS prob_ontime,
CAST(dep_delay AS FLOAT64) AS var,
FROM dsongcp.ch10_automl_evaluated
)
SELECT
prob_ontime,
AVG(IF((ontime > 0.5 and prob_ontime <= 0.5) or (ontime <= 0.5 and prob_ontime > 0.5), var, NULL)) AS wrong,
AVG(IF((ontime > 0.5 and prob_ontime > 0.5) or (ontime <= 0.5 and prob_ontime <= 0.5), var, NULL)) AS correct
FROM preds
GROUP BY prob_ontime
ORDER BY prob_ontime
ax = df.plot(x='prob_ontime', y='wrong', ylim=(0, 50), ylabel='dep_delay', style='r--');
df.plot(x='prob_ontime', y='correct', ax=ax, ylim=(0, 50));
%%bigquery df
WITH preds AS (
SELECT
CAST(ontime AS FLOAT64) AS ontime,
ROUND(predicted_ontime.scores[OFFSET(0)], 2) AS prob_ontime,
CAST(taxi_out AS FLOAT64) AS var,
FROM dsongcp.ch10_automl_evaluated
)
SELECT
prob_ontime,
AVG(IF((ontime > 0.5 and prob_ontime <= 0.5) or (ontime <= 0.5 and prob_ontime > 0.5), var, NULL)) AS wrong,
AVG(IF((ontime > 0.5 and prob_ontime > 0.5) or (ontime <= 0.5 and prob_ontime <= 0.5), var, NULL)) AS correct
FROM preds
GROUP BY prob_ontime
ORDER BY prob_ontime
ax = df.plot(x='prob_ontime', y='wrong', ylim=(0, 30), ylabel='taxi_out', style='r--');
df.plot(x='prob_ontime', y='correct', ax=ax, ylim=(0, 30));
"""
Explanation: Analyzing mistakes
Looking at correct vs. wrong predictions
End of explanation
"""
%%bigquery df2
SELECT
ROUND(CAST(dep_delay AS FLOAT64), 0) AS dep_delay,
AVG(IF(origin='JFK', predicted_ontime.scores[OFFSET(0)], NULL)) AS JFK,
AVG(IF(origin='SEA', predicted_ontime.scores[OFFSET(0)], NULL)) AS SEA,
FROM dsongcp.ch10_automl_evaluated
GROUP BY dep_delay
ORDER BY dep_delay ASC
ax = df2.plot(x='dep_delay', y='JFK', xlim=[-10,100], style='r--');
df2.plot(x='dep_delay', y='SEA', xlim=[-10,100], ax=ax);
%%bigquery df2
SELECT
carrier,
ROUND(CAST(dep_delay AS FLOAT64), 0) AS dep_delay,
AVG(predicted_ontime.scores[OFFSET(0)]) AS prob_ontime
FROM dsongcp.ch10_automl_evaluated
GROUP BY carrier, dep_delay
ORDER BY carrier ASC, dep_delay ASC
df = df2.copy()
df.head()
df = df2.set_index('dep_delay')
df.head()
dfg = df2[df2['dep_delay'] == 20.0].sort_values(by='prob_ontime').reset_index(drop=True)
dfg
print(dfg.loc[0]['carrier'])
print(dfg.loc[0]['prob_ontime'])
fig, ax = plt.subplots(figsize=(15,15))
df.groupby('carrier')['prob_ontime'].plot(xlim=[-10,60], legend=True, ax=ax, lw=4);
ax.annotate(dfg.loc[0]['carrier'],
xy=(20.0, dfg.loc[0]['prob_ontime']),
xycoords='data', xytext=(-50, -30),
textcoords='offset points', arrowprops=dict(arrowstyle='->', connectionstyle='arc3,rad=-0.2'));
ax.annotate(dfg.loc[4]['carrier'],
xy=(20.0, dfg.loc[4]['prob_ontime']),
xycoords='data', xytext=(-80, -60),
textcoords='offset points', arrowprops=dict(arrowstyle='->', connectionstyle='arc3,rad=-0.2'));
ax.annotate(dfg.loc[11]['carrier'],
xy=(20.0, dfg.loc[11]['prob_ontime']),
xycoords='data', xytext=(50, 60),
textcoords='offset points', arrowprops=dict(arrowstyle='->', connectionstyle='arc3,rad=-0.2'));
ax.annotate(dfg.loc[16]['carrier'],
xy=(20.0, dfg.loc[16]['prob_ontime']),
xycoords='data', xytext=(50, 30),
textcoords='offset points', arrowprops=dict(arrowstyle='->', connectionstyle='arc3,rad=-0.2'));
"""
Explanation: Categorical Features
End of explanation
"""
|
PyDataMallorca/WS_Introduction_to_data_science | ml_miguel/perroGato.ipynb | gpl-3.0 | import pandas as pd # Load pandas under the alias pd
"""
Explanation: Dogs or cats?
By Miguel Escalona
February 2017 edition
Getting the notebook started
To start any notebook, we begin by importing the modules we need. For this we use the import command followed by the module name. When we want to use a function defined inside a module, we must write the module's name before the function. For example:
python
import modulo
modulo.funcion()
If we would rather not write the full module name, we can assign an alias with the as keyword
python
import modulo as md
md.funcion()
Finally, if we just want direct access to all of a module's functions without writing its name (or alias) every time, we can write
python
from modulo import *
funcion()
which loads all of the module's functions, overriding any already-defined functions in our code that share the same name.
<p class="alert alert-danger">Although convenient, this last approach is the least advisable of all.</p>
End of explanation
"""
dfl = pd.read_csv('data/perros_o_gatos.csv', index_col='observacion')
print('These data are taken from the book Mastering Machine Learning with scikit-learn by Gavin Hackeling, \
PACKT Publishing (open source), p. 99')
dfl # In Jupyter, writing a bare variable name makes the cell display its contents.
"""
Explanation: A classification problem: dog or cat?
In this notebook we solve a simple classification problem that illustrates basic Machine Learning (ML) concepts.
The problem consists of identifying an animal's species (dog or cat) from three features: does the animal fetch the ball when we throw it? is the animal usually apathetic? and does the animal prefer dog food, cat food, or bacon?
1. Loading the data
To load the data we use pandas' read_csv function. Pandas offers a long list of data-loading functions. More information in the API documentation.
End of explanation
"""
dfl.describe()
"""
Explanation: The data consist of observations numbered 1 through 14 and 3 features or characteristics represented as columns (also known as inputs). The especie column is the answer to our problem, so it is not a feature. This means we only use it to check whether the machine learning algorithm is classifying well or not. This column (especie) is usually called the target, label, output, or y.
Supervised learning
A machine learning problem is said to be supervised if the data include the target, so we can evaluate our algorithm during training.
Unsupervised learning
If we do not have the labels, we face an unsupervised problem. The algorithm must find, by itself, the patterns that can differentiate the data.
An example of this kind of problem is recognizing objects in an image. Our algorithm will try to segment the different objects, using for instance their contours, but without knowing the exact shape of the object it must identify.
2. Mini Exploratory Data Analysis (EDA)
End of explanation
"""
dfl['juega al busca'].sum()
"""
Explanation: Sum, mean, median, and standard deviation (sum, mean, median, std)
How many animals play fetch?
End of explanation
"""
dfl.loc[dfl['especie']=='perro','juega al busca'].sum()
"""
Explanation: Pandas filters
And how many of those are dogs?
End of explanation
"""
labels = dfl['especie']
df = dfl[['juega al busca', 'apatico', 'comida favorita']]
df
labels
"""
Explanation: 3. Let's separate the especie column so we don't mix it in
End of explanation
"""
df['comida favorita'].value_counts()
"""
Explanation: The comida favorita (favorite food) variable is categorical!
This variable has three possible values. To find out how often each value appears, we can use the dataframe's value_counts() method
End of explanation
"""
from sklearn.feature_extraction import DictVectorizer
vectorizer = DictVectorizer(sparse=False)
ab = vectorizer.fit_transform(df.to_dict(orient='records'))
dft = pd.DataFrame(ab, columns=vectorizer.get_feature_names())
dft.head()
"""
Explanation: 4. Encoding categorical variables
Categorical variables must be converted to numeric ones so that the machine learning algorithm can interpret them. One possible encoding would be:
```
| comida favorita | value   |
|-----------------|---------|
|comida de gato | 0 |
|comida de perro | 1 |
|bacon | 2 |
```
However, this encoding imposes an artificial order on the values. Our computer knows that 0 < 1 < 2, so it will infer that
cat food < dog food < bacon.
One-hot encoding
This kind of encoding represents the comida favorita column as three columns of 0s and 1s, as follows:
```
| comida favorita | comida favorita=comida de gato | comida favorita=comida de perro | comida favorita=bacon |
|-----------------|--------------------------------|---------------------------------|-----------------------|
|comida de gato | 1 | 0 | 0 |
|comida de perro | 0 | 1 | 0 |
|bacon | 0 | 0 | 1 |
```
Beware: this kind of encoding must be used with care on (very) large datasets where the number of categories runs into the hundreds or thousands, because every category generates a new column in our dataset.
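As an aside, pandas can produce this same one-hot layout directly with get_dummies; a minimal sketch on a hypothetical miniature of the comida favorita column (this is only an illustration — the notebook itself uses DictVectorizer):

```python
import pandas as pd

# Hypothetical miniature of the 'comida favorita' column
mini = pd.DataFrame({'comida favorita': ['comida de gato', 'comida de perro', 'bacon']})
one_hot = pd.get_dummies(mini, columns=['comida favorita'])
print(one_hot.columns.tolist())
# one column per category; each row contains exactly one 1
```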
All you need is... scikit-learn (sklearn)... well, not really.
The scikit-learn module contains the large majority of the tools we need to solve a typical machine learning problem. Here we can find classification, regression, and clustering algorithms, as well as data-preprocessing methods. More information on the official scikit-learn site.
To load this module we use the command
python
import sklearn
instead of its full name.
Another way to load functions from a module is to reference the function directly:
python
from sklearn.preprocessing import OneHotEncoder
Although the OneHotEncoder function exists in sklearn, we will use another one called DictVectorizer. This function takes numeric and/or categorical variables as input and returns the numeric variables as floats, applying one-hot encoding to the categorical ones.
<p class="alert alert-danger">
Note: the `DictVectorizer` function does not respect the input column order and may return a dataframe with the columns reordered. To make it respect the row order, we must specify the parameter **`orient='records'`**.</p>
End of explanation
"""
from numpy import log2
def entropia_perro_gato(count_perro, count_gato):
prob_perro = count_perro / float(count_perro + count_gato)
prob_gato = count_gato / float(count_perro + count_gato)
return 0.0 if not count_perro or not count_gato else -(prob_perro*log2(prob_perro) + prob_gato*log2(prob_gato))
"""
Explanation: With this, our dataframe is ready to be used with any of scikit-learn's classification algorithms.
5. Classification algorithm: decision tree
A classification tree splits the data into smaller and smaller subsets in order to determine which class they belong to. Basically, what this algorithm does is look for the question that best separates the data into two groups. It then looks for the next question that best splits each subgroup, and so on, until the groups are small enough that we are satisfied.
The questions
For this example we have as many possible questions as features. For instance, we can ask:
* Does the animal play fetch, yes or no?
* Is the animal apathetic, yes or no?
* Does it like cat food?
* Does it like dog food?
* Does it like bacon?
But what should we ask first?
To answer this kind of question, machine learning algorithms rely on a transfer function or a cost function. A decision tree uses a criterion called entropy to quantify how much uncertainty remains in the classification of the data. In other words, the question that reduces the uncertainty (entropy) the most will be the first question to ask.
Entropy is defined as follows:
$H(x) = - \sum_{i=1}^n p_i \log_2 p_i$
where $H$ is the entropy and $p_i$ is the probability of each class (dog or cat). Let's see what the entropy of our problem is:
Since we have 6 dogs and 8 cats, the probability of picking a dog at random is $\frac{6}{14}$, and that of a cat is $\frac{8}{14}$. The initial entropy of our problem is therefore:
$H(x) = -(\frac{6}{14}\log_2\frac{6}{14} + \frac{8}{14}\log_2\frac{8}{14}) = 0.9852... $
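A quick, self-contained check of this value (a sketch equivalent to the entropia_perro_gato function defined above):

```python
import numpy as np

def entropy(counts):
    """Shannon entropy (base 2) of a list of class counts."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]  # treat 0 * log2(0) as 0
    return float(-(p * np.log2(p)).sum())

print(round(entropy([6, 8]), 4))  # 0.9852, the initial dog/cat entropy
```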
End of explanation
"""
perro = dfl['especie']=='perro'
gato = dfl['especie']=='gato'
no_busca = dfl['juega al busca']==False
si_busca = dfl['juega al busca']==True
print('%d dogs and %d cats do like playing fetch. H=%0.4f' % (
    dfl[perro]['juega al busca'].sum(),  # we can count by summing the number of True values
    len(dfl[gato & si_busca]),           # or by filtering and counting how many values remain
    entropia_perro_gato(4, 1),
    ))
print('%d dogs and %d cats do not like playing fetch. H=%0.4f' % (
    len(df[perro & no_busca]),
    len(df[gato & no_busca]),
    entropia_perro_gato(len(dfl[perro & no_busca]),
                        len(dfl[gato & no_busca])),
    ))
"""
Explanation: Let's evaluate the question "does it like playing fetch?"
Let's compute the entropies:
End of explanation
"""
print(entropia_perro_gato(0,6))
print(entropia_perro_gato(6,2))
"""
Explanation: What about cat food?
0 dogs and 6 cats like cat food,
while 6 dogs and 2 cats do not.
Its entropy is:
End of explanation
"""
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier(criterion='entropy')
classifier.fit(dft, labels)
"""
Explanation: And don't forget the information gain
Now for the real thing: the automatic decision tree
End of explanation
"""
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 6)
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
feat = pd.DataFrame(index=dft.keys(), data=classifier.feature_importances_, columns=['score'])
feat = feat.sort_values(by='score', ascending=False)
feat.plot(kind='bar',rot=85)
"""
Explanation: 5.1 Feature importances
Since we are going to plot, we will use the Jupyter magic function
python
%matplotlib inline
which lets us draw plots inside the notebook.
End of explanation
"""
from sklearn.tree import export_graphviz
dotfile = open('perro_gato_tree.dot', 'w')
export_graphviz(
classifier,
out_file = dotfile,
filled=True,
feature_names = dft.columns,
class_names=list(labels),
rotate=True,
max_depth=None,
rounded=True,
)
dotfile.close()
"""
Explanation: 6. Visualizing the tree (requires graphviz)
conda install graphviz
End of explanation
"""
!dot -Tpng perro_gato_tree.dot -o perro_gato_tree.png
"""
Explanation: The previous cell exported the decision tree, built with sklearn and trained on our data, to a .dot file.
We will process this file with the dot command in the terminal. From Jupyter, we can run terminal commands without leaving the notebook:
End of explanation
"""
from IPython.display import Image
Image('perro_gato_tree.png', width=1000)
"""
Explanation: Finally, to load the image we use:
End of explanation
"""
import numpy as np
np.array(classifier.predict(dft))
np.array(labels)
print('Accuracy %0.4f' % ((np.array(classifier.predict(dft)) == np.array(labels)).sum() / float(len(labels))))  # fraction of matches is the accuracy, not the error rate
"""
Explanation: 7. Model evaluation
End of explanation
"""
test = pd.read_csv('data/perros_o_gatos_TEST.csv', index_col='observacion')
test
label_test = test['especie']
del test['especie']
ab = vectorizer.transform(test.to_dict(orient='records'))
dftest = pd.DataFrame(ab, columns=vectorizer.get_feature_names())
dftest.head()
list(classifier.predict(dftest))
list(label_test)
print('Accuracy %0.4f' % ((np.array(classifier.predict(dftest)) == np.array(label_test)).sum() / float(len(label_test))))  # fraction of matches is the accuracy, not the error rate
"""
Explanation: Now let's evaluate on data the model has never seen!
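The same check can also be written with scikit-learn's metrics module (a sketch; accuracy_score simply counts the fraction of matching labels):

```python
from sklearn.metrics import accuracy_score

# Hypothetical predictions vs. true labels
y_true = ['perro', 'gato', 'gato', 'perro']
y_pred = ['perro', 'gato', 'perro', 'perro']
print(accuracy_score(y_true, y_pred))  # 0.75
```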
End of explanation
"""
|
Vvkmnn/books | TensorFlowForMachineIntelligence/chapters/05_object_recognition_and_classification/Chapter 5 - 02 Convolutions.ipynb | gpl-3.0 | # setup-only-ignore
import tensorflow as tf
import numpy as np
# setup-only-ignore
sess = tf.InteractiveSession()
input_batch = tf.constant([
[ # First Input
[[0.0], [1.0]],
[[2.0], [3.0]]
],
[ # Second Input
[[2.0], [4.0]],
[[6.0], [8.0]]
]
])
kernel = tf.constant([
[
[[1.0, 2.0]]
]
])
"""
Explanation: Convolution
As the name implies, convolution operations are an important component of convolutional neural networks. The ability for a CNN to accurately match diverse patterns can be attributed to using convolution operations. These operations require complex input which was shown in the previous section. In this section we'll experiment with convolution operations and the parameters which are available to tune them.
<p style="text-align: center;"><i>Convolution operation convolving two input tensors (input and kernel) into a single output tensor which represents information from each input.</i></p>
<br />
Input and Kernel
Convolution operations in TensorFlow are done using tf.nn.conv2d in a typical situation. There are other convolution operations available using TensorFlow designed with special use cases. tf.nn.conv2d is the preferred convolution operation to begin experimenting with. For example, we can experiment with convolving two tensors together and inspect the result.
End of explanation
"""
conv2d = tf.nn.conv2d(input_batch, kernel, strides=[1, 1, 1, 1], padding='SAME')
sess.run(conv2d)
"""
Explanation: The example code creates two tensors. The input_batch tensor has a similar shape to the image_batch tensor seen in the previous section. This will be the first tensor being convolved, and the second tensor will be kernel. Kernel is an important term that is interchangeable with weights, filter, convolution matrix or mask. Since this task is computer vision related, it's useful to use the term kernel because it is being treated as an [image kernel](https://en.wikipedia.org/wiki/Kernel_(image_processing)). There is no practical difference in the term when used to describe this functionality in TensorFlow. The parameter in TensorFlow is named filter and it expects a set of weights which will be learned from training. The number of different weights included in the kernel (filter parameter) configures the number of kernels that will be learned.
In the example code, there is a single kernel which is the first dimension of the kernel variable. The kernel is built to return a tensor which will include one channel with the original input and a second channel with the original input doubled. In this case, channel is used to describe the elements in a rank 1 tensor (vector). Channel is a term from computer vision which describes the output vector, for example an RGB image has three channels represented as a rank 1 tensor [red, green, blue]. At this time, ignore the strides and padding parameter which will be covered later and focus on the convolution (tf.nn.conv2d) output.
End of explanation
"""
lower_right_image_pixel = sess.run(input_batch)[0][1][1]
lower_right_kernel_pixel = sess.run(conv2d)[0][1][1]
lower_right_image_pixel, lower_right_kernel_pixel
"""
Explanation: The output is another tensor which is the same rank as the input_batch but includes the number of dimensions found in the kernel. Consider if input_batch represented an image, the image would have a single channel, in this case it could be considered a grayscale image (see Working with Colors). Each element in the tensor would represent one pixel of the image. The pixel in the bottom right corner of the image would have the value of 3.0.
Consider the tf.nn.conv2d convolution operation as a combination of the image (represented as input_batch) and the kernel tensor. The convolution of these two tensors creates a feature map. Feature map is a broad term, except in computer vision, where it relates to the output of operations which work with an image kernel. The feature map now represents the convolution of these tensors by adding new layers to the output.
The relationship between the input images and the output feature map can be explored with code. Accessing elements from the input batch and the feature map is done using the same index. Accessing the same pixel in both the input and the feature map shows how the input changed when it was convolved with the kernel. In the following case, the lower right pixel in the image was changed to output the value found by multiplying <span class="math-tex" data-type="tex">\(3.0 * 1.0\)</span> and <span class="math-tex" data-type="tex">\(3.0 * 2.0\)</span>. The values correspond to the pixel value and the corresponding value found in the kernel.
End of explanation
"""
input_batch = tf.constant([
[ # First Input (6x6x1)
[[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]],
[[0.1], [1.1], [2.1], [3.1], [4.1], [5.1]],
[[0.2], [1.2], [2.2], [3.2], [4.2], [5.2]],
[[0.3], [1.3], [2.3], [3.3], [4.3], [5.3]],
[[0.4], [1.4], [2.4], [3.4], [4.4], [5.4]],
[[0.5], [1.5], [2.5], [3.5], [4.5], [5.5]],
],
])
kernel = tf.constant([ # Kernel (3x3x1)
[[[0.0]], [[0.5]], [[0.0]]],
[[[0.0]], [[1.0]], [[0.0]]],
[[[0.0]], [[0.5]], [[0.0]]]
])
# NOTE: the change in the size of the strides parameter.
conv2d = tf.nn.conv2d(input_batch, kernel, strides=[1, 3, 3, 1], padding='SAME')
sess.run(conv2d)
"""
Explanation: In this simplified example, each pixel of every image is multiplied by the corresponding value found in the kernel and then added to a corresponding layer in the feature map. Layer, in this context, is referencing a new dimension in the output. With this example, it's hard to see a value in convolution operations.
Strides
The value of convolutions in computer vision is their ability to reduce the dimensionality of the input, which is an image in this case. An image's dimensionality (2D image) is its width, height and number of channels. A large image dimensionality requires an exponentially larger amount of time for a neural network to scan over every pixel and judge which ones are important. Reducing dimensionality of an image with convolutions is done by altering the strides of the kernel.
The strides parameter causes the kernel to skip over pixels of an image and not include them in the output. It isn't quite fair to say the pixels are skipped, because they may still affect the output. The strides parameter matters most when a larger image and a more complex kernel are used: as a convolution slides the kernel over the input, the strides change how it walks over the input. Instead of visiting every element of the input, the strides parameter can configure the convolution to skip certain elements.
For example, take the convolution of a larger image and a larger kernel. In this case, it's a convolution between a 6 pixel tall, 6 pixel wide and 1 channel deep image (6x6x1) and a (3x3x1) kernel.
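The output size implied by a given stride can be sketched with the usual formulas (an illustration of the arithmetic behind tf.nn.conv2d's documented SAME/VALID behavior):

```python
import math

def conv_output_size(in_size, kernel_size, stride, padding):
    """Spatial output size of a convolution along one dimension."""
    if padding == 'SAME':
        return math.ceil(in_size / stride)
    if padding == 'VALID':
        return math.ceil((in_size - kernel_size + 1) / stride)
    raise ValueError(padding)

print(conv_output_size(6, 3, 3, 'SAME'))   # 2 -- the 6x6 input above becomes 2x2
print(conv_output_size(6, 3, 3, 'VALID'))  # 2
```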
End of explanation
"""
# setup-only-ignore
import matplotlib as mil
#mil.use('svg')
mil.use("nbagg")
from matplotlib import pyplot
fig = pyplot.gcf()
fig.set_size_inches(4, 4)
image_filename = "./images/chapter-05-object-recognition-and-classification/convolution/n02113023_219.jpg"
image_filename = "/Users/erikerwitt/Downloads/images/n02085936-Maltese_dog/n02085936_804.jpg"
filename_queue = tf.train.string_input_producer(
tf.train.match_filenames_once(image_filename))
image_reader = tf.WholeFileReader()
_, image_file = image_reader.read(filename_queue)
image = tf.image.decode_jpeg(image_file)
sess.run(tf.initialize_all_variables())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
image_batch = tf.image.convert_image_dtype(tf.expand_dims(image, 0), tf.float32, saturate=False)
kernel = tf.constant([
[
[[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]],
[[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]],
[[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]]
],
[
[[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]],
[[ 8., 0., 0.], [ 0., 8., 0.], [ 0., 0., 8.]],
[[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]]
],
[
[[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]],
[[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]],
[[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]]
]
])
conv2d = tf.nn.conv2d(image_batch, kernel, [1, 1, 1, 1], padding="SAME")
activation_map = sess.run(tf.minimum(tf.nn.relu(conv2d), 255))
# setup-only-ignore
fig = pyplot.gcf()
pyplot.imshow(activation_map[0], interpolation='nearest')
#pyplot.show()
fig.set_size_inches(4, 4)
fig.savefig("./images/chapter-05-object-recognition-and-classification/convolution/example-edge-detection.png")
"""
Explanation: The input_batch was combined with the kernel by moving the kernel over the input_batch, striding (or skipping) over certain elements. Each time the kernel moved, it was centered over an element of input_batch. Then the overlapping values were multiplied together and the results summed. This is how a convolution combines two inputs, using what's referred to as pointwise multiplication. It may be easier to visualize using the following figure.
In this figure, the same logic is done as what is found in the code. Two tensors convolved together while striding over the input. The strides reduced the dimensionality of the output a large amount while the kernel size allowed the convolution to use all the input values. None of the input data was completely removed from striding but now the input is a smaller tensor.
Strides are a way to adjust the dimensionality of input tensors. Reducing dimensionality requires less processing power and keeps receptive fields from completely overlapping. The strides parameter follows the same format as the input tensor [image_batch_size_stride, image_height_stride, image_width_stride, image_channels_stride]. Changing the first or last element of the strides parameter is rare; doing so would skip data in a tf.nn.conv2d operation and not take that input into account. The image_height_stride and image_width_stride are the useful ones to alter when reducing input dimensionality.
A common challenge when striding over the input is a stride which doesn't end evenly at the edge of the input; this happens whenever the image size and kernel size don't match the stride. If the image size, kernel size and strides can't be changed, then padding can be added to the image to deal with the uneven area.
Padding
When a kernel is overlapped on an image it should be set to fit within the bounds of the image. At times, the sizing may not fit and a good alternative is to fill the missing area in the image. Filling the missing area of the image is known as padding the image. TensorFlow will pad the image with zeros or raise an error when the sizes don't allow a kernel to stride over an image without going past its bounds. The amount of zeros or the error state of tf.nn.conv2d is controlled by the parameter padding which has two possible values ('VALID', 'SAME').
SAME: The convolution output is the SAME size as the input. This doesn't take the filter's size into account when calculating how to stride over the image. This may stride over more of the image than what exists in the bounds while padding all the missing values with zero.
VALID: Take the filter's size into account when calculating how to stride over the image. This will try to keep as much of the kernel inside the image's bounds as possible. There may still be padding in some cases, but it is avoided whenever possible.
It's best to consider the size of the input but if padding is necessary then TensorFlow has the option built in. In most simple scenarios, SAME is a good choice to begin with. VALID is preferential when the input and kernel work well with the strides. For further information, TensorFlow covers this subject well in the convolution documentation.
Data Format
There's another parameter to tf.nn.conv2d which isn't shown from these examples named data_format. The tf.nn.conv2d docs explain how to change the data format so the input, kernel and strides follow a format other than the format being used thus far. Changing this format is useful if there is an input tensor which doesn't follow the [batch_size, height, width, channel] standard. Instead of changing the input to match, it's possible to change the data_format parameter to use a different layout.
data_format: An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width].
| Data Format | Definition |
|:---: | :---: |
| N | Number of tensors in a batch, the batch_size. |
| H | Height of the tensors in each batch. |
| W | Width of the tensors in each batch. |
| C | Channels of the tensors in each batch. |
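Converting between the two layouts is just a permutation of the tensor axes. A minimal NumPy sketch (illustrative only, not from the original text):

```python
import numpy as np

# A batch of 2 images, each 4x4 pixels with 3 channels, in the default NHWC layout.
nhwc = np.zeros((2, 4, 4, 3))

# NHWC -> NCHW: move the channel axis ahead of height and width.
nchw = np.transpose(nhwc, (0, 3, 1, 2))
print(nchw.shape)  # (2, 3, 4, 4)

# NCHW -> NHWC is the inverse permutation.
back = np.transpose(nchw, (0, 2, 3, 1))
print(back.shape)  # (2, 4, 4, 3)
```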
Kernels in Depth
In TensorFlow the filter parameter is used to specify the kernel convolved with the input. Filters are commonly used in photography to adjust attributes of a picture, such as the amount of sunlight allowed to reach a camera's lens. In photography, filters allow a photographer to drastically alter the picture they're taking. The reason the photographer is able to alter their picture using a filter is that the filter can recognize certain attributes of the light coming into the lens. For example, a red lens filter will absorb (block) every frequency of light which isn't red, allowing only red to pass through the filter.
In computer vision, kernels (filters) are used to recognize important attributes of a digital image. They do this by using certain patterns to highlight when features exist in an image. A kernel replicating the red-filter example is implemented by using a reduced value for every color channel except red. In this case, the reds stay the same but all other colors are reduced.
The example seen at the start of this chapter uses a kernel designed to do edge detection. Edge detection kernels are common in computer vision applications and could be implemented using basic TensorFlow operations and a single tf.nn.conv2d operation.
End of explanation
"""
kernel = tf.constant([
[
[[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]],
[[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]],
[[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]]
],
[
[[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]],
[[ 5., 0., 0.], [ 0., 5., 0.], [ 0., 0., 5.]],
[[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]]
],
[
[[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]],
[[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]],
        [[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]]
]
])
conv2d = tf.nn.conv2d(image_batch, kernel, [1, 1, 1, 1], padding="SAME")
activation_map = sess.run(tf.minimum(tf.nn.relu(conv2d), 255))
# setup-only-ignore
fig = pyplot.gcf()
pyplot.imshow(activation_map[0], interpolation='nearest')
#pyplot.show()
fig.set_size_inches(4, 4)
fig.savefig("./images/chapter-05-object-recognition-and-classification/convolution/example-sharpen.png")
"""
Explanation: The output created by convolving an image with an edge detection kernel is made up of all the areas where an edge was detected. The code assumes a batch of images is already available (image_batch) with a real image loaded from disk. In this case, the image is an example image found in the Stanford Dogs Dataset. The kernel has three input and three output channels. The channels sync up to RGB values between <span class="math-tex" data-type="tex">\([0, 255]\)</span> with 255 being the maximum intensity. The tf.minimum and tf.nn.relu calls are there to keep the convolution values within the range of valid RGB colors of <span class="math-tex" data-type="tex">\([0, 255]\)</span>.
There are many other common kernels which can be used in this simplified example. Each will highlight different patterns in an image with different results. The following kernel will sharpen an image by increasing the intensity of color changes.
End of explanation
"""
# setup-only-ignore
filename_queue.close(cancel_pending_enqueues=True)
coord.request_stop()
coord.join(threads)
"""
Explanation: The values in the kernel were adjusted with the center of the kernel increased in intensity and the areas around the kernel reduced in intensity. The change matches patterns with intense pixels and increases their intensity, outputting an image which is visually sharpened. Note that the corners of the kernel are all 0 and don't affect the output, which operates in a plus-shaped pattern.
These kernels match patterns in images at a rudimentary level. A convolutional neural network matches edges and more by using a complex kernel it learned during training. The starting values for the kernel are usually random, and over time they're trained by the CNN's learning layer. When a CNN is complete, it starts running, and each image sent in is convolved with a kernel which is then adjusted based on whether the predicted value matches the labeled value of the image. For example, if a Sheepdog picture is classified as a Pit Bull by the CNN being trained, it will then change the filters a small amount to try to match Sheepdog pictures better.
Learning complex patterns with a CNN involves more than a single layer of convolution. Even the example code included a tf.nn.relu layer used to prepare the output for visualization. Convolution layers may occur more than once in a CNN but they'll likely include other layer types as well. These layers combined form the support network required for a successful CNN architecture.
End of explanation
"""
|
wikistat/Apprentissage | BackPropagation/backpropagation.ipynb | gpl-3.0 | %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
sb.set_style("whitegrid")
import numpy as np
from functools import reduce
"""
Explanation: <center>
<a href="http://www.insa-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo-insa.jpg" style="float:left; max-width: 120px; display: inline" alt="INSA"/></a>
<a href="http://wikistat.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/wikistat.jpg" style="max-width: 150px; display: inline" alt="Wikistat"/></a>
<a href="http://www.math.univ-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo_imt.jpg" width=400, style="float:right; display: inline" alt="IMT"/> </a>
</center>
High Dimensional & Deep Learning : Backpropagation in Multilayer Neural Networks
Reference : https://github.com/m2dsupsdlclass/lectures-labs
What is the Backpropagation?
Deep neural networks involve a huge number of parameters, corresponding to the weights and the biases appearing in the definition of the network. Given a training sample, all these parameters are estimated by minimizing an empirical loss function. The function to minimize is generally very complex and not convex.
The minimization of the loss function is done via an optimization algorithm such as the Stochastic Gradient Descent (SGD) algorithm or more recent variants. All these algorithms require at each step the computation of the gradient of the loss function.
The backpropagation algorithm (Rumelhart et al., 1986) is a method for computing the gradient of the loss function, in order to estimate the parameters of a - possibly deep - neural network. It is composed of a succession of a forward pass and a backward pass through the network in order to compute the gradient, and it is easily parallelisable.
Objective
The objectives of this TP are to :
* Understand the theory of the backpropagation algorithm
* Implement logistic regression and multilayer perceptron algorithms using backpropagation equations with numpy
* Use Keras to apply the same model
Library
End of explanation
"""
from sklearn.datasets import load_digits
digits = load_digits()
N = reduce(lambda x,y: x*y,digits.images[0].shape)
print("Image dimension : N=%d"%N)
K = len(set(digits.target))
print("Number of classes : K=%d"%K)
"""
Explanation: Dataset
The dataset we used is composed of 8x8 images pixel of hand written digits available within sklearn library.
sklearn.datasets.load_digits
End of explanation
"""
sample_index = 45
fig =plt.figure(figsize=(3, 3))
ax = fig.add_subplot(1,1,1)
ax.imshow(digits.images[sample_index], cmap=plt.cm.gray_r,
interpolation='nearest')
ax.set_title("image label: %d" % digits.target[sample_index])
ax.grid(False)
ax.axis('off')
"""
Explanation: Example
End of explanation
"""
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
data = np.asarray(digits.data, dtype='float32')
target = np.asarray(digits.target, dtype='int32')
X_train, X_test, y_train, y_test = train_test_split(
data, target, test_size=0.15, random_state=37)
# mean = 0 ; standard deviation = 1.0
scaler = preprocessing.StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
print("Data dimension and type")
print("X_train : " + str(X_train.shape) + ", " +str(X_train.dtype))
print("y_train : " + str(y_train.shape) + ", " +str(y_train.dtype))
print("X_test : " + str(X_test.shape) + ", " +str(X_test.dtype))
print("y_test : " + str(y_test.shape) + ", " +str(y_test.dtype))
"""
Explanation: Preprocessing
Normalization
Train / test split
End of explanation
"""
# Write here the one_hot function
def one_hot(n_classes,y):
##
return ohy
# %load solutions/one_hot_encoding.py
"""
Explanation: Utils Function
Write utils function that will be used later
One-hot encoding function
$$
OneHotEncoding(N_{class}, i=4) =
\begin{bmatrix}
0\\
0\\
0\\
0\\
1\\
0\\
0\\
0\\
0\\
0
\end{bmatrix}
$$
Where $N_{class}=10$ and $i \in [0,9]$
Exercise : Implement the one hot encoding function of an integer array for a fixed number of classes (similar to keras' to_categorical):
Ensure that your function works for several vectors at a time.
End of explanation
"""
ohy = one_hot(y=3,n_classes=10)
print("Expected : [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.] \n")
print("Computed :" + str(ohy))
"""
Explanation: Make sure the solution works on a 1D array:
End of explanation
"""
ohY = one_hot(n_classes=10, y=[0, 4, 9, 1])
print("Expected : [[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.] \n [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.] \n [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.] \n [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]]")
print("Computed :" + str(ohY))
"""
Explanation: Make sure the solution works on a 2D array:
End of explanation
"""
# keepdims option
x = np.array([[1,2,3],
[4,5,6]])
print("Sum all elements of array :")
sx = np.sum(x)
print(sx)
print("Sum all elements over axis (dimension) :" )
sx = np.sum(x, axis=-1)
print(str(sx), str(", Dimension :") ,str(sx.shape))
print("Sum all elements over axis and with keepdims (dimension) :" )
sx = np.sum(x, axis=-1, keepdims=True)
print(str(sx), str(", Dimension :") ,str(sx.shape))
# Write here the softmax function
def softmax(x):
###
return softmaxX
# %load solutions/softmax.py
"""
Explanation: The softmax function
$$
softmax(\mathbf{x}) = \frac{1}{\sum_{i=1}^{n}{e^{x_i}}}
\cdot
\begin{bmatrix}
e^{x_1}\\
e^{x_2}\\
\vdots\\
e^{x_n}
\end{bmatrix}
$$
Exercise : Implement the softmax function.
Ensure that your function works for several vectors at a time.
Hint : use the axis and keepdims argument of the numpy function np.sum.
End of explanation
"""
x = [10, 2, -3]
sx = softmax(x)
print("Expected : [9.99662391e-01 3.35349373e-04 2.25956630e-06]")
print("Computed " + str(sx))
print("Value Sum to one : %d" %np.sum(sx))
"""
Explanation: Make sure that your function works for a 1D array :
End of explanation
"""
X = np.array([[10, 2, -3],
[-1, 5, -20]])
sX = softmax(X)
print("Expected : [[9.99662391e-01 3.35349373e-04 2.25956630e-06] \n [2.47262316e-03 9.97527377e-01 1.38536042e-11]]")
print("Value found" + str(sX))
print("Value Sum to one : " + str(np.sum(sX, axis=-1)))
"""
Explanation: Make sure that your function works for a 2D array :
End of explanation
"""
# Write here the negative_log_likelihood function
EPSILON = 1e-8
def NegLogLike(Y_true, Fx):
###
return nll_mean
# %load solutions/negative_log_likelihood_function.py
"""
Explanation: Loss function
We consider the loss function associated to the cross-entropy. Minimizing this loss function corresponds to minimization of the negative log likelihood (which is equivalent to the maximization of the log likelihood).
According to course's notations, we have
$$ \ell(f(x),y) = -\log (f(x))_y = - \sum_{k=1}^K \mathbb{1}_{y=k} \log (f(x))_k $$
where $(f(x))_k = \mathbb{P}(Y=k~/~x)$, the predicted probability for the class $k$, when the input equals $x$.
Exercice:
Write a function that computes the mean negative likelihood (empirical loss) of a group of observations Y_true and Fx, where Y_true and Fx are respectively the one-hot encoded representation of the observed labels and the predictions for the associated inputs $x$ i.e. :
Y_true is the one-hot encoded representation of $y$
Fx is the output of a softmax function.
End of explanation
"""
# Simple case
ohy_true = [1, 0, 0]
fx = [.99, 0.01, 0]
nll1 = NegLogLike(ohy_true, fx)
print("A small value for the loss function :")
print("Expected value : 0.01005032575249135")
print("Computed : " + str(nll1) )
# Case with bad prediction
ohy_true = [1, 0, 0]
fx = [0.01, .99, 0]
nll2 = NegLogLike(ohy_true, fx)
print("Higher value for the loss function):")
print("Expected value : 4.605169185988592")
print("Computed : " + str(nll2) )
"""
Explanation: Make sure that your implementation can compute the loss function for a single observation.
End of explanation
"""
# Zero case
ohy_true = [1, 0, 0]
fx = [0, 0.01, 0.99]
nll3 = NegLogLike(ohy_true, fx)
print("Expected value : 18.420680743952367")
print("Computed : " + str(nll3) )
"""
Explanation: Make sure that your implementation can handle the case where $ (f(x))_y =0$.
End of explanation
"""
ohY_true = np.array([[0, 1, 0],
[1, 0, 0],
[0, 0, 1]])
Fx = np.array([[0, 1, 0],
[.99, 0.01, 0],
[0, 0, 1]])
nll4 = NegLogLike(ohY_true, Fx)
print("Expected value : 0.0033501019174971905")
print("Computed : " + str(nll4) )
"""
Explanation: Make sure that your implementation can compute the empirical loss for several observations.
End of explanation
"""
def sigmoid(X):
###
return sigX
def dsigmoid(X):
###
return dsig
# %load solutions/sigmoid.py
"""
Explanation: Sigmoid Function
Implement the sigmoid and its element-wise derivative dsigmoid functions:
$$
sigmoid(x) = \frac{1}{1 + e^{-x}}
$$
$$
dsigmoid(x) = sigmoid(x) \cdot (1 - sigmoid(x))
$$
End of explanation
"""
fig = plt.figure(figsize=(12,6))
ax = fig.add_subplot(1,1,1)
x = np.linspace(-5, 5, 100)
ax.plot(x, sigmoid(x), label='sigmoid')
ax.plot(x, dsigmoid(x), label='dsigmoid')
ax.legend(loc='best');
"""
Explanation: Display the sigmoid function and its derivative
End of explanation
"""
class LogisticRegression():
def __init__(self, input_size, output_size):
self.W = np.random.uniform(size=(input_size, output_size),
high=0.1, low=-0.1)
self.b = np.random.uniform(size=output_size,
high=0.1, low=-0.1)
self.output_size = output_size
def forward(self, X):
###
return sZ
def grad_loss(self, x, y_true):
###
grads = {"W": grad_W, "b": grad_b}
return grads
def train(self, x, y, learning_rate):
###
def loss(self, x, y):
nll = NegLogLike(one_hot(self.output_size, y), self.forward(x))
return nll
def predict(self, X):
if len(X.shape) == 1:
return np.argmax(self.forward(X))
else:
return np.argmax(self.forward(X), axis=1)
def accuracy(self, X, y):
y_preds = np.argmax(self.forward(X), axis=1)
acc = np.mean(y_preds == y)
return acc
# %load solutions/lr_class
"""
Explanation: Logistic Regression
In this section we will implement a logistic regression model trainable with SGD one observation at a time (On-line gradient descent).
Implementation
Complete the LogisticRegression class by following these steps (Use the functions you have written above) :
Notation : $x \in \mathbb{R}^N$, $y \in [0,...,K]$, $W \in \mathbb{R}^{K,N}$, $b \in \mathbb{R}^K$
Implement the forward function which computes the prediction of the model for the input $x$:
$$f(x) = softmax(\mathbf{W} x + b)$$
Implement the grad_loss function which computes the gradient of the loss function $ \ell(f(x),y) = -\log (f(x))_y $ (for an input $x$ and its corresponding observed output $y$) with respect to the parameters of the model $W$ and $b$ :
$$
\begin{array}{ll}
grad_W &= \frac{d}{dW} [-\log (f(x))_y] \\
grad_b &= \frac{d}{db} [-\log (f(x))_y]
\end{array}
$$
Hint:
$$
\begin{array}{ll}
\frac{d}{dW_{i,j}} [-\log (f(x))_y] &=
\begin{cases}
[f(x)_{y}-1]\,x_j, & \text{if}\ i=y \\
f(x)_{i}\,x_j, & \text{otherwise}
\end{cases} \\
\frac{d}{db_{i}} [-\log (f(x))_y] &=
\begin{cases}
f(x)_{y}-1, & \text{if}\ i=y \\
f(x)_{i}, & \text{otherwise}
\end{cases}
\end{array}
$$
Implement the train function which uses the grad_loss function output to update $\mathbf{W}$ and $b$ with a traditional SGD update without momentum :
$$
\begin{array}{ll}
W &= W - \varepsilon \frac{d}{dW} [-\log (f(x))_y] \\
b &= b - \varepsilon \frac{d}{db} [-\log (f(x))_y]
\end{array}
$$
End of explanation
"""
# Init the model
lr = LogisticRegression(N, K)
print("Evaluation of the untrained model:")
train_loss = lr.loss(X_train, y_train)
train_acc = lr.accuracy(X_train, y_train)
test_acc = lr.accuracy(X_test, y_test)
print("train loss: %0.4f, train acc: %0.3f, test acc: %0.3f"
% (train_loss, train_acc, test_acc))
lr.W.shape
"""
Explanation: Evaluate the model without training
End of explanation
"""
def plot_prediction(model, sample_idx=0, classes=range(10)):
fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2, figsize=(10, 4))
ax0.imshow(scaler.inverse_transform(X_test[sample_idx]).reshape(8, 8), cmap=plt.cm.gray_r,
interpolation='nearest')
ax0.set_title("True image label: %d" % y_test[sample_idx]);
ax0.grid(False)
ax0.axis('off')
ax1.bar(classes, one_hot(len(classes), y_test[sample_idx]), label='true')
ax1.bar(classes, model.forward(X_test[sample_idx]), label='prediction', color="red")
ax1.set_xticks(classes)
prediction = model.predict(X_test[sample_idx])
ax1.set_title('Output probabilities (prediction: %d)'
% prediction)
ax1.set_xlabel('Digit class')
ax1.legend()
plot_prediction(lr, sample_idx=0)
"""
Explanation: Evaluate the randomly initialized model on the first example:
End of explanation
"""
learning_rate = 0.01
for i, (x, y) in enumerate(zip(X_train, y_train)):
lr.train(x, y, learning_rate)
if i % 100 == 0:
train_loss = lr.loss(X_train, y_train)
train_acc = lr.accuracy(X_train, y_train)
test_acc = lr.accuracy(X_test, y_test)
print("Update #%d, train loss: %0.4f, train acc: %0.3f, test acc: %0.3f"
% (i, train_loss, train_acc, test_acc))
"""
Explanation: Train the model for one epoch
End of explanation
"""
plot_prediction(lr, sample_idx=0)
"""
Explanation: Evaluate the trained model on the first example:
End of explanation
"""
class NeuralNet():
"""MLP with 1 hidden layer with a sigmoid activation"""
def __init__(self, input_size, hidden_size, output_size):
self.W_h = np.random.uniform(
size=(input_size, hidden_size), high=0.01, low=-0.01)
self.b_h = np.zeros(hidden_size)
self.W_o = np.random.uniform(
size=(hidden_size, output_size), high=0.01, low=-0.01)
self.b_o = np.zeros(output_size)
self.output_size = output_size
def forward(self, X, keep_activation=False):
###
rep = [fx, h, z_h] if keep_activation else fx
return rep
def loss(self, X, y):
fx = self.forward(X)
ohy = one_hot(self.output_size, y)
nll = NegLogLike(ohy, fx)
return nll
def grad_loss(self, X, y_true):
####
grads = {"W_h": grad_W_h, "b_h": grad_b_h,
"W_o": grad_W_o, "b_o": grad_b_o}
return grads
def train(self, x, y, learning_rate):
# Traditional SGD update on one sample at a time
grads = self.grad_loss(x, y)
self.W_h = self.W_h - learning_rate * grads["W_h"]
self.b_h = self.b_h - learning_rate * grads["b_h"]
self.W_o = self.W_o - learning_rate * grads["W_o"]
self.b_o = self.b_o - learning_rate * grads["b_o"]
def predict(self, X):
fx = self.forward(X)
if len(X.shape) == 1:
yp = np.argmax(fx)
else:
yp = np.argmax(fx, axis=1)
return yp
def accuracy(self, X, y):
y_preds = np.argmax(self.forward(X), axis=1)
return np.mean(y_preds == y)
# %load solutions/nn_class.py
"""
Explanation: Multi Layer Perceptron
In this section we consider a neural network model with one hidden layer using the sigmoid activation function.
You will implement the backpropagation algorithm (with the chain rule).
Implementation
Complete the NeuralNet class following these steps :
Notation : $x \in \mathbb{R}^N$, $h \in \mathbb{R}^H$, $y \in [0,...,K]$, $W^{h} \in \mathbb{R}^{H,N}$, $b^h \in \mathbb{R}^H$, $W^{o} \in \mathbb{R}^{K,H}$, $b^o \in \mathbb{R}^K$
Implement the forward function for a model with one hidden layer with a sigmoid activation function:
$$
\begin{array}{lll}
\mathbf{h} &= sigmoid(\mathbf{W}^h \mathbf{x} + \mathbf{b^h}) &= sigmoid(z^h(x)) \\
f(x) &= softmax(\mathbf{W}^o \mathbf{h} + \mathbf{b^o}) &= softmax(z^o(x))
\end{array}
$$
which returns $f(x)$ if keep_activation = False and $f(x)$, $h$ and $z^h(x)$ otherwise (we keep all the intermediate values).
Implement the grad_loss function which computes the gradient of the loss function (for an $x$ and its corresponding observed output $y$) with respect to the parameters of the network $W^h$, $b^h$, $W^o$ and $b^o$ :
$$
\begin{array}{ll}
\nabla_{W^{o}}loss &= \frac{d}{dW^{o}} [-\log (f(x))_y] \\
\nabla_{b^{o}}loss &= \frac{d}{db^{o}} [-\log (f(x))_y] \\
\nabla_{W^{h}}loss &= \frac{d}{dW^{h}} [-\log (f(x))_y] \\
\nabla_{b^{h}}loss &= \frac{d}{db^{h}} [-\log (f(x))_y]
\end{array}
$$
Hint:
$$
\begin{array}{ll}
\frac{d}{dz^o_{i}} [-\log (f(x))_y] &=
\begin{cases}
f(x)_{y}-1, & \text{if}\ i=y \\
f(x)_{i}, & \text{otherwise}
\end{cases} \\
\frac{d}{dW^o_{i,j}} [-\log (f(x))_y] &=
\begin{cases}
[f(x)_{y}-1]\,h_j, & \text{if}\ i=y \\
f(x)_{i}\,h_j, & \text{otherwise}
\end{cases} \\
\frac{d}{db^o_{i}} [-\log (f(x))_y] &=
\begin{cases}
f(x)_{y}-1, & \text{if}\ i=y \\
f(x)_{i}, & \text{otherwise}
\end{cases} \\
\frac{d}{dh_{j}} [-\log (f(x))_y] &= \nabla_{z^{o}}loss \cdot W^o_{.,j} \\
\frac{d}{dz^h_{j}} [-\log (f(x))_y] &= \nabla_{z^{o}}loss \cdot W^o_{.,j} * dsigmoid(z^h_{j}) \\
\frac{d}{dW^h_{j,l}} [-\log (f(x))_y] &= \nabla_{z^h}loss_j * x_l \\
\frac{d}{db^h_{j}} [-\log (f(x))_y] &= \nabla_{z^h}loss_j
\end{array}
$$
End of explanation
"""
H = 10
model = NeuralNet(N, H, K)
print("Evaluation of the untrained model:")
train_loss = model.loss(X_train, y_train)
train_acc = model.accuracy(X_train, y_train)
test_acc = model.accuracy(X_test, y_test)
print("train loss: %0.4f, train acc: %0.3f, test acc: %0.3f"
% (train_loss, train_acc, test_acc))
plot_prediction(model, sample_idx=5)
"""
Explanation: Evaluate the model without training
End of explanation
"""
losses, losses_test, accuracies, accuracies_test = [], [], [], []
losses.append(model.loss(X_train, y_train))
losses_test.append(model.loss(X_test, y_test))
accuracies.append(model.accuracy(X_train, y_train))
accuracies_test.append(model.accuracy(X_test, y_test))
print("Random init: train loss: %0.5f, train acc: %0.3f, test acc: %0.3f"
% (losses[-1], accuracies[-1], accuracies_test[-1]))
for epoch in range(15):
for i, (x, y) in enumerate(zip(X_train, y_train)):
model.train(x, y, 0.1)
losses.append(model.loss(X_train, y_train))
losses_test.append(model.loss(X_test, y_test))
accuracies.append(model.accuracy(X_train, y_train))
accuracies_test.append(model.accuracy(X_test, y_test))
print("Epoch #%d, train loss: %0.5f, train acc: %0.3f, test acc: %0.3f"
% (epoch + 1, losses[-1], accuracies[-1], accuracies_test[-1]))
plot_prediction(model, sample_idx=5)
"""
Explanation: Train the model for several epochs
End of explanation
"""
fig = plt.figure(figsize=(20,5))
ax = fig.add_subplot(1,2,1)
ax.plot(losses,label='train')
ax.plot(losses_test,label='test')
ax.set_xlabel("Epochs")
ax.set_ylabel("Loss")
ax.legend(loc='best');
ax.set_title("Training loss");
ax = fig.add_subplot(1,2,2)
ax.plot(accuracies, label='train')
ax.plot(accuracies_test, label='test')
ax.set_ylabel("accuracy")
ax.set_xlabel("Epochs")
ax.legend(loc='best');
ax.set_title("Accuracy");
plot_prediction(model, sample_idx=4)
"""
Explanation: Loss evolution per epoch
End of explanation
"""
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.utils import to_categorical
n_features = 64
n_classes = 10
n_hidden = 10
# %load solutions/mlp_keras.py
"""
Explanation: Tensorflow/keras
Implement the same multilayer perceptron using Keras
End of explanation
"""
# %load solutions/plot_prediction_keras.py
"""
Explanation: Implement a function that produces the same results as the plot_prediction function but with the Keras model output
End of explanation
"""
# %load solutions/compare_loss_acc.py
"""
Explanation: Compare loss and accuracy of keras and numpy
End of explanation
"""
|
andrewzwicky/puzzles | FiveThirtyEightRiddler/2016-10-14/2016-10-14.ipynb | mit | import itertools
# heads = True
# tails = False
# Initialize coins to all heads
coins = [True]*100
for factor in range(100):
# This will generate N zeros, then a 1. This repeats forever
flip_generator = itertools.cycle([0]*factor+[1])
# This will take the first 100 items from the generator
flips = itertools.islice(flip_generator,100)
for index, flip in enumerate(flips):
if flip:
coins[index] = not coins[index]
# 1 has to be added to account for python 0-indexing
coins_tails = [index+1 for index,state in enumerate(coins) if state == False]
print(coins_tails)
import numpy as np
import itertools
# Alternative approach which counts the amount of flips. If even, the coin remains heads up.
# If odd, the coin would end up tails up.
total_flips = [0]*100
for factor in range(100):
# This will generate N zeros, then a 1. This repeats forever
flip_generator = itertools.cycle([0]*factor+[1])
# This will take the first 100 items from the generator
flips = list(itertools.islice(flip_generator,100))
total_flips = np.sum((total_flips,flips),axis=0)
# 1 has to be added to account for python 0-indexing
odd_flips = [index+1 for index, num_flips in enumerate(total_flips) if num_flips % 2 == 1]
print(odd_flips)
"""
Explanation: Riddler Express
You place 100 coins heads up in a row and number them by position, with the coin all the way on the left No. 1 and the one on the rightmost edge No. 100. Next, for every number N, from 1 to 100, you flip over every coin whose position is a multiple of N. For example, first you’ll flip over all the coins, because every number is a multiple of 1. Then you’ll flip over all the even-numbered coins, because they’re multiples of 2. Then you’ll flip coins No. 3, 6, 9, 12… And so on.
What do the coins look like when you’re done? Specifically, which coins are heads down?
End of explanation
"""
%matplotlib inline
import numpy as np
NUM_SPACES = 1000
probs = np.zeros((1000,1000))
# Seed first column of probabilities
# The first 6 values should be 1/6
probs[0:6,0] = np.array([1/6]*6)
for col in np.arange(1,NUM_SPACES):
for row in np.arange(NUM_SPACES):
target_col = col-1
start_row = max(0,row-6)
end_row = max(0,row)
new_val = sum(probs[start_row:end_row,target_col])/6
probs[row,col] = new_val
from matplotlib import pyplot as plt
sum_probs = np.sum(probs,axis=1)
x1 = np.arange(1,31)
y1 = sum_probs[:30]
plt.plot(x1,y1,marker='.',color='b')
plt.ylim(0)
plt.draw()
print(np.argmax(sum_probs)+1)
"""
Explanation: Classic Riddler
While traveling in the Kingdom of Arbitraria, you are accused of a heinous crime. Arbitraria decides who’s guilty or innocent not through a court system, but a board game. It’s played on a simple board: a track with sequential spaces numbered from 0 to 1,000. The zero space is marked “start,” and your token is placed on it. You are handed a fair six-sided die and three coins. You are allowed to place the coins on three different (nonzero) spaces. Once placed, the coins may not be moved.
After placing the three coins, you roll the die and move your token forward the appropriate number of spaces. If, after moving the token, it lands on a space with a coin on it, you are freed. If not, you roll again and continue moving forward. If your token passes all three coins without landing on one, you are executed. On which three spaces should you place the coins to maximize your chances of survival?
Extra credit: Suppose there’s an additional rule that you cannot place the coins on adjacent spaces. What is the ideal placement now? What about the worst squares — where should you place your coins if you’re making a play for martyrdom?
End of explanation
"""
second_probs = np.zeros((1000,1000))
# Seed first column of probabilities
# The first 5 values should be 1/6
second_probs[0:5,0] = np.array([1/6]*5)
for col in np.arange(1,NUM_SPACES):
for row in np.arange(NUM_SPACES):
target_col = col-1
start_row = max(0,row-6)
end_row = max(0,row)
new_val = sum(second_probs[start_row:end_row,target_col])/6
if row == 5:
second_probs[row,col] = 0
else:
second_probs[row,col] = new_val
from matplotlib import pyplot as plt
sum_second_probs = np.sum(second_probs,axis=1)
x2 = np.arange(1,31)
y2 = sum_second_probs[:30]
plt.plot(x2[:5],y2[:5],marker='.',color='b')
plt.plot(x2[6:31],y2[6:31],marker='.',color='b')
plt.ylim(0)
plt.draw()
print(np.argmax(sum_second_probs)+1)
third_probs = np.zeros((1000,1000))
# Seed first column of probabilities
# The first 4 values should be 1/6
third_probs[0:4,0] = np.array([1/6]*4)
for col in np.arange(1,NUM_SPACES):
for row in np.arange(NUM_SPACES):
target_col = col-1
start_row = max(0,row-6)
end_row = max(0,row)
new_val = sum(third_probs[start_row:end_row,target_col])/6
if row == 5 or row == 4:
third_probs[row,col] = 0
else:
third_probs[row,col] = new_val
from matplotlib import pyplot as plt
sum_third_probs = np.sum(third_probs,axis=1)
x3 = np.arange(1,31)
y3 = sum_third_probs[:30]
plt.plot(x3[:4],y3[:4],marker='.',color='b')
plt.plot(x3[6:31],y3[6:31],marker='.',color='b')
plt.ylim(0)
plt.draw()
print(np.argmax(sum_third_probs)+1)
plt.plot(x1,y1,marker='.',color='k')
plt.plot(x2[:5],y2[:5],marker='.',color='b')
plt.plot(x2[6:31],y2[6:31],marker='.',color='b')
plt.plot(x3[:4],y3[:4],marker='.',color='r')
plt.plot(x3[6:31],y3[6:31],marker='.',color='r')
plt.ylim(0)
plt.draw()
print([np.argmax(sum_probs)+1,
np.argmax(sum_second_probs)+1,
np.argmax(sum_third_probs)+1])
# Implementing the recursive solution from
# http://www.laurentlessard.com/bookproofs/the-deadly-board-game/
p_cache = dict()
def p(k):
try:
return p_cache[k]
except KeyError:
if k == 0:
answer = float(1)
elif k < 0:
answer = float(0)
else:
answer = float((p(k-1)+p(k-2)+p(k-3)+p(k-4)+p(k-5)+p(k-6))/6)
p_cache[k] = answer
return answer
def q(k,m):
return p(k)+p(m)-p(k)*p(m-k)
def r(k,m,n):
return p(k)+p(m)+p(n)-p(k)*p(m-k)-p(k)*p(n-k)-p(m)*p(n-m)+p(k)*p(m-k)*p(n-m)
v = range(1,20)
#single = [p(k) for k in v]
#double = [[q(k,m) for k in v] for m in v]
p_vec = np.vectorize(p)
q_vec = np.vectorize(q)
r_vec = np.vectorize(r)
single = np.fromfunction(p_vec,(20,))
double = np.fromfunction(q_vec,(20,20))
triple = np.fromfunction(r_vec,(20,20,20))
np.argmax(triple[1:20,1:20,1:20])
plt.plot(v,single[1:],marker='.')
plt.show()
plt.imshow(double[1:,1:], cmap='viridis',interpolation ='nearest',extent = (0.5,19.5,19.5,0.5))
plt.show()
import itertools
from matplotlib import animation
from IPython.display import HTML
fig = plt.figure()
im = plt.imshow(triple[1:20,1:20,1],
cmap='viridis',
interpolation='nearest',
extent = (0.5,19.5,19.5,0.5))
cycler = itertools.cycle(v)
def updatefig(i):
z = next(cycler)
im.set_array(triple[1:20,1:20,z])
return [im]
ani = animation.FuncAnimation(fig, updatefig, interval=200, blit=True)
HTML(ani.to_html5_video())
# np.argmax of a scalar max is always 0; take argmax over the sliced array instead,
# then add 1 to each index to recover board positions
i = np.argmax(triple[1:, 1:, 1:])
np.array(np.unravel_index(i, (19, 19, 19))) + 1
i = np.argmax(double[1:, 1:])
np.array(np.unravel_index(i, (19, 19))) + 1
"""
Explanation: If I had not seen this particular tweet (https://twitter.com/xaqwg/status/787791061821161472), I might have stopped there, with the answers [5, 6, 11].
However, I see now how the next coin must account for the odds assuming the 6 is NOT hit (because otherwise the game is already over). If a significant portion of 11's probability comes from rolls that have previously hit 6, it may not be the best choice.
Still, I wasn't able to figure out why my graphs were not modified for the spaces < 3. I don't see how those probabilities could have been modified, but I'll wait to see the answer.
End of explanation
"""
|
planet-os/notebooks | api-examples/SMAP_package-api.ipynb | mit | import time
import os
from package_api import download_data
import xarray as xr
from netCDF4 import Dataset, num2date
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
import matplotlib
import datetime
import warnings
warnings.filterwarnings("ignore",category=matplotlib.cbook.mplDeprecation)
"""
Explanation: Soil Moisture Active Passive (SMAP) Level 4 Data demo
In this demo we download data using the Planet OS Package API, which lets us retrieve a larger amount of data in less time than the Raster API.
We show the droughts in Portugal and Spain that may have contributed to the over 600 wildfires that happened during summer and autumn 2017. Note that the dates in the demo have changed, so the description might not match the data exactly.
End of explanation
"""
API_key = open('APIKEY').read().strip()
"""
Explanation: <font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>
End of explanation
"""
def get_start_end(days):
date = datetime.datetime.now() - datetime.timedelta(days=days)
time_start = date.strftime('%Y-%m-%d') + 'T16:00:00'
time_end = date.strftime('%Y-%m-%d') + 'T21:00:00'
return time_start,time_end
latitude_south = 15.9; latitude_north = 69.5
longitude_west = -17.6; longitude_east = 38.6
area = 'europe'
days = 6
time_start,time_end = get_start_end(days)
dataset_key = 'nasa_smap_spl4smau'
variable = 'Analysis_Data__sm_surface_analysis'
"""
Explanation: Here we define the area we are interested in, the time range for which we want the data, the dataset key to use, and the variable name.
End of explanation
"""
folder = os.path.realpath('.') + '/'
"""
Explanation: This cell determines the working directory; we need it to know where we are going to save the data. No worries, we will delete the file after using it!
End of explanation
"""
def make_image(lon,lat,data,date,latitude_north, latitude_south,longitude_west, longitude_east,unit,**kwargs):
m = Basemap(projection='merc', lat_0 = 55, lon_0 = -4,
resolution = 'i', area_thresh = 0.05,
llcrnrlon=longitude_west, llcrnrlat=latitude_south,
urcrnrlon=longitude_east, urcrnrlat=latitude_north)
lons,lats = np.meshgrid(lon,lat)
lonmap,latmap = m(lons,lats)
if len(kwargs) > 0:
fig=plt.figure(figsize=(10,8))
plt.subplot(221)
m.drawcoastlines()
m.drawcountries()
c = m.pcolormesh(lonmap,latmap,data,vmin = 0.01,vmax = 0.35)
plt.title(date)
plt.subplot(222)
m.drawcoastlines()
m.drawcountries()
plt.title(kwargs['date_later'])
m.pcolormesh(lonmap,latmap,kwargs['data_later'],vmin = 0.01,vmax = 0.35)
else:
fig=plt.figure(figsize=(9,7))
m.drawcoastlines()
m.drawcountries()
c = m.pcolormesh(lonmap,latmap,data,vmin = 0.01,vmax = 0.35)
plt.title(date)
cbar = plt.colorbar(c)
cbar.set_label(unit)
plt.show()
"""
Explanation: Now we define a function for making the images
End of explanation
"""
try:
package_key = download_data(folder,dataset_key,API_key,longitude_west,longitude_east,latitude_south,latitude_north,time_start,time_end,variable,area)
except:
days = 7
time_start,time_end = get_start_end(days)
package_key = download_data(folder,dataset_key,API_key,longitude_west,longitude_east,latitude_south,latitude_north,time_start,time_end,variable,area)
"""
Explanation: Here we download the data using the Package API. If you are interested in how the data is downloaded, see the file named package_api.py in the notebook folder.
End of explanation
"""
filename_europe = package_key + '.nc'
data = xr.open_dataset(filename_europe)
surface_soil_moisture_data = data.Analysis_Data__sm_surface_analysis
unit = surface_soil_moisture_data.units
surface_soil_moisture = data.Analysis_Data__sm_surface_analysis.values[0,:,:]
surface_soil_moisture = np.ma.masked_where(np.isnan(surface_soil_moisture),surface_soil_moisture)
latitude = data.lat; longitude = data.lon
lat = latitude.values
lon = longitude.values
date = str(data.time.values[0])[:-10]
"""
Explanation: Now we have data and we are reading it in using xarray:
End of explanation
"""
make_image(lon,lat,surface_soil_moisture,date,latitude_north, latitude_south,longitude_west, longitude_east,unit)
"""
Explanation: Here we make an image using the function defined above.
In this image we can see how dry the Iberian peninsula (Portugal and Spain) was during the wildfires in October. On 15th October, strong winds from Hurricane Ophelia quickly spread the flames along the Iberian coast. We can see that on that day the soil moisture of the Iberian peninsula was comparable with that of the African deserts.
End of explanation
"""
iberia_west = -10; iberia_east = 3.3
iberia_south = 35; iberia_north = 45
lon_ib = longitude.sel(lon=slice(iberia_west,iberia_east)).values
lat_ib = latitude.sel(lat=slice(iberia_north,iberia_south)).values
soil_ib = surface_soil_moisture_data.sel(lat=slice(iberia_north,iberia_south),lon=slice(iberia_west,iberia_east)).values[0,:,:]
soil_ib = np.ma.masked_where(np.isnan(soil_ib),soil_ib)
"""
Explanation: So let's look at Portugal and Spain a little closer. For that we need to define the area, and we will slice the data from this area.
End of explanation
"""
days2 = days -1
time_start, time_end = get_start_end(days2)
try:
package_key_iberia = download_data(folder,dataset_key,API_key,iberia_west,iberia_east,iberia_south,iberia_north,time_start,time_end,variable,area)
except:
days2 = days - 2
time_start, time_end = get_start_end(days2)
package_key_iberia = download_data(folder,dataset_key,API_key,iberia_west,iberia_east,iberia_south,iberia_north,time_start,time_end,variable,area)
filename_iberia = package_key_iberia + '.nc'
data_later = xr.open_dataset(filename_iberia)
soil_data_later = data_later.Analysis_Data__sm_surface_analysis
soil_later = data_later.Analysis_Data__sm_surface_analysis.values[0,:,:]
soil_later = np.ma.masked_where(np.isnan(soil_later),soil_later)
latitude_ib = data_later.lat; longitude_ib = data_later.lon
lat_ibl = latitude_ib.values
lon_ibl = longitude_ib.values
date_later = str(data_later.time.values[0])[:-10]
"""
Explanation: Let's also load some data from a later date.
End of explanation
"""
make_image(lon_ib,lat_ib,soil_ib,date,iberia_north, iberia_south,iberia_west, iberia_east, unit, data_later = soil_later,date_later = date_later)
"""
Explanation: Now we are making two images from the same area - Portugal and Spain.
On the left image we can see the soil moisture values on 15th October, and on the right image we can see the soil moisture on 21st October.
We can see that the land got a little wetter over a few days. It even helped firefighters get the wildfires under control.
End of explanation
"""
if os.path.exists(filename_europe):
os.remove(filename_europe)
if os.path.exists(filename_iberia):
os.remove(filename_iberia)
"""
Explanation: Finally, let's delete files we downloaded:
End of explanation
"""
|
dimitri-yatsenko/pipeline | python/example/DLC_workflow_detailed_explanation.ipynb | lgpl-3.0 | import datajoint as dj
from pipeline import pupil
"""
Explanation: pupil_new explanation (in detail)
This notebook explains the DeepLabCut workflow in detail.
Let's import pupil first (and datajoint)
End of explanation
"""
dj.ERD(pupil.schema)
"""
Explanation: OK, now let's see what is in the pupil module. The simplest way to understand this module is to call dj.ERD
End of explanation
"""
pupil.ConfigDeeplabcut()
pupil.ConfigDeeplabcut.heading
"""
Explanation: There are 3 particular tables we want to pay attention:
1. ConfigDeeplabcut (dj.Manual)
2. TrackedLabelsDeeplabcut (dj.Computed)
3. FittedContourDeeplabcut (dj.Computed)
let's look at ConfigDeeplabcut first
ConfigDeeplabcut
End of explanation
"""
pupil.TrackedLabelsDeeplabcut()
pupil.TrackedLabelsDeeplabcut.heading
"""
Explanation: ConfigDeeplabcut is a table that stores the configuration settings specific to a DeepLabCut (DLC) model. Whenever we update our model for some reason (which will most likely be Donnie), we need to ensure that the new config_path, with the appropriate shuffle and trainingsetindex, is inserted into this table.
For now, there is only one model (i.e. one model configuration), hence only 1 entry.
Now let's look at TrackedLabelsDeeplabcut
TrackedLabelDeeplabcut
End of explanation
"""
pupil.TrackedLabelsDeeplabcut.create_tracking_directory?
"""
Explanation: First things first: TrackedLabelsDeeplabcut takes ConfigDeeplabcut as a foreign key (as you can see from dj.ERD)
Under TrackedLabelsDeeplabcut, there are 3 part tables
Also, TrackedLabelsDeeplabcut is a complex table that performs the following:
Given a specific key (i.e. aniaml_id, session, scan_idx), it creates a needed directory structure by calling create_tracking_directory.
Make a 5 sec long short clip starting from the middle of the original video via make_short_video
Using DLC model, predict/ find labels on short video via predict_labels
From the labels on short video, obtain coordinates to be used to crop the original video via obtain_cropping_coords
Add additional pixels on cropping coordinates via add_pixels
Using the coordinates from step 4, crop and compress original video via make_compressed_cropped_video
Predict on compressed_cropped_video
I know it is a lot to digest, so let's look at it one by one
1. Given a specific key (i.e. animal_id, session, scan_idx), it creates the needed directory structure by calling create_tracking_directory.
Let's call create_tracking_directory? and see what that does
End of explanation
"""
# Uncomment this cell to see
import os
key = dict(animal_id = 20892, session=10, scan_idx=10)
tracking_dir = (pupil.TrackedLabelsDeeplabcut & key).fetch1('tracking_dir')
print(os.listdir(tracking_dir))
print(os.listdir(os.path.join(tracking_dir, 'short')))
print(os.listdir(os.path.join(tracking_dir, 'compressed_cropped')))
"""
Explanation: Basically, given a specific video, it creates a tracking directory, adds a symlink to the original video inside the tracking directory, and then creates 2 subdirectories, short and compressed_cropped. The reason we use such a hierarchy is that
1. we want to compress the video (not over time but only over space) so that we reduce the size of the video while DLC can still predict reliably,
2. we do not want to predict on the entire video, but only around the pupil area, hence we need to crop, and
3. in order to crop, we need to know where the pupil is, hence we make a 5 sec long (short) video and then use the DLC model to find appropriate cropping coordinates.
One can see a real example by looking at case 20892_10_10, one of the entries in the table
End of explanation
"""
pupil.TrackedLabelsDeeplabcut.OriginalVideo()
"""
Explanation: The .pickle and .h5 files are generated by the DLC model and are used to predict labels. We will talk about them very soon, but for now, notice that under tracking_dir we have the behavior video, 20892_9_10_beh.avi. This is, however, only a symlink to the actual video. Hence, even if we accidentally delete it, no harm comes to the actual video itself :)
Also, some original video info is saved in the part table OriginalVideo. Both the primary and secondary keys are self-explanatory
End of explanation
"""
pupil.TrackedLabelsDeeplabcut.make_short_video?
"""
Explanation: 2. Make a 5 sec long short clip starting from the middle of the original video via make_short_video
End of explanation
"""
pupil.TrackedLabelsDeeplabcut.ShortVideo()
"""
Explanation: This function is quite straightforward. Using the symlink, we access the original video and find the middle frame, which is then converted into an actual time (in hr:min:sec format). Then, using ffmpeg, we extract a 5 second long video and save it under the short directory.
The ShortVideo part table saves both the path to the short video (video_path) and starting_frame, which indicates the middle frame number of the original video.
End of explanation
"""
pupil.TrackedLabelsDeeplabcut.predict_labels?
"""
Explanation: 3. Using DLC model, predict/ find labels on short video via predict_labels
End of explanation
"""
pupil.TrackedLabelsDeeplabcut.obtain_cropping_coords?
"""
Explanation: Using the DLC model, we predict on the short video that was made in step 2. Quite straightforward here.
4. From the labels on short video, obtain coordinates to be used to crop the original video via obtain_cropping_coords
End of explanation
"""
import pandas as pd
df_short = pd.read_hdf(os.path.join(tracking_dir,'short', '20892_10_00010_beh_shortDeepCut_resnet50_pupil_trackFeb12shuffle1_600000.h5'))
df_short.head()
"""
Explanation: To fully understand what is going on here, a bit of background on DeepLabCut (DLC) is needed. When DLC predicts a label, it returns a likelihood for that label (a value between 0 and 1.0). Here, I used 0.9 as a threshold to decide whether a predicted label is accurate or not.
For example, we can take a quick look at how DLC predicted on the short video clip
End of explanation
"""
pupil.TrackedLabelsDeeplabcut.add_pixels?
"""
Explanation: 0.90 is probably more than good enough given how confident DLC is about the locations of the body parts.
But sometimes, like any DL model, DLC can predict somewhere completely wrong with high confidence. To filter those potential outliers, we only retain values within 1 std of the mean. Then, we find the min and max x and y coordinates from the 5 second long video.
Btw, we only look at eyelid_top, eyelid_bottom, eyelid_left, and eyelid_right as they are, in theory, the extremes of the bounding box to draw.
5. Add additional pixels on cropping coordinates via add_pixels
End of explanation
"""
pupil.TrackedLabelsDeeplabcut.make_compressed_cropped_video?
"""
Explanation: Now that we have coords to crop around, we add additional pixels (100 specifically) on top.
In my experience, 100 pixels were enough to ensure that even during drastic eyelid movements (i.e. eyes being super wide open), all the body parts stay within the cropping coordinates.
6. Using the coordinates from step 5, crop and compress original video via make_compressed_cropped_video
End of explanation
"""
pupil.TrackedLabelsDeeplabcut.CompressedCroppedVideo()
"""
Explanation: Using the cropping coordinates from step 5, we compress and crop the original video via ffmpeg and save it under the compressed_cropped directory. This takes around 15-25 minutes.
In the CompressedCroppedVideo part table, one can see the cropping coords (after adding added_pixels), how many pixels were added, and the video_path to the compressed, cropped video
End of explanation
"""
pupil.TrackedLabelsDeeplabcut.predict_labels?
"""
Explanation: 7. Predict on compressed_cropped_video
End of explanation
"""
pupil.FittedContourDeeplabcut()
"""
Explanation: Same as step 3, but this time using the compressed, cropped video. MAKE SURE YOU HAVE A GPU AVAILABLE. Otherwise, this will take a significantly longer time. With a GPU enabled, this takes around 20-40 minutes.
FittedContourDeeplabcut
Now that we have the tracked labels, it is time to fit. Here, we fit both a circle and an ellipse.
End of explanation
"""
print(key)
(pupil.FittedContourDeeplabcut & key).Circle.heading
(pupil.FittedContourDeeplabcut & key).Circle()
"""
Explanation: Circle
End of explanation
"""
from pipeline.utils import DLC_tools
DLC_tools.PupilFitting.detect_visible_pupil_area?
"""
Explanation: For the circle, we save the center coordinates as a tuple, the radius as a float, and visible_portion as a float. visible_portion is defined as follows: given a fitted circle or ellipse, subtract the area that is occluded by the eyelids, and return the portion of visible pupil area w.r.t. the fitted area. In theory, the value ranges from 0 (pupil completely invisible) to 1 (pupil completely visible). However, there are cases where the visible portion cannot be calculated:
DLC failed to predict all eyelid labels, hence the visible region cannot be obtained (evaluates to -1)
The number of predicted pupil labels is less than 3 for the circle (or 6 for the ellipse), so fitting did not happen; hence we know neither the area of the pupil nor the visible region (evaluates to -2)
Both case 1 and 2 happened (evaluates to -3)
At the beginning of the videos we have black screens, so neither eyelid nor pupil labels are predicted, which evaluates to -3.
As visible_portion comment indicates, one can find the same information from DLC_tools.PupilFitting.detect_visible_pupil_area
End of explanation
"""
print(key)
(pupil.FittedContourDeeplabcut & key).Ellipse.heading
(pupil.FittedContourDeeplabcut & key).Ellipse()
"""
Explanation: The Ellipse table is very similar to the Circle table
Ellipse
End of explanation
"""
|
tcstewar/testing_notebooks | Intercept Distribution .ipynb | gpl-2.0 | %matplotlib inline
import pylab
import numpy as np
import nengo
import seaborn
import pytry
import pandas
"""
Explanation: Intercept Distribution
This notebook shows how to define intercepts that are uniform in the area allocated to each neuron, and shows that this improves decoder accuracy.
Distributing intercepts uniformly between -1 and 1 is clearly the right thing to do in 1-dimension. But does it also make sense in higher dimensions?
The intercept effectively defines what proportion of the represented area this neuron fires for. In 1-dimension, if we uniformly distribute the intercepts, then we also get a uniform distribution of represented area (i.e. a neuron with an intercept of 0 is active for 50% of the space, and a neuron with an intercept of 0.5 is active for 25% of the space). But, in 2-dimensions, an intercept of 0.5 will be active for less than 25% of the space. In higher dimensions, this gets even worse. Indeed, in 32 dimensions, we may end up with very large numbers of neurons that are either always on or always off, both of which are fairly useless.
End of explanation
"""
def plot_intercept_distribution(ens):
pylab.subplot(1,2,1)
intercepts = ens.intercepts.sample(ens.n_neurons)
seaborn.distplot(intercepts, bins=20)
pylab.xlabel('intercept')
pylab.subplot(1,2,2)
pts = ens.eval_points.sample(n=1000, d=ens.dimensions)
model = nengo.Network()
model.ensembles.append(ens)
sim = nengo.Simulator(model)
_, activity = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=pts)
p = np.mean(activity>0, axis=0)
seaborn.distplot(p, bins=20)
pylab.xlabel('proportion of pts neuron is active for')
for D in [1, 2, 4, 8, 16, 32]:
ens = nengo.Ensemble(n_neurons=10000, dimensions=D, add_to_container=False)
pylab.figure(figsize=(14,4))
plot_intercept_distribution(ens)
pylab.title('Dimensions = %d' % D)
"""
Explanation: We start by empirically computing this distribution, just by generating tuning curves and counting the number of non-zero values for each neuron.
End of explanation
"""
import scipy.special
def analytic_proportion(x, d):
flip = False
if x < 0:
x = -x
flip = True
if x >= 1.0:
value = 0
else:
value = 0.5 * scipy.special.betainc((d+1)/2.0, 0.5, 1 - x**2)
if flip:
value = 1.0 - value
return value
print analytic_proportion(0.5, 2)
"""
Explanation: Those peaks at 0% and 100% are very worrying! It looks like a lot of these neurons aren't really helping much.
To understand this a bit better, let's see if we can directly compute this mapping from intercept to proportion active. To do this, we need to compute the volume of a hyperspherical cap https://en.wikipedia.org/wiki/Spherical_cap#Hyperspherical_cap
<img src=https://upload.wikimedia.org/wikipedia/commons/thumb/2/2f/Spherical_cap_diagram.tiff/lossless-page1-220px-Spherical_cap_diagram.tiff.png>
The formula for the volume in general is $V = {1 \over 2} C_d r^d I_{2rh-h^2 \over r^2}({d+1 \over 2}, {1 \over 2})$ where $C_d$ is the volume of a unit hyperball of dimension $d$ and $I_x(a,b)$ is the regularized incomplete beta function.
In our case, if $x$ is the intercept, then $r=1$, $h=1-x$. We want the proportion, so we divide by $C_d$, leaving:
$p={1 \over 2} I_{1-x^2}({{d+1} \over 2}, {1 \over 2})$
Of course, this formula only works for $x>=0$. For $x<0$ we can flip the sign of $x$ and then subtract the result from 1 at the end.
End of explanation
"""
def plot_intercept_distribution(ens):
pylab.subplot(1,3,1)
intercepts = ens.intercepts
if isinstance(intercepts, nengo.dists.Distribution):
intercepts = intercepts.sample(ens.n_neurons)
seaborn.distplot(intercepts, bins=np.linspace(-1.2, 1.2, 25))
pylab.xlabel('intercept')
pylab.xlim(-1.5, 1.5)
pylab.subplot(1,3,2)
pts = ens.eval_points.sample(n=1000, d=ens.dimensions)
model = nengo.Network()
model.ensembles.append(ens)
sim = nengo.Simulator(model)
_, activity = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=pts)
p = np.mean(activity>0, axis=0)
seaborn.distplot(p, bins=20)
pylab.xlabel('proportion of pts neuron is active for\n(sampled)')
p2 = [analytic_proportion(x, ens.dimensions) for x in intercepts]
pylab.subplot(1,3,3)
seaborn.distplot(p2, bins=20)
pylab.xlabel('proportion of pts neuron is active for\n(analytic)')
ens = nengo.Ensemble(n_neurons=10000, dimensions=16, add_to_container=False)
pylab.figure(figsize=(14,3))
plot_intercept_distribution(ens)
"""
Explanation: This indicates that a neuron with an intercept of 0.5 in 2 dimensions only fires for 19.5% of the represented area.
Let's compare this analytic proportion to the empirically estimated proportion, just to check if it works.
End of explanation
"""
def find_x_for_p(p, d):
sign = 1
if p > 0.5:
p = 1.0 - p
sign = -1
return sign * np.sqrt(1-scipy.special.betaincinv((d+1)/2.0, 0.5, 2*p))
print find_x_for_p(0.7, 2)
"""
Explanation: Those look pretty similar to me! This shows our analytic approach is working well.
Now, we need to reverse this effect. What we want is a system to which we can give a p value, and it will find the x-intercept value that gives that proportion. Fortunately, scipy has an inverse for the beta function, so a bit of algebra on $p={1 \over 2} I_{1-x^2}({{d+1} \over 2}, {1 \over 2})$ gives us
$x = \sqrt{1-I^{-1}_{2p}({{d+1} \over 2}, {1 \over 2})}$
End of explanation
"""
ens = nengo.Ensemble(n_neurons=10000, dimensions=16, add_to_container=False)
intercepts = ens.intercepts.sample(n=ens.n_neurons, d=1)[:,0]
intercepts2 = [find_x_for_p(x_int/2+0.5, ens.dimensions) for x_int in intercepts]
ens.intercepts = intercepts2
pylab.figure(figsize=(14,8))
pylab.subplot(2, 2, 1)
seaborn.distplot(intercepts)
pylab.xlabel('original intercepts')
pylab.subplot(2, 2, 2)
seaborn.distplot([analytic_proportion(x, ens.dimensions) for x in intercepts])
pylab.xlabel('proportion for original intercepts')
pylab.subplot(2, 2, 3)
seaborn.distplot(intercepts2)
pylab.xlabel('new intercepts')
pylab.subplot(2, 2, 4)
seaborn.distplot([analytic_proportion(x, ens.dimensions) for x in intercepts2])
pylab.xlabel('proportion for new intercepts')
"""
Explanation: Let's see what happens when we apply this transformation to the intercepts
End of explanation
"""
for D in [1, 2, 4, 8, 16, 32]:
ens = nengo.Ensemble(n_neurons=10000, dimensions=D, add_to_container=False)
intercepts = ens.intercepts.sample(n=ens.n_neurons, d=1)[:,0]
intercepts2 = [find_x_for_p(x_int/2+0.5, ens.dimensions) for x_int in intercepts]
ens.intercepts = intercepts2
pylab.figure(figsize=(14,4))
plot_intercept_distribution(ens)
pylab.title('Dimensions = %d' % D)
"""
Explanation: The new intercept distribution does what we want it to do! It results in a uniform distribution of the proportion of the represented area that is active for each neuron.
Let's confirm this in multiple dimensions.
End of explanation
"""
class AreaIntercepts(nengo.dists.Distribution):
dimensions = nengo.params.NumberParam('dimensions')
base = nengo.dists.DistributionParam('base')
def __init__(self, dimensions, base=nengo.dists.Uniform(-1, 1)):
super(AreaIntercepts, self).__init__()
self.dimensions = dimensions
self.base = base
    def __repr__(self):
return "AreaIntercepts(dimensions=%r, base=%r)" % (self.dimensions, self.base)
def transform(self, x):
sign = 1
if x > 0:
x = -x
sign = -1
return sign * np.sqrt(1-scipy.special.betaincinv((self.dimensions+1)/2.0, 0.5, x+1))
def sample(self, n, d=None, rng=np.random):
s = self.base.sample(n=n, d=d, rng=rng)
for i in range(len(s)):
s[i] = self.transform(s[i])
return s
"""
Explanation: Okay, this formula works! Now, does it improve decoder accuracy?
To help check this, let's start by making a nengo.dists.Distribution that does this transformation for us. It takes any intercept distribution and does the above transformation. Note that we do a quick substitution of $p = {{x_{intercept} + 1} \over 2}$ to turn an intercept into a desired probability. This changes the $2p$ to $x+1$.
End of explanation
"""
class AreaInterceptTrial(pytry.Trial):
def params(self):
self.param('number of neurons per dimension', n_per_d=50)
self.param('number of dimensions', d=1)
self.param('use AreaIntercept distribution', use_area=False)
def evaluate(self, p):
model = nengo.Network(seed=p.seed)
intercepts = nengo.dists.Uniform(-1, 1)
if p.use_area:
intercepts = AreaIntercepts(p.d, base=intercepts)
n_neurons = p.n_per_d * p.d
with model:
ens = nengo.Ensemble(n_neurons, p.d, intercepts=intercepts)
def func_constant(x):
return 1
constant = nengo.Node(size_in=1)
c_constant = nengo.Connection(ens, constant, function=func_constant)
def func_linear(x):
return x
linear = nengo.Node(size_in=p.d)
c_linear = nengo.Connection(ens, linear, function=func_linear)
def func_square(x):
return x**2
square = nengo.Node(size_in=p.d)
c_square = nengo.Connection(ens, square, function=func_square)
def func_quad(x):
r = []
for i, xx in enumerate(x):
for j, yy in enumerate(x[i+1:]):
r.append(xx*yy)
return r
count = len(func_quad(np.zeros(p.d)))
if count > 0:
quad = nengo.Node(size_in=count)
c_quad = nengo.Connection(ens, quad, function=func_quad)
else:
c_quad = None
sim = nengo.Simulator(model)
return dict(
constant = np.mean(sim.data[c_constant].solver_info['rmses']),
linear = np.mean(sim.data[c_linear].solver_info['rmses']),
square = np.mean(sim.data[c_square].solver_info['rmses']),
quad = np.mean(sim.data[c_quad].solver_info['rmses']) if c_quad is not None else 0,
)
"""
Explanation: Now let's define a pytry.Trial to let us explore the accuracy of the system. As we vary the number of dimensions, we will see how well we can decode a constant function ($1$), linear value ($x_i$), squared value ($x_i^2$), and other quadratic combinations ($x_i x_j$).
End of explanation
"""
seed = 1
print AreaInterceptTrial().run(d=16, verbose=False, seed=seed)
print AreaInterceptTrial().run(d=16, verbose=False, use_area=True, seed=seed)
"""
Explanation: Now let's check to see how it performs
End of explanation
"""
for seed in range(30, 40): # be sure to change this range each time you run it
# or it will re-generate the same data as before
# (the data is being stored in the directory "decode2")
print 'seed', seed
for d in [1, 2, 4, 8, 16, 32]:
AreaInterceptTrial().run(d=d, verbose=False, seed=seed, data_dir='decode2')
AreaInterceptTrial().run(d=d, verbose=False, seed=seed, data_dir='decode2', use_area=True)
"""
Explanation: This looks like an improvement in all cases except for the constant! Let's see if this holds up for other dimensions (averaging over many runs with different seeds).
End of explanation
"""
df = pandas.DataFrame(pytry.read('decode2'))
pylab.figure()
seaborn.barplot('d', 'constant', hue='use_area', data=df)
pylab.ylim(0, 0.06)
pylab.title('$y=1$', fontsize=20)
pylab.figure()
seaborn.barplot('d', 'linear', hue='use_area', data=df)
pylab.ylim(0, 0.06)
pylab.title('$y=x_i$', fontsize=20)
pylab.figure()
seaborn.barplot('d', 'square', hue='use_area', data=df)
pylab.ylim(0, 0.06)
pylab.title('$y=x_i^2$', fontsize=20)
pylab.figure()
seaborn.barplot('d', 'quad', hue='use_area', data=df)
pylab.ylim(0, 0.06)
pylab.title('$y=x_i x_j$', fontsize=20)
pylab.show()
"""
Explanation: Now let's load up this data and plot it.
End of explanation
"""
|
guyk1971/deep-learning | image-classification/dlnd_image_classification_mysol.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
# I replaced the above cell with this one (assuming I already have the data on my disk)
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
cifar10_dataset_folder_path = '/home/guy/datasets/cifar-10-batches-py'
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
# TODO: Implement Function
return x/np.max(x)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# TODO: Implement Function
    n_values = 10
    return np.eye(n_values)[x]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
    # TODO: Implement Function
    return tf.placeholder(tf.float32, shape=tuple([None] + list(image_shape)), name="x")
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
return tf.placeholder(tf.int32,shape=(None,n_classes),name="y")
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32,name="keep_prob")
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
    # Weight shape: (filter_h, filter_w, in_channels, out_channels)
    W = tf.Variable(tf.truncated_normal(list(conv_ksize) + [x_tensor.shape.as_list()[3], conv_num_outputs], stddev=0.1), name='conv_weights')
    b = tf.Variable(tf.zeros([conv_num_outputs]), name='conv_biases')
    out = tf.nn.conv2d(x_tensor, W, [1] + list(conv_strides) + [1], "SAME")
    out = tf.nn.bias_add(out, b)
    out = tf.nn.relu(out)  # nonlinear activation
    out = tf.nn.max_pool(out, [1] + list(pool_ksize) + [1], [1] + list(pool_strides) + [1], 'SAME')  # max pooling
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
return tf.reshape(x_tensor,[-1,np.prod(x_tensor.shape.as_list()[1:])])
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
n_features=x_tensor.shape.as_list()[1]
    W = tf.Variable(tf.truncated_normal([n_features, num_outputs], stddev=0.1), name='fc_weights')
    b = tf.Variable(tf.zeros([num_outputs]), name='fc_biases')
    out = tf.matmul(x_tensor, W)
    out = tf.nn.bias_add(out, b)
    out = tf.nn.relu(out)  # nonlinear activation
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
n_features=x_tensor.shape.as_list()[1]
W = tf.Variable(tf.truncated_normal([n_features,num_outputs],stddev=0.1),name='out_weights')
b= tf.Variable(tf.zeros([num_outputs]),name='out_biases')
out=tf.matmul(x_tensor,W)
out=tf.nn.bias_add(out,b)
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
layer = conv2d_maxpool(x, 64, (3,3), (1,1) ,(2,2) , (2,2))
layer = conv2d_maxpool(layer, 128, (3, 3), (1, 1), (2, 2), (2, 2))
# layer = conv2d_maxpool(layer, 256, (3, 3), (1, 1), (2, 2), (2, 2))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
layer = flatten(layer)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
layer=fully_conn(layer,128)
layer=tf.nn.dropout(layer,keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
out=output(layer,10)
# TODO: return output
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
session.run(optimizer,feed_dict={x:feature_batch,y:label_batch,keep_prob:keep_probability})
return
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
    # Loss on the current training batch; accuracy on the global validation set
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
    print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 20
batch_size = 128
keep_probability = 0.5  # 1.0 would disable dropout entirely
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people use common power-of-two sizes:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
def single_text_to_ids(text, vocab_to_int, add_EOS):
    """Convert newline-separated text to lists of word ids, optionally appending <EOS>."""
    id_text = []
    for sentence in text.split('\n'):
        id_sentence = [vocab_to_int[word] for word in sentence.split()]
        if add_EOS:
            id_sentence.append(vocab_to_int['<EOS>'])
        id_text.append(id_sentence)
    return id_text
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
# TODO: Implement Function
source_id_text = single_text_to_ids(source_text, source_vocab_to_int, False)
target_id_text = single_text_to_ids(target_text, target_vocab_to_int, True)
return source_id_text, target_id_text
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
"""
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
"""
def model_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
"""
# TODO: Implement Function
input = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return input, targets, learning_rate, keep_prob
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability)
End of explanation
"""
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
"""
    Preprocess target data for decoding
    :param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_decoding_input(process_decoding_input)
"""
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
"""
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
"""
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
lstm = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
enc_cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers)
_, enc_state = tf.nn.dynamic_rnn(enc_cell, rnn_inputs, dtype=tf.float32)
return enc_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
"""
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
"""
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
    :param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
"""
# TODO: Implement Function
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
return train_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
"""
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
"""
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
"""
# TODO: Implement Function
# Inference Decoder
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
return inference_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
"""
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
"""
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
"""
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
"""
# TODO: Implement Function
# Decoder RNNs
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
lstm = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
dec_cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers)
with tf.variable_scope("decoding") as decoding_scope:
# Output Layer
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
maximum_length = sequence_length - 1
inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)
return train_logits, inference_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
"""
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
"""
# TODO: Implement Function
#Apply embedding to the input data for the encoder.
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
#Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
enc_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
#Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
#Apply embedding to the target data for the decoder.
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
#Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
train_logits, inference_logits = decoding_layer(dec_embed_input, dec_embeddings, enc_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return train_logits, inference_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
"""
# Number of Epochs
epochs = 20
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 128
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 200
decoding_embedding_size = 200
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import time
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
if batch_i % 10 == 0:
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
# TODO: Implement Function
lower_sentence = sentence.lower()
id_seq = []
for word in lower_sentence.split():
id_seq.append(vocab_to_int.get(word, vocab_to_int['<UNK>']))
return id_seq
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
End of explanation
"""
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
"""
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
"""
|
anhaidgroup/py_entitymatching | notebooks/guides/step_wise_em_guides/Performing Blocking Using Built-In Blockers (Attr. Equivalence Blocker).ipynb | bsd-3-clause | %load_ext autotime
# Import py_entitymatching package
import py_entitymatching as em
import os
import pandas as pd
"""
Explanation: Introduction
Blocking is typically done to reduce the number of tuple pairs considered for matching. Several blocking methods have been proposed; the py_entitymatching package supports a subset of them (#ref to what is supported). One such supported blocker is the attribute equivalence blocker. This IPython notebook illustrates how to perform blocking using the attribute equivalence blocker.
First, we need to import py_entitymatching package and other libraries as follows:
End of explanation
"""
# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'
# Get the paths of the input tables
path_A = datasets_dir + os.sep + 'person_table_A.csv'
path_B = datasets_dir + os.sep + 'person_table_B.csv'
# Read the CSV files and set 'ID' as the key attribute
A = em.read_csv_metadata(path_A, key='ID')
B = em.read_csv_metadata(path_B, key='ID')
A.head()
B.head()
"""
Explanation: Then, read the input tables from the datasets directory
End of explanation
"""
# Instantiate attribute equivalence blocker object
ab = em.AttrEquivalenceBlocker()
"""
Explanation: Different Ways to Block Using Attribute Equivalence Blocker
Once the tables are read, we can do blocking using the attribute equivalence blocker.
There are three different ways to do attribute equivalence blocking:
Block two tables to produce a candidate set of tuple pairs.
Block a candidate set of tuple pairs to typically produce a reduced candidate set of tuple pairs.
Block two tuples to check if a tuple pair would get blocked.
Block Tables to Produce a Candidate Set of Tuple Pairs
End of explanation
"""
# Use block_tables to apply blocking over two input tables.
C1 = ab.block_tables(A, B,
l_block_attr='zipcode', r_block_attr='zipcode',
l_output_attrs=['name', 'birth_year', 'zipcode'],
r_output_attrs=['name', 'birth_year', 'zipcode'],
l_output_prefix='l_', r_output_prefix='r_')
# Display the candidate set of tuple pairs
C1.head()
"""
Explanation: For the given two tables, we will assume that two persons with different zipcode values do not refer to the same real world person. So, we apply attribute equivalence blocking on zipcode. That is, we block all the tuple pairs that have different zipcodes.
End of explanation
"""
# Show the metadata of C1
em.show_properties(C1)
id(A), id(B)
"""
Explanation: Note that the tuple pairs in the candidate set have the same zipcode.
The attributes included in the candidate set are based on l_output_attrs and r_output_attrs mentioned in the block_tables command (the key columns are included by default). Specifically, the attributes listed in l_output_attrs are picked from table A and the attributes listed in r_output_attrs are picked from table B. The attributes in the candidate set are prefixed based on the l_output_prefix and r_output_prefix parameter values mentioned in the block_tables command.
End of explanation
"""
# Introduce some missing values
A1 = em.read_csv_metadata(path_A, key='ID')
A1.loc[0, 'zipcode'] = float('nan')
A1.loc[0, 'birth_year'] = float('nan')
A1
# Use block_tables to apply blocking over two input tables.
C2 = ab.block_tables(A1, B,
l_block_attr='zipcode', r_block_attr='zipcode',
l_output_attrs=['name', 'birth_year', 'zipcode'],
r_output_attrs=['name', 'birth_year', 'zipcode'],
l_output_prefix='l_', r_output_prefix='r_',
allow_missing=True) # setting allow_missing parameter to True
len(C1), len(C2)
C2
"""
Explanation: Note that the metadata of C1 includes key, foreign key to the left and right tables (i.e., A and B) and pointers to the left and right tables.
Handling Missing Values
If the input tuples have missing values in the blocking attribute, then they are ignored by default. This is because including all possible tuple pairs with missing values can significantly increase the size of the candidate set. But if you want to include them, you can set the allow_missing parameter to True.
End of explanation
"""
# Instantiate Attr. Equivalence Blocker
ab = em.AttrEquivalenceBlocker()
# Use block_candset to apply blocking over the candidate set.
C3 = ab.block_candset(C1, l_block_attr='birth_year', r_block_attr='birth_year')
C3.head()
"""
Explanation: The candidate set C2 includes all possible tuple pairs with missing values.
Block a Candidate Set of Tuple Pairs
In the above, we see that the candidate set produced after blocking over the input tables includes tuple pairs that have different birth years. We will assume that two persons with different birth years cannot refer to the same person. So, we block the candidate set of tuple pairs on birth_year. That is, we block all the tuple pairs that have different birth years.
End of explanation
"""
# Show the metadata of C3
em.show_properties(C3)
id(A), id(B)
"""
Explanation: Note that the tuple pairs in the resulting candidate set have the same birth year.
The attributes included in the resulting candidate set are based on the input candidate set (i.e., the same attributes are retained).
End of explanation
"""
# Display C2 (got by blocking over A1 and B)
C2
em.show_properties(C2)
em.show_properties(A1)
"""
Explanation: As we saw earlier, the metadata of C3 is the same as that of C1. That is, it includes key, foreign key to the left and right tables (i.e., A and B) and pointers to the left and right tables.
Handling Missing Values
If the tuple pairs included in the candidate set have missing values in the blocking attribute, then they are ignored by default. This is because including all possible tuple pairs with missing values can significantly increase the size of the candidate set. But if you want to include them, you can set the allow_missing parameter to True.
End of explanation
"""
A1.head()
C4 = ab.block_candset(C2, l_block_attr='birth_year', r_block_attr='birth_year', allow_missing=False)
C4
# Set allow_missing to True
C5 = ab.block_candset(C2, l_block_attr='birth_year', r_block_attr='birth_year', allow_missing=True)
len(C4), len(C5)
C5
"""
Explanation: We see that A1 is the left table to C2.
End of explanation
"""
# Display the first tuple from table A
A.loc[[0]]
# Display the first tuple from table B
B.loc[[0]]
# Instantiate Attr. Equivalence Blocker
ab = em.AttrEquivalenceBlocker()
# Apply blocking to a tuple pair from the input tables on zipcode and get blocking status
status = ab.block_tuples(A.loc[0], B.loc[0], l_block_attr='zipcode', r_block_attr='zipcode')
# Print the blocking status
print(status)
"""
Explanation: Block Two Tuples to Check If a Tuple Pair Would Get Blocked
We can apply attribute equivalence blocking to a tuple pair to check if it is going to get blocked. For example, we can check if the first tuple from A and B will get blocked if we block on zipcode.
End of explanation
"""
|
joelowj/Udacity-Projects | Udacity-Deep-Learning-Foundation-Nanodegree/Project-2/dlnd_image_classification.ipynb | apache-2.0 | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalized data
"""
# TODO: Implement Function
normOfX = list()
minOfX = np.min(x)
maxOfX = np.max(x)
for elements in x:
normOfX.append((elements - minOfX) / (maxOfX - minOfX))
return np.array(normOfX)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# TODO: Implement Function
oneHotEncodedVector = np.zeros((len(x),10))
for i,j in enumerate(x):
oneHotEncodedVector[i][j] = 1
return oneHotEncodedVector
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32,
shape=[None, image_shape[0], image_shape[1], image_shape[2]],
name='x')
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32,
shape=[None, n_classes],
name='y')
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32,
name='keep_prob')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# TODO: Implement Function
depth = x_tensor.get_shape().as_list()[-1]
padding = 'SAME'
conStrides = [1, *conv_strides, 1]
poolStrides = [1, *pool_strides, 1]
poolKSize = [1, *pool_ksize, 1]
biases = tf.Variable(tf.zeros(conv_num_outputs))
weights = tf.Variable(tf.truncated_normal([*conv_ksize, depth, conv_num_outputs],stddev=0.1))
conv_layer = tf.nn.conv2d(x_tensor, weights, conStrides, padding)
conv_layer = tf.nn.bias_add(conv_layer, biases)
conv_layer = tf.nn.relu(conv_layer)
conv_layer = tf.nn.max_pool(conv_layer, poolKSize,
poolStrides, padding)
return conv_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
return tf.contrib.layers.flatten(x_tensor)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
return tf.contrib.layers.fully_connected(x_tensor, num_outputs)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
return tf.contrib.layers.fully_connected(inputs = x_tensor, num_outputs=num_outputs,activation_fn=None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that holds dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs = 12
conv_ksize = (3, 3)
conv_strides = (1, 1)
pool_ksize = (2, 2)
pool_strides = (2, 2)
layer1 = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
layer2 = conv2d_maxpool(layer1, conv_num_outputs * 2, conv_ksize, conv_strides, pool_ksize, pool_strides)
layer3 = conv2d_maxpool(layer2, conv_num_outputs * 4, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flatten_layer3 = flatten(layer3)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fully_connected_layer1 = fully_conn(flatten_layer3, 576)
fully_connected_layer1 = tf.nn.dropout(fully_connected_layer1, keep_prob)
fully_connected_layer2 = fully_conn(fully_connected_layer1, 384)
fully_connected_layer2 = tf.nn.dropout(fully_connected_layer2, keep_prob)
fully_connected_layer3 = fully_conn(fully_connected_layer2, 192)
fully_connected_layer3 = tf.nn.dropout(fully_connected_layer3, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
output_layer = output(fully_connected_layer3, 10)
# TODO: return output
return output_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
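For intuition on the max-pool step, here is a toy NumPy sketch (my own illustration, not TensorFlow's implementation) of a 2x2 max pool with stride 2 on a single channel:

```python
import numpy as np

def max_pool_2x2(img):
    # Toy 2x2 max pool, stride 2, over a single-channel H x W array.
    h, w = img.shape
    out = np.zeros((h // 2, w // 2), dtype=img.dtype)
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            out[i // 2, j // 2] = img[i:i + 2, j:j + 2].max()
    return out

img = np.array([[1, 2, 5, 6],
                [3, 4, 7, 8],
                [9, 1, 2, 3],
                [4, 5, 6, 7]])
print(max_pool_2x2(img))  # each spatial dimension is halved
```

tf.nn.max_pool does the same thing per channel, over a whole batch at once.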
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# TODO: Implement Function
session.run(optimizer, {x: feature_batch, y: label_batch, keep_prob: keep_probability})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
# TODO: Implement Function
train_loss = session.run(cost, {x: feature_batch, y: label_batch, keep_prob: 1.})
valid_loss = session.run(cost, {x: valid_features, y: valid_labels, keep_prob: 1.})
valid_acc = session.run(accuracy, {x: valid_features, y: valid_labels, keep_prob: 1.})
print('Train Loss: {:>10.6f}, Validation Loss: {:>10.6f}, Validation Accuracy: {:.6f}'
.format(train_loss, valid_loss, valid_acc))
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 20
batch_size = 256
keep_probability = 0.5
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set it to a common size:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
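A rough sketch (mine, assuming the standard "inverted dropout" scheme) of what keep_probability does: each unit is zeroed with probability 1 - keep_prob and the survivors are scaled by 1/keep_prob, so the expected activation is unchanged:

```python
import numpy as np

def inverted_dropout(x, keep_prob, seed=0):
    # Zero each unit with probability (1 - keep_prob); rescale survivors
    # by 1 / keep_prob so the expected output equals the input.
    rng = np.random.default_rng(seed)
    mask = rng.random(x.shape) < keep_prob
    return x * mask / keep_prob

x = np.ones(100_000)
print(inverted_dropout(x, keep_prob=0.5).mean())  # close to 1.0
```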
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
|
mne-tools/mne-tools.github.io | stable/_downloads/f5853db1ea98f82173310d147f23289c/compute_mne_inverse_epochs_in_label.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import apply_inverse_epochs, read_inverse_operator
from mne.minimum_norm import apply_inverse
print(__doc__)
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
fname_inv = meg_path / 'sample_audvis-meg-oct-6-meg-inv.fif'
fname_raw = meg_path / 'sample_audvis_filt-0-40_raw.fif'
fname_event = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'
label_name = 'Aud-lh'
fname_label = meg_path / 'labels' / f'{label_name}.label'
event_id, tmin, tmax = 1, -0.2, 0.5
# Using the same inverse operator when inspecting single trials Vs. evoked
snr = 3.0 # Standard assumption for average data but using it for single trial
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
# Load data
inverse_operator = read_inverse_operator(fname_inv)
label = mne.read_label(fname_label)
raw = mne.io.read_raw_fif(fname_raw)
events = mne.read_events(fname_event)
# Set up pick list
include = []
# Add a bad channel
raw.info['bads'] += ['EEG 053'] # bads + 1 more
# pick MEG channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
include=include, exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(mag=4e-12, grad=4000e-13,
eog=150e-6))
# Get evoked data (averaging across trials in sensor space)
evoked = epochs.average()
# Compute inverse solution and stcs for each epoch
# Use the same inverse operator as with evoked data (i.e., set nave)
# If you use a different nave, dSPM just scales by a factor sqrt(nave)
stcs = apply_inverse_epochs(epochs, inverse_operator, lambda2, method, label,
pick_ori="normal", nave=evoked.nave)
# Mean across trials but not across vertices in label
mean_stc = sum(stcs) / len(stcs)
# compute sign flip to avoid signal cancellation when averaging signed values
flip = mne.label_sign_flip(label, inverse_operator['src'])
label_mean = np.mean(mean_stc.data, axis=0)
label_mean_flip = np.mean(flip[:, np.newaxis] * mean_stc.data, axis=0)
# Get inverse solution by inverting evoked data
stc_evoked = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori="normal")
# apply_inverse() does whole brain, so sub-select label of interest
stc_evoked_label = stc_evoked.in_label(label)
# Average over label (not caring to align polarities here)
label_mean_evoked = np.mean(stc_evoked_label.data, axis=0)
"""
Explanation: Compute MNE-dSPM inverse solution on single epochs
Compute dSPM inverse solution on single trial epochs restricted
to a brain label.
End of explanation
"""
times = 1e3 * stcs[0].times # times in ms
plt.figure()
h0 = plt.plot(times, mean_stc.data.T, 'k')
h1, = plt.plot(times, label_mean, 'r', linewidth=3)
h2, = plt.plot(times, label_mean_flip, 'g', linewidth=3)
plt.legend((h0[0], h1, h2), ('all dipoles in label', 'mean',
'mean with sign flip'))
plt.xlabel('time (ms)')
plt.ylabel('dSPM value')
plt.show()
"""
Explanation: View activation time-series to illustrate the benefit of aligning/flipping
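A tiny synthetic demo (not MNE code) of why the sign flip matters: two dipoles carrying the same signal with opposite polarity cancel in a plain mean, but survive a sign-flipped mean:

```python
import numpy as np

t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 5 * t)
dipoles = np.vstack([signal, -signal])   # same source, opposite polarity
flip = np.array([1.0, -1.0])             # per-dipole sign flip

plain_mean = dipoles.mean(axis=0)                            # cancels to ~0
flipped_mean = (flip[:, np.newaxis] * dipoles).mean(axis=0)  # recovers signal
print(np.abs(plain_mean).max(), np.abs(flipped_mean).max())
```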
End of explanation
"""
# Single trial
plt.figure()
for k, stc_trial in enumerate(stcs):
plt.plot(times, np.mean(stc_trial.data, axis=0).T, 'k--',
label='Single Trials' if k == 0 else '_nolegend_',
alpha=0.5)
# Single trial inverse then average.. making linewidth large to not be masked
plt.plot(times, label_mean, 'b', linewidth=6,
label='dSPM first, then average')
# Evoked and then inverse
plt.plot(times, label_mean_evoked, 'r', linewidth=2,
label='Average first, then dSPM')
plt.xlabel('time (ms)')
plt.ylabel('dSPM value')
plt.legend()
plt.show()
"""
Explanation: Viewing single trial dSPM and average dSPM for unflipped pooling over label
Compare to (1) Inverse (dSPM) then average, (2) Evoked then dSPM
End of explanation
"""
|
IST256/learn-python | content/lessons/08-Lists/HW-Lists.ipynb | mit | ! curl https://raw.githubusercontent.com/mafudge/datasets/master/ist256/08-Lists/test-fudgemart-products.txt -o test-fudgemart-products.txt
! curl https://raw.githubusercontent.com/mafudge/datasets/master/ist256/08-Lists/fudgemart-products.txt -o fudgemart-products.txt
"""
Explanation: Homework: The Fudgemart Products Catalog
The Problem
Fudgemart, a knockoff of a company with a similar name, has hired you to create a program to browse their product catalog.
Write an ipython interactive program that allows the user to select a product category from the drop-down and then displays all of the fudgemart products within that category. You can accomplish this any way you like and the only requirements are you must:
load each product from the fudgemart-products.txt file into a list.
build the list of product catagories dynamically ( you cannot hard-code the categories in)
print the product name and price for all products selected
use ipython interact to create a drop-down for the user interface.
FILE FORMAT:
the file fudgemart-products.txt has one row per product
each row is delimited by a | character.
there are three items in each row. category, product name, and price.
Example Row: Hardware|Ball Peen Hammer|15.99
Category = Hardware
Product = Ball Peen Hammer
Price = 15.99
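For instance, one row can be split into its three fields like this (a sketch using the example row above):

```python
line = "Hardware|Ball Peen Hammer|15.99"
category, product, price = line.strip().split("|")
price = float(price)
print(category, product, price)
```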
HINTS:
Draw upon the lessons and examples in the lab and small group. We covered using interact with a dropdown, reading from files into lists, etc.
There is a sample file, test-fudgemart-products.txt, which you can use to test your code without dealing with the number of rows in the actual file fudgemart-products.txt. Your code should work with either file. The test file has 3 products and 2 categories. Once it works with the test file, switch to the other file!
The unique challenge of this homework is creating the list of product categories. You can do this when you read the file or use the list of all products to create the categories.
Code to fetch data files
End of explanation
"""
# Step 2: Write code here
"""
Explanation: Part 1: Problem Analysis
Inputs:
TODO: Inputs
Outputs:
TODO: Outputs
Algorithm (Steps in Program):
```
TODO:Steps Here
```
Part 2: Code Solution
You may write your code in several cells, but place the complete, final working copy of your code solution within this single cell below. Only the code within this cell will be considered your solution. Any imports or user-defined functions should be copied into this cell.
End of explanation
"""
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit()
"""
Explanation: Part 3: Questions
Explain the approach you used to build the product categories.
--== Double-Click and Write Your Answer Below This Line ==--
If you opened the fudgemart-products.txt and added a new product row at the end, would your program still run? Explain.
--== Double-Click and Write Your Answer Below This Line ==--
Did you write any user-defined functions? If so, why? If not, why not?
--== Double-Click and Write Your Answer Below This Line ==--
Part 4: Reflection
Reflect upon your experience completing this assignment. This should be a personal narrative, in your own voice, and cite specifics relevant to the activity as to help the grader understand how you arrived at the code you submitted. Things to consider touching upon: Elaborate on the process itself. Did your original problem analysis work as designed? How many iterations did you go through before you arrived at the solution? Where did you struggle along the way and how did you overcome it? What did you learn from completing the assignment? What do you need to work on to get better? What was most valuable and least valuable about this exercise? Do you have any suggestions for improvements?
To make a good reflection, you should journal your thoughts, questions and comments while you complete the exercise.
Keep your response to between 100 and 250 words.
--== Double-Click and Write Your Reflection Below Here ==--
End of explanation
"""
|
MargaritaLubimova/python_park_mail | homework/homework1.ipynb | mit | def is_number(str):
try:
int(str)
return True
except:
return False
isnumber = False
while not isnumber:
year = input('Введите год: ')
isnumber = is_number(year)
if isnumber:
if (int(year) % 4 == 0 and int(year) % 100 != 0) or int(year) % 400 == 0:
print ('Год високосный')
else:
print ('Год не високосный')
break
else:
print ('Вы ввели некорректное значение')
continue
"""
Explanation: Task
Determine whether the entered year is a leap year. A year is a leap year if its number is divisible by 4 but not by 100, or if it is divisible by 400.
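The rule condenses to a single boolean expression; a minimal sketch:

```python
def is_leap(year: int) -> bool:
    # Divisible by 4 and not by 100, or divisible by 400.
    return (year % 4 == 0 and year % 100 != 0) or year % 400 == 0

print(is_leap(2000), is_leap(1900), is_leap(2024))  # True False True
```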
End of explanation
"""
for i in range(1, 101):
    if i % 15 == 0:
        print('FizzBuzz')
    elif i % 3 == 0:
        print('Fizz')
    elif i % 5 == 0:
        print('Buzz')
    else:
        print(i)
"""
Explanation: FizzBuzz
Write a program that prints the numbers from 1 to 100. Instead of numbers divisible by three, the program should print the word Fizz; instead of numbers divisible by five, the word Buzz. If a number is divisible by fifteen, the program should print FizzBuzz.
End of explanation
"""
s = 0
a = 0
arr = []
x = 999
while s < x:
s += 1
if s % 3 == 0 or s % 5 == 0:
arr.append(s)
a += s
continue
print (a)
"""
Explanation: Problem 1
Multiples of 3 and 5
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.
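The whole problem also fits in one generator expression:

```python
total = sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0)
print(total)  # 233168
```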
End of explanation
"""
s = 0
i = 0
c = []
a = [1, 2]
d = 4000000
x = True
while x:
s = a[i] + a[i + 1]
if s > d:
x = False
break
a.append(s)
i += 1
continue
for e in range(len(a)):
if a[e] % 2 == 0:
c.append(a[e])
print (sum(c), a, c)
"""
Explanation: Problem 2
Even Fibonacci numbers
Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.
End of explanation
"""
maxF = 600851475143
i = 2
p = 1
prime_factors = []
while i < maxF:
    if maxF % i == 0:
        p *= i
        prime_factors.append(i)
        if p == maxF:
            break
    i += 1
print(max(prime_factors), p)
"""
Explanation: Problem 3
Largest prime factor
The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143 ?
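Trial division that strips each factor as it is found runs in O(sqrt(n)) and leaves the largest prime factor behind:

```python
n = 600851475143
factor = 2
while factor * factor <= n:
    if n % factor == 0:
        n //= factor      # strip this prime factor
    else:
        factor += 1
print(n)  # 6857, the largest prime factor
```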
End of explanation
"""
def palindrome(number):
return str(number) == str(number)[::-1]
MIN = 100
MAX = 1000
def largest():
max_number = 0
for i in range(MIN, MAX):
for j in range(MIN, MAX):
if palindrome(i*j) and i*j > max_number:
max_number = i*j
return max_number
print (largest())
"""
Explanation: Problem 4
Largest palindrome product
A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
Find the largest palindrome made from the product of two 3-digit numbers.
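Equivalently, as one max over a generator (letting j start at i to skip symmetric pairs):

```python
largest_pal = max(i * j
                  for i in range(100, 1000)
                  for j in range(i, 1000)
                  if str(i * j) == str(i * j)[::-1])
print(largest_pal)  # 906609 = 913 * 993
```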
End of explanation
"""
import timeit
start = timeit.default_timer()
num = 1
i = 1
while i < 19:
for j in range(1, 21):
if num % j == 0:
i += 1
continue
else:
num += 1
i = 1
stop = timeit.default_timer()
print(num, "Time: ", stop - start)
import timeit
start = timeit.default_timer()
i = 20
set_num = set()
range_num = list()
counter = 0
while i > 2:
for j in range (1, 21):
if i % j == 0:
range_num.append(j)
counter += 1
continue
if counter > 2:
for k in range_num:
if (k != i):
set_num.add(k)
else:
set_num.add(i)
range_num = []
counter = 0
i -= 1
counter = 1
for m in set_num:
counter = counter * m
stop = timeit.default_timer()
print (set_num, "Number: ", counter / 10 / 9 / 8, "Time: ", stop - start)
"""
Explanation: Problem 5
Smallest multiple
2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.
What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?
Below are two solutions to this problem: the first is brute force (a simple exhaustive search, taking about 100 seconds), and the second I did not manage to finish.
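A much faster route (my own sketch, separate from the two attempts above): the answer is just the least common multiple of 1..20, built with gcd:

```python
from math import gcd
from functools import reduce

# lcm(a, b) = a * b // gcd(a, b), folded over 1..20
answer = reduce(lambda acc, k: acc * k // gcd(acc, k), range(1, 21), 1)
print(answer)  # 232792560
```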
End of explanation
"""
|
wesleybeckner/salty | scripts/vae/wes_vae_two.ipynb | mit | plt.hist(values.map(len))
def pad_smiles(smiles_string, smile_max_length):
if len(smiles_string) < smile_max_length:
return smiles_string + " " * (smile_max_length - len(smiles_string))
padded_smiles = [pad_smiles(i, smile_max_length) for i in values if pad_smiles(i, smile_max_length)]
shuffle(padded_smiles)
def create_char_list(char_set, smile_series):
for smile in smile_series:
char_set.update(set(smile))
return char_set
char_set = set()
char_set = create_char_list(char_set, padded_smiles)
print(len(char_set))
char_set
char_list = list(char_set)
chars_in_dict = len(char_list)
char_to_index = dict((c, i) for i, c in enumerate(char_list))
index_to_char = dict((i, c) for i, c in enumerate(char_list))
char_to_index
X_train = np.zeros((len(padded_smiles), smile_max_length, chars_in_dict), dtype=np.float32)
X_train.shape
for i, smile in enumerate(padded_smiles):
for j, char in enumerate(smile):
X_train[i, j, char_to_index[char]] = 1
X_train, X_test = train_test_split(X_train, test_size=0.33, random_state=42)
X_train.shape
# need to build RNN to encode. some issues include what the 'embedded dimension' is (vector length of embedded sequence)
"""
Explanation: We may want to remove cations with more than 25 heavy atoms
End of explanation
"""
from keras import backend as K
from keras.objectives import binary_crossentropy #objs or losses
from keras.models import Model
from keras.layers import Input, Dense, Lambda
from keras.layers.core import Dense, Activation, Flatten, RepeatVector
from keras.layers.wrappers import TimeDistributed
from keras.layers.recurrent import GRU
from keras.layers.convolutional import Convolution1D
"""
Explanation: so some keras version stuff. Keras 1.x keeps its loss functions in keras.objectives; Keras 2.x renamed that module to keras.losses. We'll just have to be consistent
End of explanation
"""
def Encoder(x, latent_rep_size, smile_max_length, epsilon_std = 0.01):
h = Convolution1D(9, 9, activation = 'relu', name='conv_1')(x)
h = Convolution1D(9, 9, activation = 'relu', name='conv_2')(h)
h = Convolution1D(10, 11, activation = 'relu', name='conv_3')(h)
h = Flatten(name = 'flatten_1')(h)
h = Dense(435, activation = 'relu', name = 'dense_1')(h)
def sampling(args):
z_mean_, z_log_var_ = args
batch_size = K.shape(z_mean_)[0]
epsilon = K.random_normal(shape=(batch_size, latent_rep_size),
mean=0., stddev = epsilon_std)
return z_mean_ + K.exp(z_log_var_ / 2) * epsilon
z_mean = Dense(latent_rep_size, name='z_mean', activation = 'linear')(h)
z_log_var = Dense(latent_rep_size, name='z_log_var', activation = 'linear')(h)
def vae_loss(x, x_decoded_mean):
x = K.flatten(x)
x_decoded_mean = K.flatten(x_decoded_mean)
xent_loss = smile_max_length * binary_crossentropy(x, x_decoded_mean)
kl_loss = - 0.5 * K.mean(1 + z_log_var - K.square(z_mean) - \
K.exp(z_log_var), axis = -1)
return xent_loss + kl_loss
return (vae_loss, Lambda(sampling, output_shape=(latent_rep_size,),
name='lambda')([z_mean, z_log_var]))
def Decoder(z, latent_rep_size, smile_max_length, charset_length):
h = Dense(latent_rep_size, name='latent_input', activation = 'relu')(z)
h = RepeatVector(smile_max_length, name='repeat_vector')(h)
h = GRU(501, return_sequences = True, name='gru_1')(h)
h = GRU(501, return_sequences = True, name='gru_2')(h)
h = GRU(501, return_sequences = True, name='gru_3')(h)
return TimeDistributed(Dense(charset_length, activation='softmax'),
name='decoded_mean')(h)
x = Input(shape=(smile_max_length, len(char_set)))
_, z = Encoder(x, latent_rep_size=292, smile_max_length=smile_max_length)
encoder = Model(x, z)
"""
Explanation: Here I've adapted the exact architecture used in the paper
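The sampling lambda inside Encoder is the usual VAE reparameterization trick; here is the same computation sketched in plain NumPy (my illustration, not the Keras code):

```python
import numpy as np

def sample_latent(z_mean, z_log_var, seed=0):
    # z = mu + sigma * eps with eps ~ N(0, I); writing it this way keeps
    # the sample differentiable w.r.t. the mean and log-variance.
    eps = np.random.default_rng(seed).standard_normal(z_mean.shape)
    return z_mean + np.exp(z_log_var / 2) * eps

mu = np.zeros(4)
log_var = np.full(4, -20.0)   # tiny variance -> samples hug the mean
print(sample_latent(mu, log_var))
```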
End of explanation
"""
encoded_input = Input(shape=(292,))
decoder = Model(encoded_input, Decoder(encoded_input, latent_rep_size=292,
smile_max_length=smile_max_length,
charset_length=len(char_set)))
"""
Explanation: encoded_input looks like a dummy layer here:
End of explanation
"""
x1 = Input(shape=(smile_max_length, len(char_set)), name='input_1')
vae_loss, z1 = Encoder(x1, latent_rep_size=292, smile_max_length=smile_max_length)
autoencoder = Model(x1, Decoder(z1, latent_rep_size=292,
smile_max_length=smile_max_length,
charset_length=len(char_set)))
"""
Explanation: create a separate autoencoder model that combines the encoder and decoder (I guess the former cells are for accessing those separate parts of the model)
End of explanation
"""
autoencoder.compile(optimizer='Adam', loss=vae_loss, metrics =['accuracy'])
autoencoder.fit(X_train, X_train, shuffle = True, validation_data=(X_test, X_test))
def sample(a, temperature=1.0):
# helper function to sample an index from a probability array
a = np.log(a) / temperature
a = np.exp(a) / np.sum(np.exp(a))
return np.argmax(np.random.multinomial(1, a, 1))
test_smi = values[0]
test_smi = pad_smiles(test_smi, smile_max_length)
Z = np.zeros((1, smile_max_length, len(char_list)), dtype=np.bool)
for t, char in enumerate(test_smi):
Z[0, t, char_to_index[char]] = 1
# autoencoder.
string = ""
for i in autoencoder.predict(Z):
for j in i:
index = sample(j)
string += index_to_char[index]
print("\n callback guess: " + string)
values[0]
"""
Explanation: we compile and fit
End of explanation
"""
|
darioizzo/d-CGP | doc/sphinx/notebooks/finding_prime_integrals.ipynb | gpl-3.0 | from dcgpy import expression_gdual_vdouble as expression
from dcgpy import kernel_set_gdual_vdouble as kernel_set
from pyaudi import gdual_vdouble as gdual
from matplotlib import pyplot as plt
import numpy as np
from numpy import sin, cos
from random import randint, random
np.seterr(all='ignore') # avoids numpy complaining when early, malformed expressions are evaluated
%matplotlib inline
"""
Explanation: Discovery of prime integrals with dCGP
Lets first import dcgpy and pyaudi and set up things as to use dCGP on gduals defined over vectorized floats
End of explanation
"""
kernels = kernel_set(["sum", "mul", "pdiv", "diff"])() # note the call operator (returns the list of kernels)
dCGP = expression(inputs=3, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213))
"""
Explanation: We consider a set of differential equations in the form:
$$
\left\{
\begin{array}{c}
\frac{dx_1}{dt} = f_1(x_1, \cdots, x_n) \\
\vdots \\
\frac{dx_n}{dt} = f_n(x_1, \cdots, x_n)
\end{array}
\right.
$$
and we search for expressions $P(x_1, \cdots, x_n) = 0$ which we call prime integrals of motion.
The straightforward approach to designing such a search would be to represent $P$ via a $dCGP$ program and evolve its chromosome so that the expression, computed along points of some trajectory, evaluates to zero. This naive approach leads to the evolution of trivial programs that are identically zero and that "do not represent the intrinsic relations between state variables" - Schmidt 2009.
Let us, though, differentiate $P$ along a trajectory solution to the ODEs above. We get:
$$
\frac{dP}{dt} = \sum_{i=1}^n \frac{\partial P}{\partial x_i} \frac{dx_i}{dt} = \sum_{i=1}^n \frac{\partial P}{\partial x_i} f_i = 0
$$
we may try to evolve the expression $P$ so that the above relation is satisfied on chosen points (belonging to a real trajectory or just defined on a grid). To avoid evolution drifting towards trivial solutions, unlike Schmidt, we suppress all mutations that give rise to expressions for which $\sum_{i=1}^n \left(\frac{\partial P}{\partial x_i}\right)^2 = 0$. That is, expressions that do not depend on the state.
A mass spring system
As a simple example, consider the following mass-spring system.
The ODEs are:
$$\left\{
\begin{array}{l}
\dot v = -kx \\
\dot x = v
\end{array}\right.
$$
We define a dCGP having three inputs (the state and the constant $k$) and one output ($P$)
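As a sanity check of the criterion (my own sketch, not part of the original notebook): the known invariant $P = v^2 + k x^2$ makes $\frac{\partial P}{\partial x}\dot x + \frac{\partial P}{\partial v}\dot v$ vanish at any state:

```python
def dP_dt(x, v, k):
    # P = v**2 + k*x**2, so dP/dx = 2*k*x and dP/dv = 2*v.
    # Along the flow: dP/dt = dP/dx * v + dP/dv * (-k*x).
    return (2 * k * x) * v + (2 * v) * (-k * x)

print(dP_dt(x=3.0, v=2.5, k=1.7))  # 0.0 up to rounding
```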
End of explanation
"""
n_points = 50
x = []
v = []
k = []
for i in range(n_points):
x.append(random()*2 + 2)
v.append(random()*2 + 2)
k.append(random()*2 + 2)
x = gdual(x,"x",1)
v = gdual(v,"v",1)
k = gdual(k)
def fitness_call(dCGP, x, v, k):
res = dCGP([x,v,k])[0]
dPdx = np.array(res.get_derivative({"dx": 1}))
dPdv = np.array(res.get_derivative({"dv": 1}))
xcoeff = np.array(x.constant_cf)
vcoeff = np.array(v.constant_cf)
kcoeff = np.array(k.constant_cf)
err = dPdx/dPdv - kcoeff * xcoeff / vcoeff
return sum(err * err), 3
# We run an evolutionary strategy ES(1 + offspring)
def run_experiment(max_gen, offsprings, dCGP, x, v, k, screen_output=False):
chromosome = [1] * offsprings
fitness = [1] *offsprings
best_chromosome = dCGP.get()
best_fitness = 1e10
for g in range(max_gen):
for i in range(offsprings):
check = 0
while(check < 1e-3):
dCGP.set(best_chromosome)
dCGP.mutate_active(i+1) # we mutate a number of increasingly higher active genes
fitness[i], check = fitness_call(dCGP, x,v,k)
chromosome[i] = dCGP.get()
for i in range(offsprings):
if fitness[i] <= best_fitness:
if (fitness[i] != best_fitness) and screen_output:
dCGP.set(chromosome[i])
print("New best found: gen: ", g, " value: ", fitness[i], " ", dCGP.simplify(["x","v","k"]))
best_chromosome = chromosome[i]
best_fitness = fitness[i]
if best_fitness < 1e-12:
break
dCGP.set(best_chromosome)
return g, dCGP
# We run nexp experiments to accumulate statistic for the ERT
nexp = 100
offsprings = 10
stop = 2000
res = []
print("restart: \t gen: \t expression:")
for i in range(nexp):
dCGP = expression(inputs=3, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213))
g, dCGP = run_experiment(stop, 10, dCGP, x,v,k, False)
res.append(g)
if g < (stop-1):
print(i, "\t\t", res[i], "\t", dCGP(["x","v","k"]), " a.k.a ", dCGP.simplify(["x","v","k"]))
one_sol = dCGP
res = np.array(res)
ERT = sum(res) / sum(res<(stop-1))
print("ERT Expected run time - avg. number of function evaluations needed: ", ERT * offsprings)
print(one_sol.simplify(["x","v","k"]))
plt.rcParams["figure.figsize"] = [20,20]
one_sol.visualize(["x","v","k"])
"""
Explanation: We define 50 random control points where we check that the prime integral holds: $x \in [2,4]$, $v \in [2,4]$ and $k \in [2, 4]$
End of explanation
"""
kernels = kernel_set(["sum", "mul", "pdiv", "diff","sin","cos"])() # note the call operator (returns the list of kernels)
dCGP = expression(inputs=3, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213))
"""
Explanation: Simple pendulum
Consider the simple pendulum problem. In particular its differential formulation:
The ODEs are:
$$\left\{
\begin{array}{l}
\dot \omega = - \frac gL\sin\theta \\
\dot \theta = \omega
\end{array}\right.
$$
We define a dCGP having three inputs (the state and the constant $\frac gL$) and one output ($P$)
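Sanity check again (my sketch): the pendulum energy $P = \omega^2/2 - \frac gL \cos\theta$ satisfies $\partial P/\partial\theta \,/\, \partial P/\partial\omega = \frac gL \sin\theta / \omega$, which is exactly the ratio the residual in fitness_call drives to zero:

```python
from math import sin

def residual(theta, omega, c):
    # For P = omega**2 / 2 - c*cos(theta):
    dP_dtheta = c * sin(theta)
    dP_domega = omega
    return dP_dtheta / dP_domega - c * sin(theta) / omega

print(residual(0.7, 1.3, 9.81))  # 0.0 up to rounding
```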
End of explanation
"""
n_points = 50
omega = []
theta = []
c = []
for i in range(n_points):
omega.append(random()*10 - 5)
theta.append(random()*10 - 5)
c.append(random()*10)
omega = gdual(omega,"omega",1)
theta = gdual(theta,"theta",1)
c = gdual(c)
def fitness_call(dCGP, theta, omega, c):
res = dCGP([theta, omega, c])[0]
dPdtheta = np.array(res.get_derivative({"dtheta": 1}))
dPdomega = np.array(res.get_derivative({"domega": 1}))
thetacoeff = np.array(theta.constant_cf)
omegacoeff = np.array(omega.constant_cf)
ccoeff = np.array(c.constant_cf)
err = dPdtheta/dPdomega + (-ccoeff * np.sin(thetacoeff)) / omegacoeff
check = sum(dPdtheta*dPdtheta + dPdomega*dPdomega)
return sum(err * err ), check
# We run an evolutionary strategy ES(1 + offspring)
def run_experiment(max_gen, offsprings, dCGP, theta, omega, c, screen_output=False):
chromosome = [1] * offsprings
fitness = [1] *offsprings
best_chromosome = dCGP.get()
best_fitness = 1e10
for g in range(max_gen):
for i in range(offsprings):
check = 0
while(check < 1e-3):
dCGP.set(best_chromosome)
dCGP.mutate_active(i+1) # we mutate a number of increasingly higher active genes
fitness[i], check = fitness_call(dCGP, theta, omega, c)
chromosome[i] = dCGP.get()
for i in range(offsprings):
if fitness[i] <= best_fitness:
if (fitness[i] != best_fitness) and screen_output:
dCGP.set(chromosome[i])
print("New best found: gen: ", g, " value: ", fitness[i], " ", dCGP.simplify(["theta","omega","c"]))
best_chromosome = chromosome[i]
best_fitness = fitness[i]
if best_fitness < 1e-12:
break
dCGP.set(best_chromosome)
return g, dCGP
# We run nexp experiments to accumulate statistic for the ERT
nexp = 100
offsprings = 10
stop = 2000
res = []
print("restart: \t gen: \t expression:")
for i in range(nexp):
dCGP = expression(inputs=3, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213))
g, dCGP = run_experiment(stop, 10, dCGP, theta, omega, c, False)
res.append(g)
if g < (stop-1):
print(i, "\t\t", res[i], "\t", dCGP(["theta","omega","c"]), " a.k.a ", dCGP.simplify(["theta","omega","c"]))
one_sol = dCGP
res = np.array(res)
ERT = sum(res) / sum(res<(stop-1))
print("ERT Expected run time - avg. number of function evaluations needed: ", ERT * offsprings)
print(one_sol.simplify(["theta","omega","c"]))
plt.rcParams["figure.figsize"] = [20,20]
one_sol.visualize(["theta","omega","c"])
"""
Explanation: We define 50 random control points where we check that the prime integral holds: $\omega \in [-5, 5]$, $\theta \in [-5, 5]$, and $\frac gL \in [0, 10]$
End of explanation
"""
kernels = kernel_set(["sum", "mul", "pdiv", "diff"])() # note the call operator (returns the list of kernels)
dCGP = expression(inputs=3, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213))
"""
Explanation: The two-body problem
Consider the two body problem. In particular its differential formulation in polar coordinates:
The ODEs are:
$$\left\{
\begin{array}{l}
\dot v = -\frac{\mu}{r^2} + r\omega^2 \\
\dot \omega = - 2 \frac{v\omega}{r} \\
\dot r = v \\
\dot \theta = \omega
\end{array}\right.
$$
We define a dCGP having five inputs (the state and the constant $\mu$) and one output ($P$)
End of explanation
"""
n_points = 50
v = []
omega = []
r = []
theta = []
mu = []
for i in range(n_points):
v.append(random()*2 + 2)
omega.append(random()*1 + 1)
r.append(random() + 0.1)
theta.append(random()*2 + 2)
mu.append(random() + 1)
r = gdual(r,"r",1)
omega = gdual(omega,"omega",1)
v = gdual(v,"v",1)
theta = gdual(theta,"theta",1)
mu = gdual(mu)
## Use this fitness if energy conservation is to be found (it basically forces the expression to depend on v)
def fitness_call(dCGP, r, v, theta, omega, mu):
res = dCGP([r, v, theta, omega, mu])[0]
dPdr = np.array(res.get_derivative({"dr": 1}))
dPdv = np.array(res.get_derivative({"dv": 1}))
dPdtheta = np.array(res.get_derivative({"dtheta": 1}))
dPdomega = np.array(res.get_derivative({"domega": 1}))
rcoeff = np.array(r.constant_cf)
vcoeff = np.array(v.constant_cf)
thetacoeff = np.array(theta.constant_cf)
omegacoeff = np.array(omega.constant_cf)
mucoeff = np.array(mu.constant_cf)
err = dPdr / dPdv + (-mucoeff/rcoeff**2 + rcoeff*omegacoeff**2) / vcoeff + dPdtheta / dPdv / vcoeff * omegacoeff + dPdomega / dPdv / vcoeff * (-2*vcoeff*omegacoeff/rcoeff)
check = sum(dPdr*dPdr + dPdv*dPdv + dPdomega*dPdomega + dPdtheta*dPdtheta)
return sum(err * err), check
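For intuition about what this fitness rewards (a NumPy-only sketch, separate from the dCGP machinery), the two-body energy $E = v^2/2 + r^2\omega^2/2 - \mu/r$ makes the error expression vanish:

```python
import numpy as np

# Sample the same control-point ranges used below
rng = np.random.default_rng(0)
r_v = rng.uniform(0.1, 1.1, 50)
v_v = rng.uniform(2.0, 4.0, 50)
omega_v = rng.uniform(1.0, 2.0, 50)
mu_v = rng.uniform(1.0, 2.0, 50)

# Hand-computed partials of E = v**2/2 + r**2*omega**2/2 - mu/r
dPdr = r_v * omega_v**2 + mu_v / r_v**2
dPdv = v_v
dPdtheta = np.zeros_like(r_v)
dPdomega = r_v**2 * omega_v

# Same error expression as in fitness_call above
err = (dPdr / dPdv
       + (-mu_v / r_v**2 + r_v * omega_v**2) / v_v
       + dPdtheta / dPdv / v_v * omega_v
       + dPdomega / dPdv / v_v * (-2 * v_v * omega_v / r_v))
print(np.abs(err).max())  # zero up to floating-point rounding
```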
## Use this fitness if any conservation is to be found (will always converge to angular momentum)
def fitness_call_free(dCGP, r, v, theta, omega, mu):
res = dCGP([r, v, theta, omega, mu])[0]
dPdr = np.array(res.get_derivative({"dr": 1}))
dPdv = np.array(res.get_derivative({"dv": 1}))
dPdtheta = np.array(res.get_derivative({"dtheta": 1}))
dPdomega = np.array(res.get_derivative({"domega": 1}))
rcoeff = np.array(r.constant_cf)
vcoeff = np.array(v.constant_cf)
thetacoeff = np.array(theta.constant_cf)
omegacoeff = np.array(omega.constant_cf)
mucoeff = np.array(mu.constant_cf)
err = dPdr * vcoeff + dPdv * (-mucoeff/rcoeff**2 + rcoeff*omegacoeff**2) + dPdtheta * omegacoeff + dPdomega * (-2*vcoeff*omegacoeff/rcoeff)
    check = sum(dPdr*dPdr + dPdv*dPdv + dPdomega*dPdomega + dPdtheta*dPdtheta)
return sum(err * err ), check
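Likewise, the specific angular momentum $h = r^2\omega$ — the integral this "free" fitness tends to converge to — zeroes the error expression, as a NumPy-only sketch (separate from dCGP) confirms:

```python
import numpy as np

rng = np.random.default_rng(0)
r_v = rng.uniform(0.1, 1.1, 50)
v_v = rng.uniform(2.0, 4.0, 50)
omega_v = rng.uniform(1.0, 2.0, 50)
mu_v = rng.uniform(1.0, 2.0, 50)

# Hand-computed partials of h = r**2 * omega
dPdr = 2 * r_v * omega_v
dPdv = np.zeros_like(r_v)
dPdtheta = np.zeros_like(r_v)
dPdomega = r_v**2

# Same error expression as in fitness_call_free:
# dP/dt along the flow = grad(P) . (rdot, vdot, thetadot, omegadot)
err = (dPdr * v_v
       + dPdv * (-mu_v / r_v**2 + r_v * omega_v**2)
       + dPdtheta * omega_v
       + dPdomega * (-2 * v_v * omega_v / r_v))
print(np.abs(err).max())  # zero up to floating-point rounding
```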
# We run an evolutionary strategy ES(1 + offspring)
def run_experiment(max_gen, offsprings, dCGP, r, v, theta, omega, mu, obj_fun, screen_output=False):
chromosome = [1] * offsprings
    fitness = [1] * offsprings
best_chromosome = dCGP.get()
best_fitness = 1e10
for g in range(max_gen):
for i in range(offsprings):
check = 0
while(check < 1e-3):
dCGP.set(best_chromosome)
dCGP.mutate_active(i+1) # we mutate a number of increasingly higher active genes
fitness[i], check = obj_fun(dCGP, r, v, theta, omega, mu)
chromosome[i] = dCGP.get()
for i in range(offsprings):
if fitness[i] <= best_fitness:
if (fitness[i] != best_fitness) and screen_output:
dCGP.set(chromosome[i])
print("New best found: gen: ", g, " value: ", fitness[i], " ", dCGP.simplify(["r","v","theta","omega","mu"]))
best_chromosome = chromosome[i]
best_fitness = fitness[i]
if best_fitness < 1e-12:
break
dCGP.set(best_chromosome)
return g, dCGP
# We run nexp experiments to accumulate statistics for the ERT (angular momentum case)
nexp = 100
offsprings = 10
stop = 2000 #100000
res = []
print("restart: \t gen: \t expression:")
for i in range(nexp):
dCGP = expression(inputs=5, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213))
    g, dCGP = run_experiment(stop, offsprings, dCGP, r, v, theta, omega, mu, fitness_call_free, False)
res.append(g)
if g < (stop-1):
print(i, "\t\t", res[i], "\t", dCGP(["r","v","theta","omega","mu"]), " a.k.a ", dCGP.simplify(["r","v","theta","omega","mu"]))
one_sol = dCGP
res = np.array(res)
ERT = sum(res) / sum(res<(stop-1))
print("ERT Expected run time - avg. number of function evaluations needed: ", ERT * offsprings)
print(one_sol.simplify(["r","v","theta","omega","mu"]))
plt.rcParams["figure.figsize"] = [20,20]
one_sol.visualize(["r","v","theta","omega","mu"])
# We run nexp experiments to accumulate statistics for the ERT (energy conservation case)
nexp = 100
offsprings = 10
stop = 100000
res = []
print("restart: \t gen: \t expression:")
for i in range(nexp):
dCGP = expression(inputs=5, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213))
    g, dCGP = run_experiment(stop, offsprings, dCGP, r, v, theta, omega, mu, fitness_call, False)
res.append(g)
if g < (stop-1):
print(i, "\t\t", res[i], "\t", dCGP(["r","v","theta","omega","mu"]), " a.k.a ", dCGP.simplify(["r","v","theta","omega","mu"]))
one_sol = dCGP
res = np.array(res)
ERT = sum(res) / sum(res<(stop-1))
print("ERT Expected run time - avg. number of function evaluations needed: ", ERT * offsprings)
"""
Explanation: We define 50 random control points where we check that the prime integral holds: $r \in [0.1, 1.1]$, $v \in [2, 4]$, $\omega \in [1, 2]$, $\theta \in [2, 4]$ and $\mu \in [1, 2]$
End of explanation
"""
|
danielgoncalvesti/BIGDATA2017 | Projeto/.ipynb_checkpoints/pagerank-webgoogle-checkpoint.ipynb | gpl-3.0 | import pandas as pd
import networkx as nx
import pyensae
import pyquickhelper
example = pd.read_csv("data/web-Google-test.txt",sep = "\t", names=['from','to'])
example
G = nx.from_pandas_dataframe(example, 'from', 'to',create_using=nx.DiGraph())
import matplotlib as mp
%matplotlib inline
import matplotlib.pyplot as plt
nx.draw_networkx(G, node_color = 'lightgreen', node_size = 1000,arrows=True)
plt.savefig('pictures/graph_example.png')
from operator import add
import os                       # used below to build the data file paths
from pyspark import SparkContext
sc = SparkContext.getOrCreate()
diretorio_base = os.path.join('data')
caminho_teste = os.path.join('web-Google-test.txt')
arquivo_teste = os.path.join(diretorio_base, caminho_teste)
def atualizaRank(listaUrls, rank):
num_urls = len(listaUrls)
rankAtualizado = []
for x in listaUrls:
rankAtualizado.append((x, (rank / num_urls)))
return rankAtualizado
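To see what one Spark iteration computes, here is the same update in plain Python on a hypothetical 4-page graph (no Spark required): each page splits its rank equally among its outgoing links, contributions are summed per destination, and the damped update `0.85 * incoming + 0.15` is applied:

```python
# Hypothetical link structure: page -> list of outgoing links
links = {'a': ['b', 'c'], 'b': ['c'], 'c': ['a'], 'd': ['c']}
ranks = {p: 1.0 for p in links}

def split_rank(out_links, rank):
    # Same rule as atualizaRank above: equal share per outgoing link
    return [(dest, rank / len(out_links)) for dest in out_links]

# One PageRank iteration: redistribute, sum per destination, apply damping
contribs = {}
for page, out_links in links.items():
    for dest, share in split_rank(out_links, ranks[page]):
        contribs[dest] = contribs.get(dest, 0.0) + share
ranks = {p: 0.85 * contribs.get(p, 0.0) + 0.15 for p in links}
print(ranks)
```

Page `c` accumulates the most rank because three pages link to it.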
#numPartitions = 2
#rawData = sc.textFile(fileName, numPartitions)
linksGoogle_teste = sc.textFile(arquivo_teste).filter(lambda x: "#" not in x).map(lambda x: x.split("\t"))
linksAgrupados_teste = linksGoogle_teste.groupByKey().cache()
#print(linksAgrupados.take(1))
#for it in linksAgrupados.take(1)[0][1]:
# print(it)
ranks_teste = linksAgrupados_teste.map(lambda url_agrupados: (url_agrupados[0], 1.0))
for x in range(1,2):
    # Join each page's grouped outgoing links with its current rank (initialized to 1.0)
    # and redistribute that rank equally over the outgoing links
    agrupaIdLinkComRank_teste = linksAgrupados_teste.join(ranks_teste)\
                                .flatMap(lambda url_rank: atualizaRank(url_rank[1][0], url_rank[1][1]))
    # Sum the contributions arriving at each id and apply the damping factor
    ranks_teste = agrupaIdLinkComRank_teste.reduceByKey(add)\
                  .mapValues(lambda rankFatorD: (rankFatorD * 0.85) + 0.15)
"""
Explanation: Implementing the PageRank Algorithm in Spark
Applying PageRank to the Test Dataset
In the code below we build a PageRank example on a simple test dataset (4 rows) to exercise the algorithm, following the structure of the figure below.
End of explanation
"""
for (link, rank) in ranks_teste.sortBy(lambda x:-x[1]).take(3):
print("ID: %s Ranking: %s." % (link, rank))
"""
Explanation: Test result:
End of explanation
"""
diretorio_base = os.path.join('data')
caminho = os.path.join('web-Google.txt')
arquivo = os.path.join(diretorio_base, caminho)
linksGoogle = sc.textFile(arquivo).filter(lambda x: "#" not in x).map(lambda x: x.split("\t"))
linksAgrupados = linksGoogle.groupByKey().cache()
ranks = linksAgrupados.map(lambda url_agrupados: (url_agrupados[0], 1.0))
for x in range(1,8):
    # Join each page's grouped outgoing links with its current rank (initialized to 1.0)
    # and redistribute that rank equally over the outgoing links
    agrupaIdLinkComRank = linksAgrupados.join(ranks)\
                          .flatMap(lambda url_rank: atualizaRank(url_rank[1][0], url_rank[1][1]))
    # Sum the contributions arriving at each id and apply the damping factor
    ranks = agrupaIdLinkComRank.reduceByKey(add)\
            .mapValues(lambda rankFatorD: (rankFatorD * 0.85) + 0.15)
"""
Explanation: Applying PageRank to the Real Dataset
After the test, we apply the PageRank algorithm to the web page link dataset (file web-Google.txt) found on the Stanford website.
End of explanation
"""
for (link, rank) in ranks.sortBy(lambda x:-x[1]).take(10):
print("ID: %s Ranking: %s." % (link, rank))
"""
Explanation: Result: the 10 most relevant pages:
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/ec-earth-consortium/cmip6/models/ec-earth3-veg-lr/toplevel.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-veg-lr', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: EC-EARTH3-VEG-LR
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
|
European-XFEL/h5tools-py | docs/dssc_geometry.ipynb | bsd-3-clause | %matplotlib inline
from karabo_data.geometry2 import DSSC_1MGeometry
# Made up numbers!
quad_pos = [
(-130, 5),
(-130, -125),
(5, -125),
(5, 5),
]
path = 'dssc_geo_june19.h5'
g = DSSC_1MGeometry.from_h5_file_and_quad_positions(path, quad_pos)
g.inspect()
import numpy as np
import matplotlib.pyplot as plt
g.expected_data_shape
"""
Explanation: DSSC detector geometry
As of version 0.5, karabo_data has geometry code for the DSSC detector.
This doesn't currently account for the hexagonal pixels of DSSC, but it's
good enough for a preview of detector images.
End of explanation
"""
a = np.zeros(g.expected_data_shape)
g.plot_data_fast(a, axis_units='m');
"""
Explanation: We'll use some empty data to demonstrate assembling an image.
End of explanation
"""
pixel_pos = g.get_pixel_positions()
print("Pixel positions array shape:", pixel_pos.shape,
"= (modules, slow_scan, fast_scan, x/y/z)")
q1m1_centres = pixel_pos[0]
cx = q1m1_centres[..., 0]
cy = q1m1_centres[..., 1]
distortn = g.to_distortion_array(allow_negative_xy=True)
print("Distortion array shape:", distortn.shape,
"= (modules * slow_scan, fast_scan, corners, z/y/x)")
q1m1_corners = distortn[:128]
from matplotlib.patches import Polygon
from matplotlib.collections import PatchCollection
fig, ax = plt.subplots(figsize=(10, 10))
hexes = []
for ss_pxl in range(4):
for fs_pxl in range(5):
# Create hexagon
corners = q1m1_corners[ss_pxl, fs_pxl]
corners = corners[:, 1:][:, ::-1] # Drop z, flip x & y
hexes.append(Polygon(corners))
# Draw text label near the pixel centre
ax.text(cx[ss_pxl, fs_pxl], cy[ss_pxl, fs_pxl],
' [{}, {}]'.format(ss_pxl, fs_pxl),
verticalalignment='bottom', horizontalalignment='left')
# Add the hexagons to the plot
pc = PatchCollection(hexes, facecolor=(0.75, 1.0, 0.75), edgecolor='k')
ax.add_collection(pc)
# Plot the pixel centres
ax.scatter(cx[:5, :6], cy[:5, :6], marker='x')
# matplotlib is reluctant to show such a small area, so we need to set the limits manually
ax.set_xlim(-0.007, -0.0085) # To match the convention elsewhere, draw x right-to-left
ax.set_ylim(0.0065, 0.0075)
ax.set_ylabel("metres")
ax.set_xlabel("metres")
ax.set_aspect(1)
"""
Explanation: Let's have a close-up look at some pixels in Q1M1. get_pixel_positions() gives us pixel centres.
to_distortion_array() gives pixel corners in a slightly different format, suitable for PyFAI.
PyFAI requires non-negative x and y coordinates. But we want to plot them along with the centre positions, so we pass allow_negative_xy=True to get comparable coordinates.
End of explanation
"""
|
vlas-sokolov/multicube | notebooks/example.ipynb | mit | import numpy as np
%matplotlib inline
import matplotlib.pylab as plt
import pyspeckit
from multicube.subcube import SubCube
from multicube.astro_toolbox import make_test_cube, get_ncores
from IPython.utils import io
import warnings
warnings.filterwarnings('ignore')
"""
Explanation: Flexible initial guess selection with multicube
The example describes the possibilities of the SubCube class - a wrapper class which inherits most of its routines from pyspeckit's Cube class (see the docs here). This notebook demonstrates the usage of the internal methods defined in SubCube, mainly dealing with flexible initial guess selection and SNR estimates.
This notebook:
* generates a spectral cube in FITS format
* makes a guess grid based on parameter ranges specified
* performs Gaussian line fitting with pyspeckit
* makes best initial guess calculatons to help reach global convergence
End of explanation
"""
make_test_cube((300,10,10), outfile='foo.fits', sigma=(10,5))
sc = SubCube('foo.fits')
"""
Explanation: Let's first make a test FITS file, containing a mixture of synthetic signal with some noise put on top of it. The created spectral cube will be 10x10 pixels wide in the plane of the sky and 300 pixels "long" along its spectral axis.
End of explanation
"""
# TODO: move this to astro_toolbox.py
# as a general synthetic cube generator routine
def tinker_ppv(arr):
scale_roll = 15
rel_shift = 30
rel_str = 5
shifted_component = np.roll(arr, rel_shift) / rel_str
for y,x in np.ndindex(arr.shape[1:]):
roll = np.sqrt((x-5)**2 + (y-5)**2) * scale_roll
arr[:,y,x] = np.roll(arr[:,y,x], int(roll))
return arr + shifted_component
sc.cube = tinker_ppv(sc.cube)
"""
Explanation: To make things interesting, let's introduce a radial velocity gradient in our cube along with a second, weaker component.
End of explanation
"""
sc.plot_spectrum(3,7)
"""
Explanation: This is how a sample spectrum looks at x,y = (3,7):
End of explanation
"""
sc.update_model('gaussian')
minpars = [0.1, sc.xarr.min().value, 0.1]
maxpars = [2.0, sc.xarr.max().value, 2.0]
finesse = 10
sc.make_guess_grid(minpars, maxpars, finesse)
"""
Explanation: multicube can take minimal and maximal values for spectral model parameters and permute them to generate a grid in parameter space with given spacing (finesse). This works for an arbitrary number of parameters and with custom resolution for individual parameters (e.g., setting finesse = [3, 20, 5] will also work in this case).
(for Gaussian model in pyspeckit, the parameter order is [amplitude, centroid, sigma])
End of explanation
"""
sc.generate_model()
sc.get_snr_map()
sc.best_guess()
"""
Explanation: This grid, stored under sc.guess_grid, can be used to generate a number of spectral models with pyspeckit, and the guesses that have the least residual rms can be selected for the whole cube:
* sc.best_map stores the map between x,y pixel numbers and the numbers of the corresponding best models
* sc.best_model is the number of the model best suited for the pixel with the highest S/N ratio.
End of explanation
"""
sc.plot_spectrum(3,7)
sc.plotter.axis.plot(sc.xarr.value, sc.model_grid[sc._best_map[3,7]])
# TODO: show the best five guesses or so for this pixel to demonstrate the grid size in the parameter space
"""
Explanation: Here's the best model selected for the spectrum above:
End of explanation
"""
sc1, sc2 = sc, sc.copy()
with io.capture_output() as captured: # suppresses output, normally should not be used
sc1.fiteach(fittype = sc1.fittype,
guesses = sc1.best_snr_guess, # best for the highest SNR pixel
multicore = get_ncores(),
errmap = sc1._rms_map,
verbose = 0,
**sc1.fiteach_args)
# let's plot the velocity field:
sc1.show_fit_param(1, cmap='coolwarm')
clb = sc1.mapplot.FITSFigure.colorbar
clb.set_axis_label_text(sc1.xarr.unit.to_string('latex_inline'))
sc1.mapplot.FITSFigure.set_title("fiteach() for one guess only:")
"""
Explanation: Example #1: fitting the cube with the overall best model:
End of explanation
"""
with io.capture_output() as captured: # suppresses output, normally should not be used
sc2.fiteach(fittype = sc2.fittype,
guesses = sc2.best_guesses,
multicore = get_ncores(),
errmap = sc2._rms_map,
verbose = 0,
**sc2.fiteach_args);
#sc2.show_fit_param(1, cmap='coolwarm')
sc2.show_fit_param(1, cmap='coolwarm')
clb = sc2.mapplot.FITSFigure.colorbar
clb.set_axis_label_text(sc2.xarr.unit.to_string('latex_inline'))
sc2.mapplot.FITSFigure.set_title("fiteach() for a map of best guesses:")
"""
Explanation: Because the same guess was used across the cube with varying velocity centroid, it isn't surprising that the fit failed to converge outside the central spot. Normally, a combination of start_from_point=(x,y) and use_neighbor_as_guess=True arguments can be passed to pyspeckit.Cube.fiteach to gradually spread from (x,y) and avoid divergence in this case, but this approach
* breaks down for large gradients/discontinuities in parameter space
* doesn't work that well when multicore is set to a relatively high number.
Alternatively, moment analysis is commonly used to get the best initial guess for the nonlinear regression. However, the method is restricted to a single Gaussian component. In the following code block, an alternative is shown, with initial guesses for each pixel selected as the best generated model for that individual spectrum:
Example #2: fitting the cube best models for each x,y pixel:
End of explanation
"""
sc.plot_spectrum(3,7, plot_fit=True)
"""
Explanation: Voilà! All the pixels are fit.
End of explanation
"""
|
rickiepark/python-tutorial | tutorial-3/3. decorator.ipynb | mit | def print_name(first, last):
return 'My name is %s, %s' % (last, first)
def p_decor(func):
def func_wrapper(*args, **kwargs):
text = func(*args, **kwargs)
return '<p>%s</p>' % text
return func_wrapper
print_name = p_decor(print_name)
print_name('jobs', 'steve')
@p_decor
def print_name2(first, last):
return 'My name is %s, %s' % (last, first)
print_name2('jobs', 'steve')
"""
Explanation: A decorator is used to add new functionality without modifying the existing function.
Decorators are applied with the '@' symbol.
End of explanation
"""
def html_tag(tag):
def p_decor(func):
def func_wrapper(*args, **kwargs):
text = func(*args, **kwargs)
return '<%s>%s</%s>' % (tag, text, tag)
return func_wrapper
return p_decor
@html_tag('div')
def print_name3(first, last):
'''div tagging function'''
return 'My name is %s, %s' % (last, first)
print_name3('jobs', 'steve')
"""
Explanation: To pass parameters to a decorator, the function is wrapped one more time.
End of explanation
"""
print_name3.__name__
print_name3.__doc__
from functools import wraps
def html_tag(tag):
def p_decor(func):
@wraps(func)
def func_wrapper(*args, **kwargs):
text = func(*args, **kwargs)
return '<%s>%s</%s>' % (tag, text, tag)
return func_wrapper
return p_decor
@html_tag('div')
def print_name4(first, last):
'''div tagging function'''
return 'My name is %s, %s' % (last, first)
print_name4.__name__
print_name4.__doc__
"""
Explanation: Applying a decorator replaces the function's original name (and docstring).
functools.wraps is used to restore the wrapped function's metadata.
End of explanation
"""
|
ara-ta3/ml4se | Chapter3.ipynb | mit | main()
main()
main()
M = [0,1,2,3]
main()
main()
# -*- coding: utf-8 -*-
#
# Estimating a normal distribution by maximum likelihood
#
# 2015/04/23 ver1.0
#
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from pandas import Series, DataFrame
from numpy.random import normal
from scipy.stats import norm
def gauss():
fig = plt.figure()
for c, datapoints in enumerate([2,4,10,100]): # number of data points
ds = normal(loc=0, scale=1, size=datapoints)
mu = np.mean(ds) # estimated mean
sigma = np.sqrt(np.var(ds)) # estimated standard deviation
subplot = fig.add_subplot(2,2,c+1)
subplot.set_title("N=%d" % datapoints)
# draw the true curve
linex = np.arange(-10,10.1,0.1)
orig = norm(loc=0, scale=1)
subplot.plot(linex, orig.pdf(linex), color='green', linestyle='--')
# draw the estimated curve (note: sigma is already the std, so use it directly as scale)
est = norm(loc=mu, scale=sigma)
label = "Sigma=%.2f" % sigma
subplot.plot(linex, est.pdf(linex), color='red', label=label)
subplot.legend(loc=1)
# draw the samples
subplot.scatter(ds, orig.pdf(ds), marker='o', color='blue')
subplot.set_xlim(-4,4)
subplot.set_ylim(0)
fig.show()
if __name__ == '__main__':
gauss()
"""
Explanation: main()
End of explanation
"""
gauss()
gauss()
"""
Explanation: Apparently the estimated standard deviation comes out smaller than the true value,
but that's not what happens above lol
End of explanation
"""
# -*- coding: utf-8 -*-
#
# Checking the consistency and unbiasedness of estimators
#
# 2015/06/01 ver1.0
#
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from pandas import Series, DataFrame
from numpy.random import normal
def draw_subplot(subplot, linex1, liney1, linex2, liney2, ylim):
subplot.set_ylim(ylim)
subplot.set_xlim(min(linex1), max(linex1)+1)
subplot.scatter(linex1, liney1)
subplot.plot(linex2, liney2, color='red', linewidth=4, label="mean")
subplot.legend(loc=0)
def bias():
mean_linex = []
mean_mu = []
mean_s2 = []
mean_u2 = []
raw_linex = []
raw_mu = []
raw_s2 = []
raw_u2 = []
    for n in np.arange(2,51): # vary the number N of observations
        for c in range(2000): # for each N, repeat the estimation 2000 times
ds = normal(loc=0, scale=1, size=n)
raw_mu.append(np.mean(ds))
raw_s2.append(np.var(ds))
raw_u2.append(np.var(ds)*n/(n-1))
raw_linex.append(n)
        mean_mu.append(np.mean(raw_mu)) # average of the sample means
        mean_s2.append(np.mean(raw_s2)) # average of the sample variances
        mean_u2.append(np.mean(raw_u2)) # average of the unbiased variances
        mean_linex.append(n)
    # thin the plot data down to 40 points
raw_linex = raw_linex[0:-1:50]
raw_mu = raw_mu[0:-1:50]
raw_s2 = raw_s2[0:-1:50]
raw_u2 = raw_u2[0:-1:50]
    # Show results for the sample mean
fig1 = plt.figure()
subplot = fig1.add_subplot(1,1,1)
subplot.set_title('Sample mean')
draw_subplot(subplot, raw_linex, raw_mu, mean_linex, mean_mu, (-1.5,1.5))
    # Show results for the sample variance
fig2 = plt.figure()
subplot = fig2.add_subplot(1,1,1)
subplot.set_title('Sample variance')
draw_subplot(subplot, raw_linex, raw_s2, mean_linex, mean_s2, (-0.5,3.0))
    # Show results for the unbiased variance
fig3 = plt.figure()
subplot = fig3.add_subplot(1,1,1)
subplot.set_title('Unbiased variance')
draw_subplot(subplot, raw_linex, raw_u2, mean_linex, mean_u2, (-0.5,3.0))
fig1.show()
fig2.show()
fig3.show()
if __name__ == '__main__':
bias()
"""
Explanation: The standard deviation is underestimated because data points in the tails occur with low probability.
Estimators
When you have a method, based on some reasoning, for computing an estimate, that method is called an estimator.
(The Japanese term literally means "estimation quantity", even though it is a method.)
A good estimator has both consistency and unbiasedness.
Consistency
The estimate approaches the true value as the amount of data grows.
An estimator with this property is called a consistent estimator.
Unbiasedness
The average of estimates obtained over repeated trials approaches the true value.
End of explanation
"""
|
Housebeer/Natural-Gas-Model | Data Analytics/Fitting curve.ipynb | mit | import pandas as pd
import numpy as np
from scipy.optimize import leastsq
import pylab as plt
N = 1000 # number of data points
t = np.linspace(0, 4*np.pi, N)
data = 3.0*np.sin(t+0.001) + 0.5 + np.random.randn(N) # create artificial data with noise
guess_mean = np.mean(data)
guess_std = 3*np.std(data)/(2**0.5)
guess_phase = 0
# we'll use this to plot our first estimate. This might already be good enough for you
data_first_guess = guess_std*np.sin(t+guess_phase) + guess_mean
# Define the function to optimize, in this case, we want to minimize the difference
# between the actual data and our "guessed" parameters
optimize_func = lambda x: x[0]*np.sin(t+x[1]) + x[2] - data
est_std, est_phase, est_mean = leastsq(optimize_func, [guess_std, guess_phase, guess_mean])[0]
# recreate the fitted curve using the optimized parameters
data_fit = est_std*np.sin(t+est_phase) + est_mean
plt.plot(data, '.')
plt.plot(data_fit, label='after fitting')
plt.plot(data_first_guess, label='first guess')
plt.legend()
plt.show()
"""
Explanation: Fitting curve to data
Within this notebook we do some data analytics on historical data to feed some real numbers into the model. Since we assume the consumer data resembles a sinusoid (demand is seasonal), we will focus on fitting data to this kind of curve.
End of explanation
"""
importfile = 'CBS Statline Gas Usage.xlsx'
df = pd.read_excel(importfile, sheetname='Month', skiprows=1)
df.drop(['Onderwerpen_1', 'Onderwerpen_2', 'Perioden'], axis=1, inplace=True)
#df
# transpose
df = df.transpose()
# provide headers
new_header = df.iloc[0]
df = df[1:]
df.rename(columns = new_header, inplace=True)
#df.drop(['nan'], axis=0, inplace=True)
df
x = range(len(df.index))
df['Via regionale netten'].plot(figsize=(18,5))
plt.xticks(x, df.index, rotation='vertical')
plt.show()
"""
Explanation: Import data for our model
This data is imported from the CBS Statline web portal.
End of explanation
"""
#b = self.base_demand
#m = self.max_demand
#y = b + m * (.5 * (1 + np.cos((x/6)*np.pi)))
#b = 603
#m = 3615
N = 84 # number of data points
t = np.linspace(0, 83, N)
#data = b + m*(.5 * (1 + np.cos((t/6)*np.pi))) + 100*np.random.randn(N) # create artificial data with noise
data = np.array(df['Via regionale netten'].values, dtype=np.float64)
guess_mean = np.mean(data)
guess_std = 2695.9075546 #2*np.std(data)/(2**0.5)
guess_phase = 0
# we'll use this to plot our first estimate. This might already be good enough for you
data_first_guess = guess_mean + guess_std*(.5 * (1 + np.cos((t/6)*np.pi + guess_phase)))
# Define the function to optimize, in this case, we want to minimize the difference
# between the actual data and our "guessed" parameters
optimize_func = lambda x: x[0]*(.5 * (1 + np.cos((t/6)*np.pi+x[1]))) + x[2] - data
est_std, est_phase, est_mean = leastsq(optimize_func, [guess_std, guess_phase, guess_mean])[0]
# recreate the fitted curve using the optimized parameters
data_fit = est_mean + est_std*(.5 * (1 + np.cos((t/6)*np.pi + est_phase)))
plt.plot(data, '.')
plt.plot(data_fit, label='after fitting')
plt.plot(data_first_guess, label='first guess')
plt.legend()
plt.show()
print('Via regionale netten')
print('max_demand: %s' %(est_std))
print('phase_shift: %s' %(est_phase))
print('base_demand: %s' %(est_mean))
#data = b + m*(.5 * (1 + np.cos((t/6)*np.pi))) + 100*np.random.randn(N) # create artificial data with noise
data = np.array(df['Elektriciteitscentrales'].values, dtype=np.float64)
guess_mean = np.mean(data)
guess_std = 3*np.std(data)/(2**0.5)
guess_phase = 0
# we'll use this to plot our first estimate. This might already be good enough for you
data_first_guess = guess_mean + guess_std*(.5 * (1 + np.cos((t/6)*np.pi + guess_phase)))
# Define the function to optimize, in this case, we want to minimize the difference
# between the actual data and our "guessed" parameters
optimize_func = lambda x: x[0]*(.5 * (1 + np.cos((t/6)*np.pi+x[1]))) + x[2] - data
est_std, est_phase, est_mean = leastsq(optimize_func, [guess_std, guess_phase, guess_mean])[0]
# recreate the fitted curve using the optimized parameters
data_fit = est_mean + est_std*(.5 * (1 + np.cos((t/6)*np.pi + est_phase)))
plt.plot(data, '.')
plt.plot(data_fit, label='after fitting')
plt.plot(data_first_guess, label='first guess')
plt.legend()
plt.show()
print('Elektriciteitscentrales')
print('max_demand: %s' %(est_std))
print('phase_shift: %s' %(est_phase))
print('base_demand: %s' %(est_mean))
#data = b + m*(.5 * (1 + np.cos((t/6)*np.pi))) + 100*np.random.randn(N) # create artificial data with noise
data = np.array(df['Overige verbruikers'].values, dtype=np.float64)
guess_mean = np.mean(data)
guess_std = 3*np.std(data)/(2**0.5)
guess_phase = 0
guess_saving = .997
# we'll use this to plot our first estimate. This might already be good enough for you
data_first_guess = (guess_mean + guess_std*(.5 * (1 + np.cos((t/6)*np.pi + guess_phase)))) #* np.power(guess_saving,t)
# Define the function to optimize, in this case, we want to minimize the difference
# between the actual data and our "guessed" parameters
optimize_func = lambda x: x[0]*(.5 * (1 + np.cos((t/6)*np.pi+x[1]))) + x[2] - data
est_std, est_phase, est_mean = leastsq(optimize_func, [guess_std, guess_phase, guess_mean])[0]
# recreate the fitted curve using the optimized parameters
data_fit = est_mean + est_std*(.5 * (1 + np.cos((t/6)*np.pi + est_phase)))
plt.plot(data, '.')
plt.plot(data_fit, label='after fitting')
plt.plot(data_first_guess, label='first guess')
plt.legend()
plt.show()
print('Overige verbruikers')
print('max_demand: %s' %(est_std))
print('phase_shift: %s' %(est_phase))
print('base_demand: %s' %(est_mean))
"""
Explanation: Now let's fit the different consumer groups
End of explanation
"""
inputexcel = 'TTFDA.xlsx'
outputexcel = 'pythonoutput.xlsx'
price = pd.read_excel(inputexcel, sheetname='Sheet1', index_col=0)
quantity = pd.read_excel(inputexcel, sheetname='Sheet2', index_col=0)
price.index = pd.to_datetime(price.index, format="%d-%m-%y")
quantity.index = pd.to_datetime(quantity.index, format="%d-%m-%y")
pq = pd.concat([price, quantity], axis=1, join_axes=[price.index])
pqna = pq.dropna()
year = np.arange(2008,2017,1)
coefficientyear = []
for i in year:
x= pqna['Volume'].sort_index().ix["%s"%i]
y= pqna['Last'].sort_index().ix["%s"%i]
#plot the trendline
plt.plot(x,y,'o')
# calc the trendline
z = np.polyfit(x, y, 1)
p = np.poly1d(z)
plt.plot(x,p(x),"r--", label="%s"%i)
plt.xlabel("Volume")
plt.ylabel("Price Euro per MWH")
plt.title('%s: y=%.10fx+(%.10f)'%(i,z[0],z[1]))
# plt.savefig('%s.png' %i)
plt.show()
# the line equation:
print("y=%.10fx+(%.10f)"%(z[0],z[1]))
# save the variables in a list
coefficientyear.append([i, z[0], z[1]])
len(year)
"""
Explanation: Price formation
In order to estimate willingness to sell and willingness to buy, we look at historical data over the past few years, from the day-ahead market at the TTF. Note that this data does not necessarily reflect real consumption.
End of explanation
"""
|
jarvis-fga/Projetos | Problema 4/stars.ipynb | mit | import numpy
def verify_missing_data(data, features):
missing_data = []
for feature in features:
count = 0
for x in range(0, len(data)):
if type(data[feature][x]) is numpy.float64 or type(data[feature][x]) is numpy.int64:
count = count + 1
missing_data.append(count)
print(missing_data)
verify_missing_data(data, features)
"""
Explanation: Features:
mean_of_the_integrated_profile
standard_deviation_of_the_integrated_profile
excess_kurtosis_of_the_integrated_profile
skewness_of_the_integrated_profile
mean_of_the_DM-SNR_curve
standard_deviation_of_the_DM-SNR_curve
excess_kurtosis_of_the_DM-SNR_curve
skewness_of_the_DM-SNR_curve
class
Labels
Pulsar: 1
Não Pulsar: 0
Missing Data?
A base de dados não possui missig data porque seus campos são preenchidos com numpy.float64 e os rótulos com numpy.int64
End of explanation
"""
import numpy
number_samples, number_features = data.shape
number_labels = len(numpy.unique(labels))
from time import time
from sklearn import metrics
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
sample_size = 1900 # look into a better value for this parameter, used by the silhouette metric
def bench_k_means(estimator, name, data):
initial_time = time()
estimator.fit(data)
execution_time = time() - initial_time
# metrics
inertia = estimator.inertia_
homogeneity_score = metrics.homogeneity_score(labels, estimator.labels_)
completeness_score = metrics.completeness_score(labels, estimator.labels_)
v_measure_score = metrics.v_measure_score(labels, estimator.labels_)
adjusted_rand_score = metrics.adjusted_rand_score(labels, estimator.labels_)
adjusted_mutual_info_score = metrics.adjusted_mutual_info_score(labels, estimator.labels_)
silhouette_score = metrics.silhouette_score(data, estimator.labels_, metric='euclidean', sample_size=sample_size)
#show metrics
print('%-9s\t%.2fs\t%i\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f'
% (name, execution_time, inertia, homogeneity_score,completeness_score, v_measure_score,
adjusted_rand_score, adjusted_mutual_info_score, silhouette_score))
print(90 * '_')
print('init\t\ttime\tinertia\t\thomo\tcompl\tv-meas\tARI\tAMI\tsilhouette')
bench_k_means(KMeans(init='k-means++', n_clusters=number_labels, n_init=10),
name="k-means++", data=data)
bench_k_means(KMeans(init='random', n_clusters=number_labels, n_init=10),
name="random", data=data)
# in this case the seeding of the centers is deterministic, hence we run the
# kmeans algorithm only once with n_init=1
pca = PCA(n_components=number_labels).fit(data)
bench_k_means(KMeans(init=pca.components_, n_clusters=number_labels, n_init=1),
name="PCA-based", data=data)
print(90 * '_')
"""
Explanation: Applying K-means and collecting metrics,
following the example available at:
http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_digits.html#sphx-glr-auto-examples-cluster-plot-kmeans-digits-py
End of explanation
"""
|
tensorflow/docs-l10n | site/en-snapshot/quantum/tutorials/qcnn.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
!pip install tensorflow==2.7.0
"""
Explanation: Quantum Convolutional Neural Network
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/quantum/tutorials/qcnn"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/qcnn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/quantum/blob/master/docs/tutorials/qcnn.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/qcnn.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial implements a simplified <a href="https://www.nature.com/articles/s41567-019-0648-8" class="external">Quantum Convolutional Neural Network</a> (QCNN), a proposed quantum analogue to a classical convolutional neural network that is also translationally invariant.
This example demonstrates how to detect certain properties of a quantum data source, such as a quantum sensor or a complex simulation from a device. The quantum data source is a <a href="https://arxiv.org/pdf/quant-ph/0504097.pdf" class="external">cluster state</a> that may or may not have an excitation, which is what the QCNN will learn to detect. (The dataset used in the paper was SPT phase classification.)
Setup
End of explanation
"""
!pip install tensorflow-quantum
# Update package resources to account for version changes.
import importlib, pkg_resources
importlib.reload(pkg_resources)
"""
Explanation: Install TensorFlow Quantum:
End of explanation
"""
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
"""
Explanation: Now import TensorFlow and the module dependencies:
End of explanation
"""
qubit = cirq.GridQubit(0, 0)
# Define some circuits.
circuit1 = cirq.Circuit(cirq.X(qubit))
circuit2 = cirq.Circuit(cirq.H(qubit))
# Convert to a tensor.
input_circuit_tensor = tfq.convert_to_tensor([circuit1, circuit2])
# Define a circuit that we want to append
y_circuit = cirq.Circuit(cirq.Y(qubit))
# Instantiate our layer
y_appender = tfq.layers.AddCircuit()
# Run our circuit tensor through the layer and save the output.
output_circuit_tensor = y_appender(input_circuit_tensor, append=y_circuit)
"""
Explanation: 1. Build a QCNN
1.1 Assemble circuits in a TensorFlow graph
TensorFlow Quantum (TFQ) provides layer classes designed for in-graph circuit construction. One example is the tfq.layers.AddCircuit layer that inherits from tf.keras.Layer. This layer can either prepend or append to the input batch of circuits, as shown in the following figure.
<img src="./images/qcnn_1.png" width="700">
The following snippet uses this layer:
End of explanation
"""
print(tfq.from_tensor(input_circuit_tensor))
"""
Explanation: Examine the input tensor:
End of explanation
"""
print(tfq.from_tensor(output_circuit_tensor))
"""
Explanation: And examine the output tensor:
End of explanation
"""
def generate_data(qubits):
"""Generate training and testing data."""
n_rounds = 20 # Produces n_rounds * n_qubits datapoints.
excitations = []
labels = []
for n in range(n_rounds):
for bit in qubits:
rng = np.random.uniform(-np.pi, np.pi)
excitations.append(cirq.Circuit(cirq.rx(rng)(bit)))
labels.append(1 if (-np.pi / 2) <= rng <= (np.pi / 2) else -1)
split_ind = int(len(excitations) * 0.7)
train_excitations = excitations[:split_ind]
test_excitations = excitations[split_ind:]
train_labels = labels[:split_ind]
test_labels = labels[split_ind:]
return tfq.convert_to_tensor(train_excitations), np.array(train_labels), \
tfq.convert_to_tensor(test_excitations), np.array(test_labels)
"""
Explanation: While it is possible to run the examples below without using tfq.layers.AddCircuit, it's a good opportunity to understand how complex functionality can be embedded into TensorFlow compute graphs.
1.2 Problem overview
You will prepare a cluster state and train a quantum classifier to detect if it is "excited" or not. The cluster state is highly entangled but not necessarily difficult for a classical computer. For clarity, this is a simpler dataset than the one used in the paper.
For this classification task you will implement a deep <a href="https://arxiv.org/pdf/quant-ph/0610099.pdf" class="external">MERA</a>-like QCNN architecture since:
Like the QCNN, the cluster state on a ring is translationally invariant.
The cluster state is highly entangled.
This architecture should be effective at reducing entanglement, obtaining the classification by reading out a single qubit.
<img src="./images/qcnn_2.png" width="1000">
An "excited" cluster state is defined as a cluster state that had a cirq.rx gate applied to any of its qubits. Qconv and QPool are discussed later in this tutorial.
1.3 Building blocks for TensorFlow
<img src="./images/qcnn_3.png" width="1000">
One way to solve this problem with TensorFlow Quantum is to implement the following:
The input to the model is a circuit tensor—either an empty circuit or an X gate on a particular qubit indicating an excitation.
The rest of the model's quantum components are constructed with tfq.layers.AddCircuit layers.
For inference a tfq.layers.PQC layer is used. This reads $\langle \hat{Z} \rangle$ and compares it to a label of 1 for an excited state, or -1 for a non-excited state.
1.4 Data
Before building your model, you can generate your data. In this case it's going to be excitations to the cluster state (The original paper uses a more complicated dataset). Excitations are represented with cirq.rx gates. A rotation whose angle falls within $[-\pi/2, \pi/2]$ is labeled 1, and a larger rotation is labeled -1.
End of explanation
"""
sample_points, sample_labels, _, __ = generate_data(cirq.GridQubit.rect(1, 4))
print('Input:', tfq.from_tensor(sample_points)[0], 'Output:', sample_labels[0])
print('Input:', tfq.from_tensor(sample_points)[1], 'Output:', sample_labels[1])
"""
Explanation: You can see that just like with regular machine learning you create a training and testing set to use to benchmark the model. You can quickly look at some datapoints with:
End of explanation
"""
def cluster_state_circuit(bits):
"""Return a cluster state on the qubits in `bits`."""
circuit = cirq.Circuit()
circuit.append(cirq.H.on_each(bits))
for this_bit, next_bit in zip(bits, bits[1:] + [bits[0]]):
circuit.append(cirq.CZ(this_bit, next_bit))
return circuit
"""
Explanation: 1.5 Define layers
Now define the layers shown in the figure above in TensorFlow.
1.5.1 Cluster state
The first step is to define the <a href="https://arxiv.org/pdf/quant-ph/0504097.pdf" class="external">cluster state</a> using <a href="https://github.com/quantumlib/Cirq" class="external">Cirq</a>, a Google-provided framework for programming quantum circuits. Since this is a static part of the model, embed it using the tfq.layers.AddCircuit functionality.
End of explanation
"""
SVGCircuit(cluster_state_circuit(cirq.GridQubit.rect(1, 4)))
"""
Explanation: Display a cluster state circuit for a rectangle of <a href="https://cirq.readthedocs.io/en/stable/generated/cirq.GridQubit.html" class="external"><code>cirq.GridQubit</code></a>s:
End of explanation
"""
def one_qubit_unitary(bit, symbols):
"""Make a Cirq circuit enacting a rotation of the bloch sphere about the X,
Y and Z axis, that depends on the values in `symbols`.
"""
return cirq.Circuit(
cirq.X(bit)**symbols[0],
cirq.Y(bit)**symbols[1],
cirq.Z(bit)**symbols[2])
def two_qubit_unitary(bits, symbols):
"""Make a Cirq circuit that creates an arbitrary two qubit unitary."""
circuit = cirq.Circuit()
circuit += one_qubit_unitary(bits[0], symbols[0:3])
circuit += one_qubit_unitary(bits[1], symbols[3:6])
circuit += [cirq.ZZ(*bits)**symbols[6]]
circuit += [cirq.YY(*bits)**symbols[7]]
circuit += [cirq.XX(*bits)**symbols[8]]
circuit += one_qubit_unitary(bits[0], symbols[9:12])
circuit += one_qubit_unitary(bits[1], symbols[12:])
return circuit
def two_qubit_pool(source_qubit, sink_qubit, symbols):
"""Make a Cirq circuit to do a parameterized 'pooling' operation, which
attempts to reduce entanglement down from two qubits to just one."""
pool_circuit = cirq.Circuit()
sink_basis_selector = one_qubit_unitary(sink_qubit, symbols[0:3])
source_basis_selector = one_qubit_unitary(source_qubit, symbols[3:6])
pool_circuit.append(sink_basis_selector)
pool_circuit.append(source_basis_selector)
pool_circuit.append(cirq.CNOT(control=source_qubit, target=sink_qubit))
pool_circuit.append(sink_basis_selector**-1)
return pool_circuit
"""
Explanation: 1.5.2 QCNN layers
Define the layers that make up the model using the <a href="https://arxiv.org/abs/1810.03787" class="external">Cong and Lukin QCNN paper</a>. There are a few prerequisites:
The one- and two-qubit parameterized unitary matrices from the <a href="https://arxiv.org/abs/quant-ph/0507171" class="external">Tucci paper</a>.
A general parameterized two-qubit pooling operation.
End of explanation
"""
SVGCircuit(one_qubit_unitary(cirq.GridQubit(0, 0), sympy.symbols('x0:3')))
"""
Explanation: To see what you created, print out the one-qubit unitary circuit:
End of explanation
"""
SVGCircuit(two_qubit_unitary(cirq.GridQubit.rect(1, 2), sympy.symbols('x0:15')))
"""
Explanation: And the two-qubit unitary circuit:
End of explanation
"""
SVGCircuit(two_qubit_pool(*cirq.GridQubit.rect(1, 2), sympy.symbols('x0:6')))
"""
Explanation: And the two-qubit pooling circuit:
End of explanation
"""
def quantum_conv_circuit(bits, symbols):
"""Quantum Convolution Layer following the above diagram.
Return a Cirq circuit with the cascade of `two_qubit_unitary` applied
to all pairs of qubits in `bits` as in the diagram above.
"""
circuit = cirq.Circuit()
for first, second in zip(bits[0::2], bits[1::2]):
circuit += two_qubit_unitary([first, second], symbols)
for first, second in zip(bits[1::2], bits[2::2] + [bits[0]]):
circuit += two_qubit_unitary([first, second], symbols)
return circuit
"""
Explanation: 1.5.2.1 Quantum convolution
As in the <a href="https://arxiv.org/abs/1810.03787" class="external">Cong and Lukin</a> paper, define the 1D quantum convolution as the application of a two-qubit parameterized unitary to every pair of adjacent qubits with a stride of one.
End of explanation
"""
SVGCircuit(
quantum_conv_circuit(cirq.GridQubit.rect(1, 8), sympy.symbols('x0:15')))
"""
Explanation: Display the (very horizontal) circuit:
End of explanation
"""
def quantum_pool_circuit(source_bits, sink_bits, symbols):
"""A layer that specifies a quantum pooling operation.
A Quantum pool tries to learn to pool the relevant information from two
qubits onto 1.
"""
circuit = cirq.Circuit()
for source, sink in zip(source_bits, sink_bits):
circuit += two_qubit_pool(source, sink, symbols)
return circuit
"""
Explanation: 1.5.2.2 Quantum pooling
A quantum pooling layer pools from $N$ qubits to $\frac{N}{2}$ qubits using the two-qubit pool defined above.
End of explanation
"""
test_bits = cirq.GridQubit.rect(1, 8)
SVGCircuit(
quantum_pool_circuit(test_bits[:4], test_bits[4:], sympy.symbols('x0:6')))
"""
Explanation: Examine a pooling component circuit:
End of explanation
"""
def create_model_circuit(qubits):
"""Create sequence of alternating convolution and pooling operators
which gradually shrink over time."""
model_circuit = cirq.Circuit()
symbols = sympy.symbols('qconv0:63')
# Cirq uses sympy.Symbols to map learnable variables. TensorFlow Quantum
# scans incoming circuits and replaces these with TensorFlow variables.
model_circuit += quantum_conv_circuit(qubits, symbols[0:15])
model_circuit += quantum_pool_circuit(qubits[:4], qubits[4:],
symbols[15:21])
model_circuit += quantum_conv_circuit(qubits[4:], symbols[21:36])
model_circuit += quantum_pool_circuit(qubits[4:6], qubits[6:],
symbols[36:42])
model_circuit += quantum_conv_circuit(qubits[6:], symbols[42:57])
model_circuit += quantum_pool_circuit([qubits[6]], [qubits[7]],
symbols[57:63])
return model_circuit
# Create our qubits and readout operators in Cirq.
cluster_state_bits = cirq.GridQubit.rect(1, 8)
readout_operators = cirq.Z(cluster_state_bits[-1])
# Build a sequential model enacting the logic in 1.3 of this notebook.
# Here you are making the static cluster state prep as a part of the AddCircuit and the
# "quantum datapoints" are coming in the form of excitation
excitation_input = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
cluster_state = tfq.layers.AddCircuit()(
excitation_input, prepend=cluster_state_circuit(cluster_state_bits))
quantum_model = tfq.layers.PQC(create_model_circuit(cluster_state_bits),
readout_operators)(cluster_state)
qcnn_model = tf.keras.Model(inputs=[excitation_input], outputs=[quantum_model])
# Show the keras plot of the model
tf.keras.utils.plot_model(qcnn_model,
show_shapes=True,
show_layer_names=False,
dpi=70)
"""
Explanation: 1.6 Model definition
Now use the defined layers to construct a purely quantum CNN. Start with eight qubits, pool down to one, then measure $\langle \hat{Z} \rangle$.
End of explanation
"""
# Generate some training data.
train_excitations, train_labels, test_excitations, test_labels = generate_data(
cluster_state_bits)
# Custom accuracy metric.
@tf.function
def custom_accuracy(y_true, y_pred):
y_true = tf.squeeze(y_true)
y_pred = tf.map_fn(lambda x: 1.0 if x >= 0 else -1.0, y_pred)
return tf.keras.backend.mean(tf.keras.backend.equal(y_true, y_pred))
qcnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
loss=tf.losses.mse,
metrics=[custom_accuracy])
history = qcnn_model.fit(x=train_excitations,
y=train_labels,
batch_size=16,
epochs=25,
verbose=1,
validation_data=(test_excitations, test_labels))
plt.plot(history.history['loss'][1:], label='Training')
plt.plot(history.history['val_loss'][1:], label='Validation')
plt.title('Training a Quantum CNN to Detect Excited Cluster States')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
"""
Explanation: 1.7 Train the model
Train the model over the full batch to simplify this example.
End of explanation
"""
# 1-local operators to read out
readouts = [cirq.Z(bit) for bit in cluster_state_bits[4:]]
def multi_readout_model_circuit(qubits):
"""Make a model circuit with less quantum pool and conv operations."""
model_circuit = cirq.Circuit()
symbols = sympy.symbols('qconv0:21')
model_circuit += quantum_conv_circuit(qubits, symbols[0:15])
model_circuit += quantum_pool_circuit(qubits[:4], qubits[4:],
symbols[15:21])
return model_circuit
# Build a model enacting the logic in 2.1 of this notebook.
excitation_input_dual = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
cluster_state_dual = tfq.layers.AddCircuit()(
excitation_input_dual, prepend=cluster_state_circuit(cluster_state_bits))
quantum_model_dual = tfq.layers.PQC(
multi_readout_model_circuit(cluster_state_bits),
readouts)(cluster_state_dual)
d1_dual = tf.keras.layers.Dense(8)(quantum_model_dual)
d2_dual = tf.keras.layers.Dense(1)(d1_dual)
hybrid_model = tf.keras.Model(inputs=[excitation_input_dual], outputs=[d2_dual])
# Display the model architecture
tf.keras.utils.plot_model(hybrid_model,
show_shapes=True,
show_layer_names=False,
dpi=70)
"""
Explanation: 2. Hybrid models
You don't have to go from eight qubits to one qubit using quantum convolution—you could have done one or two rounds of quantum convolution and fed the results into a classical neural network. This section explores quantum-classical hybrid models.
2.1 Hybrid model with a single quantum filter
Apply one layer of quantum convolution, reading out $\langle \hat{Z}_n \rangle$ on all bits, followed by a densely-connected neural network.
<img src="./images/qcnn_5.png" width="1000">
2.1.1 Model definition
End of explanation
"""
hybrid_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
loss=tf.losses.mse,
metrics=[custom_accuracy])
hybrid_history = hybrid_model.fit(x=train_excitations,
y=train_labels,
batch_size=16,
epochs=25,
verbose=1,
validation_data=(test_excitations,
test_labels))
plt.plot(history.history['val_custom_accuracy'], label='QCNN')
plt.plot(hybrid_history.history['val_custom_accuracy'], label='Hybrid CNN')
plt.title('Quantum vs Hybrid CNN performance')
plt.xlabel('Epochs')
plt.legend()
plt.ylabel('Validation Accuracy')
plt.show()
"""
Explanation: 2.1.2 Train the model
End of explanation
"""
excitation_input_multi = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
cluster_state_multi = tfq.layers.AddCircuit()(
excitation_input_multi, prepend=cluster_state_circuit(cluster_state_bits))
# apply 3 different filters and measure expectation values
quantum_model_multi1 = tfq.layers.PQC(
multi_readout_model_circuit(cluster_state_bits),
readouts)(cluster_state_multi)
quantum_model_multi2 = tfq.layers.PQC(
multi_readout_model_circuit(cluster_state_bits),
readouts)(cluster_state_multi)
quantum_model_multi3 = tfq.layers.PQC(
multi_readout_model_circuit(cluster_state_bits),
readouts)(cluster_state_multi)
# concatenate outputs and feed into a small classical NN
concat_out = tf.keras.layers.concatenate(
[quantum_model_multi1, quantum_model_multi2, quantum_model_multi3])
dense_1 = tf.keras.layers.Dense(8)(concat_out)
dense_2 = tf.keras.layers.Dense(1)(dense_1)
multi_qconv_model = tf.keras.Model(inputs=[excitation_input_multi],
outputs=[dense_2])
# Display the model architecture
tf.keras.utils.plot_model(multi_qconv_model,
show_shapes=True,
show_layer_names=True,
dpi=70)
"""
Explanation: As you can see, with very modest classical assistance, the hybrid model will usually converge faster than the purely quantum version.
2.2 Hybrid convolution with multiple quantum filters
Now let's try an architecture that uses multiple quantum convolutions and a classical neural network to combine them.
<img src="./images/qcnn_6.png" width="1000">
2.2.1 Model definition
End of explanation
"""
multi_qconv_model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
loss=tf.losses.mse,
metrics=[custom_accuracy])
multi_qconv_history = multi_qconv_model.fit(x=train_excitations,
y=train_labels,
batch_size=16,
epochs=25,
verbose=1,
validation_data=(test_excitations,
test_labels))
plt.plot(history.history['val_custom_accuracy'][:25], label='QCNN')
plt.plot(hybrid_history.history['val_custom_accuracy'][:25], label='Hybrid CNN')
plt.plot(multi_qconv_history.history['val_custom_accuracy'][:25],
label='Hybrid CNN \n Multiple Quantum Filters')
plt.title('Quantum vs Hybrid CNN performance')
plt.xlabel('Epochs')
plt.legend()
plt.ylabel('Validation Accuracy')
plt.show()
"""
Explanation: 2.2.2 Train the model
End of explanation
"""
|
0x4a50/udacity-0x4a50-deep-learning-nanodegree | embeddings/Skip-Gram_word2vec.ipynb | mit | import time
import numpy as np
import tensorflow as tf
import utils
"""
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
"""
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
"""
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
"""
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
"""
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
"""
Explanation: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
"""
## Your code here
train_words = # The final subsampled word list
"""
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge than a deep learning one, but being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
"""
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# Your code here
return
"""
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
"""
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
"""
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
End of explanation
"""
train_graph = tf.Graph()
with train_graph.as_default():
inputs =
labels =
"""
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
"""
n_vocab = len(int_to_vocab)
n_embedding = # Number of embedding features
with train_graph.as_default():
embedding = # create embedding weight matrix here
embed = # use tf.nn.embedding_lookup to get the hidden layer output
"""
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
"""
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = # create softmax weight matrix here
softmax_b = # create softmax biases here
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
"""
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
"""
import random

with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from each of the ranges (0,100) and (1000,1100); lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
"""
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
"""
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
"""
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
"""
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
"""
Explanation: Restore the trained network if you need to:
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
"""
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation
"""
|
rcrehuet/Python_for_Scientists_2017 | notebooks/6_2_More NumPy.ipynb | gpl-3.0 | import numpy as np
arr = np.array([1,2,3])
print(arr," is of type ",arr.dtype)
float_arr = arr.astype(np.float64)
print(float_arr," is of type ",float_arr.dtype)
"""
Explanation: NumPy: computing with arrays
Numerical Python (NumPy) is the fundamental package for high performance scientific computing and data analysis. It is the foundation for most high-level tools. Features:
- ndarray: fast and space-efficient multidimensional array providing vectorized arithmetic operations
- Standard mathematical functions for fast operations on entire arrays of data without having to write loops
- tools for reading / writing array data to disk and working with memory-mapped files
- Linear algebra, random number generation, and Fourier transform capabilities
- tools for integrating C, C++, Fortran codes
For most data analysis applications, the main functionalities we might be interested in are:
fast vectorized array operations: data munging, cleaning, subsetting, filtering, transformation
sorting algorithms, set operations
efficient descriptive statistics, aggregating and summarizing data
data alignment and relational data manipulation for merging heterogeneous data sets
expressing conditional logic as array expressions
group-wise data manipulations
NumPy provides the computational foundation for these operations, but you might want to use pandas as your basis for data analysis, as it provides a rich, high-level interface that makes most common data tasks concise and simple. It also provides some more domain-specific functionality, like time series manipulation, that is absent in NumPy.
Data types for ndarrays
dtype is a special object containing the information the ndarray needs to interpret a chunk of memory as a particular type of data:
arr = np.array([1,2,3], dtype=np.float64)
dtype objects are part of what makes NumPy so powerful and flexible. In most cases they map directly onto an underlying machine representation. This makes it easy to read/write binary streams of data to disk and connect to code written in languages such as C or Fortran. The most common numerical types are floatXX and intXX, where XX stands for the number of bits (typically 64 for float and 32 for int).
An array originally created with one type can be converted or cast to another type:
End of explanation
"""
numeric_string = np.array(input("Type a sequence of numbers separated by spaces: ").split())
numeric_string.astype(float)
"""
Explanation: However, note that astype() produces a new array object.
Numbers introduced as strings can also be cast to numerical values:
End of explanation
"""
int_array = np.arange(10)
other_array = np.array([0.5, 1.2], dtype=float)
int_array.astype(other_array.dtype)
"""
Explanation: One array can be casted to another array's dtype:
End of explanation
"""
vec = np.arange(10)*3 + 0.5
vec
"""
Explanation: Some reminders on indexing and slicing
Indexing and slicing in NumPy is a topic that could fill pages and pages. A single array element or a subset can be selected in multiple ways. The following will be a short reminder on array indexing and slicing in NumPy. In the case of one-dimensional arrays there are apparently no differences with respect to indexing in Python lists:
End of explanation
"""
vec[5]
"""
Explanation: We obtain the sixth element of the array similarly to what one would do in a list:
End of explanation
"""
vec[5:8] #as in lists, first index is included, last index is omitted
"""
Explanation: The subset formed by the sixth, seventh, and eigth element:
End of explanation
"""
vec[5] = 11
vec
"""
Explanation: We can modify the value of a single element:
End of explanation
"""
vec [:3] = [11, 12, 13]
vec
"""
Explanation: Or a subset of the vector:
End of explanation
"""
vec [-3:] = 66
vec
"""
Explanation: If a scalar is assigned to a slice of the array, the value is broadcast:
End of explanation
"""
print("This is the vector",vec)
slice = vec[5:8] #we obtain a slice of the vector
slice[1] = 1111 #modify the second element in the slice
print("This is the vector",vec)
"""
Explanation: READ THIS An important difference with respect to lists is that slices of a given array are views on the original array. This means that the data is not copied and any modification will be reflected in the source array:
End of explanation
"""
arr2D = np.arange(9).reshape(3,3)
arr2D[2]
"""
Explanation: This can be quite surprising and even inconvenient for those coming from programming language such as Fortran90 that copy data more zealously. If you want to actually copy a slice of an ndarray instead of just viewing it, you can either create a new array or use the copy() method:
slice = np.array(vec[5:8])
slice = vec[5:8].copy()
With higher dimensional arrays more possibilities are available. For instance, in the case of a 2D array, the elements at each index are no longer scalars but rather one-dimensional arrays. Similarly in the case of a 3D array, at each index we would have a 2D array composed of 1D arrays at each subindex, and so on.
End of explanation
"""
arr2D[2][1]
arr2D[2,1]
"""
Explanation: Individual elements can be accessed recursively. You can use either the syntax employed in lists or the more compact and nice comma-separated list of indices to select individual elements.
End of explanation
"""
arr3D = np.arange(27).reshape(3,3,3)
arr3D
"""
Explanation: In the case of multidimensional arrays, omitting later indices will produce lower-dimensional array consisting of all the data along the higher dimensions:
End of explanation
"""
vec[1:3]
"""
Explanation: Check that, for instance, arr3D[0] is a 3x3 matrix:
Similarly, arr3D[2,1] gives us all the values of the three-indexed variable whose indices start with (2,1), forming a 1D array:
Slices
Like lists, ndarrays can be sliced using the familiar syntax:
End of explanation
"""
arr2D[:2]
"""
Explanation: Higher dimensional objects can be sliced along one or more axes, and you can also mix integers with slices. For instance, in the 2D case:
End of explanation
"""
arr2D[:2,1:]
"""
Explanation: The matrix is sliced along its first axis (rows) and the first two rows are given. One can pass multiple slices:
End of explanation
"""
arr2D[:,-1] #produce the last column of the matrix
"""
Explanation: Slice the first two rows and columns from the second onwards. Keep in mind that slices are views of the original array. The colon by itself means that the entire axis is taken:
End of explanation
"""
arr2D = np.random.randint(0,20,35).reshape(7,5)
"""
Explanation: Boolean indexing
To illustrate this indexing style let us build a 7x5 matrix filled with random integers between 0 and 19 (np.random.randint excludes the upper bound).
End of explanation
"""
arr1D = np.random.randn(7)
"""
Explanation: A second sequence, a 1D array composed of 7 elements.
End of explanation
"""
np.abs(arr1D) > 0.5
"""
Explanation: Suppose each value in arr1D corresponds to a row in the arr2D array and we want to select all the rows corresponding to absolute values in arr1D greater than 0.5. If we apply a logical operation onto the array the result is a NumPy array of the same shape with a boolean value according to the outcome of the logical operation on the corresponding element.
End of explanation
"""
arr2D[np.abs(arr1D) > 0.5]
"""
Explanation: This boolean array can be passed when indexing the array:
End of explanation
"""
arr2D[np.abs(arr1D) > 0.5, -1] #last column of those rows matching the logical condition
"""
Explanation: Clearly, the boolean array must be of the same length as the axis it's indexing. Boolean arrays can be mixed with slices or integer indices:
End of explanation
"""
arr2D[(np.abs(arr1D) > 0.5) & (arr1D > 0)]
"""
Explanation: Boolean conditions can be combined using logical operators like &(equivalent to and) and | (or)
End of explanation
"""
arr2D
arr2D[[0,2]] #Fancy index is an array requesting rows 0 and 2
"""
Explanation: However, keep in mind that the Python keywords and and or do not work with boolean arrays; use & and | instead.
We can also index the array with a mask produced by a logical condition on the array itself. Which elements of arr2D are even?
Fancy Indexing
Fancy indexing is a term adopted by NumPy to describe indexing using integer arrays.
End of explanation
"""
arr2D[[-1,-3]]
"""
Explanation: Using negative indices selects rows from the end:
End of explanation
"""
arr2D[[0,2],[-1,-3]]
"""
Explanation: Passing multiple indices does something slightly different; it selects a 1D array of elements corresponding to each tuple of indices:
End of explanation
"""
arr2D.shape
arr2D.T.shape
arr2D
arr2D.T
"""
Explanation: Check that we have selected elements (0,-1) and (2,-3):
Transposing Arrays and Swapping Axes
Transposing is a special form of reshaping which returns a view on the underlying data without copying anything. Arrays have the transpose method and also the special T attribute:
End of explanation
"""
arr = np.arange(16).reshape((2,2,4))
arr
arr.transpose(1,0,2)
arr[0]
arr.transpose(1,0,2)[1]
"""
Explanation: For higher dimensional arrays, transpose will accept a tuple of axis numbers to permute the axes:
End of explanation
"""
arr1D = np.arange(10)
np.sqrt(arr1D)
"""
Explanation: Universal Functions: Fast Element-wise Array Functions
A universal function (ufunc) is a function that performs elementwise operations on data stored in ndarrays. They can be seen as fast vectorized wrappers for simple functions that take an array of values and produce an array of results. Many ufuncs are simple elementwise transformations and are called unary ufuncs:
End of explanation
"""
i = np.random.randint(8, size=8)
i
j = np.random.randint(8, size=8)
j
np.maximum(i,j) #produces element-wise maxima
"""
Explanation: Other functions take two arrays as arguments. They are called binary ufuncs and some examples are add or maximum.
End of explanation
"""
points = np.arange(-5, 5, 0.01)
print(points.ndim, points.shape, points.size)
"""
Explanation: See the following link for more on ufuncs.
Data Processing Using Arrays
Let us start with a simple example: we will evaluate the function $\sqrt{x^2 + y^2}$ on a grid of equally spaced $(x,y)$ values.
End of explanation
"""
import numpy as np
x, y = np.meshgrid(points, points)
y
"""
Explanation: The np.meshgrid() function takes two 1D arrays and produces two 2D matrices corresponding to all pairs of (x, y) in the two arrays:
End of explanation
"""
zp = np.sqrt(x**2 + y**2)
zp
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(zp, cmap=plt.cm.gray)
plt.colorbar()
plt.title("Image plot of $\sqrt{x^2 + y^2}$ for a evenly spaced grid")
"""
Explanation: To evaluate the function we simply write the same expression we would write for two scalar values, only now x and y are arrays. Being of the same dimension, the operation is carried out element wise:
End of explanation
"""
xarr = np.linspace(1.1,1.5,5)
yarr = np.linspace(2.1,2.5,5)
"""
Explanation: Expressing Conditional Logic as Array Operations
The numpy.where function is a vectorized version of the ternary expression:
x if condition else y
To see how it works, let us first build two 1D arrays:
End of explanation
"""
condition = np.array([True, False, True, True, False])
"""
Explanation: We also have a boolean array of the same dimension, for instance:
End of explanation
"""
new_arr = [x if c else y for x,y,c in zip(xarr, yarr, condition)]
new_arr
"""
Explanation: Let us build a new array combining xarr and yarr in such a way that whenever condition is True we take the value from xarr, and from yarr otherwise:
End of explanation
"""
new_arr = np.where(condition, xarr, yarr)
new_arr
"""
Explanation: The latter is not the most efficient solution:
it will not be very fast for large arrays
it will not work if the arrays are multidimensional
The np.where function offers a concise and fast alternative:
End of explanation
"""
arr = np.random.randn(16).reshape(4,4)
arr
new1 = np.where(arr > 0, 2, -2)
new1
new2 = np.where(arr > 0, 2, arr)
new2
"""
Explanation: Exercise
- Build a 4x4 array composed by randomly distributed values
- Build a new 4x4 arrays that contains the value 2 where the original array has a positive value and -2 where the original array contains a negative one
- Build another array setting only the positive elements to 2
End of explanation
"""
import numpy as np
arr = np.random.randn(5,4)
arr
arr.mean()
np.mean(arr)
"""
Explanation: Mathematical and Statistical Methods
Numpy offers a set of mathematical functions that compute statistics about an entire array or about the data along an axis. These functions are accessed as array methods. Reductions or aggregations such as sum, mean, and std can be used by calling the array method or the Numpy function:
End of explanation
"""
arr.mean(axis=1)
arr.mean(1)
"""
Explanation: Functions like mean and sum take an optional axis argument which computes the statistic over the given axis, yielding an array with one fewer dimension
End of explanation
"""
arr = np.arange(9).reshape(3,3)
arr
arr.cumsum() #cumulative sum of the flattened array
arr.cumsum(0) #cumulative sum over columns
arr.cumsum(1) #cumulative sum over rows
"""
Explanation: Other methods like cumsum and cumprod do not aggregate but they produce an array of the intermediate results:
End of explanation
"""
arr = np.random.randn(100)
(arr > 0).sum()
"""
Explanation: Methods for Boolean Arrays
Boolean values are treated as 1 (True) and 0 (False) when applying statistical methods. For instance, we could count the True values in a boolean array:
End of explanation
"""
np.random.randint?
boolarr = np.array(np.random.randint(0,2,size=10),dtype=np.bool)
boolarr
boolarr.any()
boolarr.all()
"""
Explanation: There are two additional methods that are particularly useful for boolean arrays: any and all. any tests whether one or more values in an array is True, while all checks if every value is True:
End of explanation
"""
arr = np.random.randn(10)
arr
arr.sort()
arr
"""
Explanation: By the way, these two methods can also be applied to non-boolean arrays; non-zero elements evaluate to True.
Sorting
Numpy arrays can be sorted in-place using the sort method:
End of explanation
"""
arr = np.random.randn(5,3) # 5x3 matrix with random values
arr
arr.sort(1) #sort along axis 1 (within each row)
arr
arr.sort(0) #sort along axis 0 (within each column)
arr
"""
Explanation: In the case of multidimensional arrays, each 1D section of values can be sorted in-place along an axis.
End of explanation
"""
arr = np.random.randn(1000)
arr.sort()
quantile = 0.1
arr[int(quantile*len(arr))]
"""
Explanation: It is worthwhile mentioning that the np.sort() function is not fully equivalent to the sort method. While the method sorts the array in-place, the function returns a new, sorted copy of the array.
Exercise: Write a piece of code that:
- builds a 1D array of 1000 normally distributed random numbers
- computes the 10% quantile of the array (i.e. the value below which 10% of the values fall)
End of explanation
"""
x = np.linspace(1,6,6).reshape(2,3)
y = np.random.randint(0,30,size=6).reshape(3,2)
x
y
x.dot(y)
"""
Explanation: Linear Algebra
Keep in mind that multiplying two arrays with * results in an element-wise product instead of a vectorial (or matrix) dot product. For that, NumPy provides both an array method and a function:
End of explanation
"""
from numpy.linalg import inv, qr
X = np.random.randn(5,5)
mat = X.T.dot(X) #Matrix product between X and its transpose
inv(mat)
mat.dot(inv(mat))
"""
Explanation: The numpy.linalg module has a standard set of matrix decomposition, inverse, and determinant functions. These are implemented under the hood using the same industry-standard Fortran libraries used by other languages like MATLAB and R, such as BLAS, LAPACK or Intel MKL:
End of explanation
"""
q, r = qr(mat)
"""
Explanation: Type qr?:
End of explanation
"""
|
SHAFNehal/Course | code/Introduction to Deep Learning.ipynb | apache-2.0 | # Import the required packages
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import scipy
import math
import random
import string
random.seed(123)
# Display plots inline
%matplotlib inline
# Define plot's default figure size
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
#read the datasets
train = pd.read_csv("data/intro_to_ann.csv")
print (train.head())
X, y = np.array(train.iloc[:,0:2]), np.array(train.iloc[:,2]) # .ix was removed from pandas; use .iloc
print(X.shape, y.shape)
plt.scatter(X[:,0], X[:,1], s=40, c=y, cmap=plt.cm.BuGn)
"""
Explanation: Introduction to Deep Learning
Goal : This notebook explains the building blocks of a neural network model.
Data : Data is taken from sklearn's make_moons dataset. There are two features and the target is a categorical variable (0/1). The aim is to devise an algorithm that correctly classifies the datapoints.
Aproach: We will build the neural networks from first principles. We will create a very simple model and understand how it works. We will also be implementing backpropagation algorithm. Please note that this code is not optimized. This is for instructive purpose - for us to understand how ANN works. Libraries like theano have highly optimized code.
<img src="image/nn-3-layer-network.png">
End of explanation
"""
# calculate a random number where: a <= rand < b
def rand(a, b):
return (b-a)*random.random() + a
# Make a matrix, with every entry set to fill
def makeMatrix(I, J, fill=0.0):
    return np.full([I, J], fill)
# our sigmoid function
def sigmoid(x):
#return math.tanh(x)
return 1/(1+np.exp(-x))
# derivative of our sigmoid function, in terms of the output (i.e. y)
def dsigmoid(y):
return (y * (1- y))
"""
Explanation: Let's start building our NN's building blocks.
This process will eventually result in our own NN class
Function to generate a random number, given two numbers
When we initialize the neural networks, the weights have to be randomly assigned.
End of explanation
"""
class NN:
def __init__(self, ni, nh, no):
# number of input, hidden, and output nodes
self.ni = ni + 1 # +1 for bias node
self.nh = nh
self.no = no
# activations for nodes
self.ai = [1.0]*self.ni
self.ah = [1.0]*self.nh
self.ao = [1.0]*self.no
# create weights
self.wi = makeMatrix(self.ni, self.nh)
self.wo = makeMatrix(self.nh, self.no)
# set them to random vaules
for i in range(self.ni):
for j in range(self.nh):
self.wi[i][j] = rand(-0.2, 0.2)
for j in range(self.nh):
for k in range(self.no):
self.wo[j][k] = rand(-2.0, 2.0)
# last change in weights for momentum
self.ci = makeMatrix(self.ni, self.nh)
self.co = makeMatrix(self.nh, self.no)
def backPropagate(self, targets, N, M):
if len(targets) != self.no:
print(targets)
raise ValueError('wrong number of target values')
# calculate error terms for output
#output_deltas = [0.0] * self.no
output_deltas = np.zeros(self.no)
for k in range(self.no):
error = targets[k]-self.ao[k]
output_deltas[k] = dsigmoid(self.ao[k]) * error
# calculate error terms for hidden
#hidden_deltas = [0.0] * self.nh
hidden_deltas = np.zeros(self.nh)
for j in range(self.nh):
error = 0.0
for k in range(self.no):
error = error + output_deltas[k]*self.wo[j][k]
hidden_deltas[j] = dsigmoid(self.ah[j]) * error
# update output weights
for j in range(self.nh):
for k in range(self.no):
change = output_deltas[k]*self.ah[j]
self.wo[j][k] = self.wo[j][k] + N*change + M*self.co[j][k]
self.co[j][k] = change
#print N*change, M*self.co[j][k]
# update input weights
for i in range(self.ni):
for j in range(self.nh):
change = hidden_deltas[j]*self.ai[i]
self.wi[i][j] = self.wi[i][j] + N*change + M*self.ci[i][j]
self.ci[i][j] = change
# calculate error
error = 0.0
for k in range(len(targets)):
error = error + 0.5*(targets[k]-self.ao[k])**2
return error
def test(self, patterns):
self.predict = np.empty([len(patterns), self.no])
for i, p in enumerate(patterns):
self.predict[i] = self.activate(p)
#self.predict[i] = self.activate(p[0])
def weights(self):
print('Input weights:')
for i in range(self.ni):
print(self.wi[i])
print('Output weights:')
for j in range(self.nh):
print(self.wo[j])
def activate(self, inputs):
if len(inputs) != self.ni-1:
print(inputs)
raise ValueError('wrong number of inputs')
# input activations
for i in range(self.ni-1):
#self.ai[i] = sigmoid(inputs[i])
self.ai[i] = inputs[i]
# hidden activations
for j in range(self.nh):
sum = 0.0
for i in range(self.ni):
sum = sum + self.ai[i] * self.wi[i][j]
self.ah[j] = sigmoid(sum)
# output activations
for k in range(self.no):
sum = 0.0
for j in range(self.nh):
sum = sum + self.ah[j] * self.wo[j][k]
self.ao[k] = sigmoid(sum)
return self.ao[:]
def train(self, patterns, iterations=1000, N=0.5, M=0.1):
# N: learning rate
# M: momentum factor
patterns = list(patterns)
for i in range(iterations):
error1 = 0.0
#j = 0
for p in patterns:
inputs = p[0]
targets = p[1]
self.activate(inputs)
error1 = error1 + self.backPropagate([targets], N, M)
#j= j+1
#print (j)
#self.weights()
#if i % 5 == 0:
            print('error in iteration %d : %-.5f' % (i, error1))
#print('Final training error: %-.5f' % error1)
"""
Explanation: Our NN class
When we first create a neural networks architecture, we need to know the number of inputs, number of hidden layers and number of outputs.
The weights have to be randomly initialized.
End of explanation
"""
# Helper function to plot a decision boundary.
# This generates the contour plot to show the decision boundary visually
def plot_decision_boundary(nn_model):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Predict the function value for the whole grid
nn_model.test(np.c_[xx.ravel(), yy.ravel()])
Z = nn_model.predict
Z[Z>=0.5] = 1
Z[Z<0.5] = 0
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], s=40, c=y, cmap=plt.cm.BuGn)
"""
Explanation: Let's visualize and observe the resultset
End of explanation
"""
n = NN(2, 4, 1)
n.weights()  # weights() prints directly, so no outer print() is needed
"""
Explanation: Create a neural network with 1 hidden layer.
End of explanation
"""
print ("prediction")
print ("y=1 --- yhat=",n.activate([2.067788, 0.258133]))
print ("y=1 --- yhat=",n.activate([0.993994, 0.258133]))
print ("y=0 --- yhat=",n.activate([-0.690315, 0.749921]))
print ("y=0 --- yhat=",n.activate([1.023582, 0.529003]))
print ("y=1 --- yhat=",n.activate([0.700747, -0.496724]))
"""
Explanation: Data Set
(X1,X2) = (2.067788 0.258133), y=1
(X1,X2) = (0.993994 -0.609145), y=1
(X1,X2) = (-0.690315 0.749921), y=0
(X1,X2) = (1.023582 0.529003), y=0
(X1,X2) = (0.700747 -0.496724), y=1
End of explanation
"""
%timeit -n 1 -r 1 n.train(zip(X,y), iterations=1000)
plot_decision_boundary(n)
plt.title("Our next model with 4 hidden units")
n.weights()  # weights() prints directly, so no outer print() is needed
"""
Explanation: Train the neural network, i.e. estimate the weights while minimizing the error
End of explanation
"""
print ("prediction")
print ("y=1 --- yhat=",n.activate([2.067788, 0.258133]))
print ("y=1 --- yhat=",n.activate([0.993994, 0.258133]))
print ("y=0 --- yhat=",n.activate([-0.690315, 0.749921]))
print ("y=0 --- yhat=",n.activate([1.023582, 0.529003]))
print ("y=1 --- yhat=",n.activate([0.700747, -0.496724]))
"""
Explanation: Data Set
(X1,X2) = (2.067788 0.258133), y=1
(X1,X2) = (0.993994 -0.609145), y=1
(X1,X2) = (-0.690315 0.749921), y=0
(X1,X2) = (1.023582 0.529003), y=0
(X1,X2) = (0.700747 -0.496724), y=1
End of explanation
"""
|
tpin3694/tpin3694.github.io | sql/ignoring_null_values.ipynb | mit | # Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
"""
Explanation: Title: Ignoring Null or Missing Values
Slug: ignoring_null_values
Summary: Ignoring Null or Missing Values in SQL.
Date: 2017-01-16 12:00
Category: SQL
Tags: Basics
Authors: Chris Albon
Note: This tutorial was written using Catherine Devlin's SQL in Jupyter Notebooks library. If you are not using a Jupyter Notebook, you can ignore the two lines of code below and any line containing %%sql. Furthermore, this tutorial uses SQLite's flavor of SQL; your version might have some differences in syntax.
For more, check out Learning SQL by Alan Beaulieu.
End of explanation
"""
%%sql
-- Create a table of criminals
CREATE TABLE criminals (pid, name, age, sex, city, minor);
INSERT INTO criminals VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals VALUES (234, NULL, 22, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (632, NULL, 23, 'F', 'San Francisco', 0);
INSERT INTO criminals VALUES (901, 'Gordon Ado', 32, 'F', 'San Francisco', 0);
INSERT INTO criminals VALUES (512, 'Bill Byson', 21, 'M', 'Petaluma', 0);
"""
Explanation: Create Data
End of explanation
"""
%%sql
-- Select name and age,
SELECT name, age
-- from the table 'criminals',
FROM criminals
-- if the name is not a null value
WHERE name IS NOT NULL
"""
Explanation: Select Name And Ages Only When The Name Is Known
End of explanation
"""
|
UCSBarchlab/PyRTL | ipynb-examples/example4-debuggingtools.ipynb | bsd-3-clause | import random
import io
from pyrtl.rtllib import adders, multipliers
import pyrtl
pyrtl.reset_working_block()
random.seed(93729473) # used to make random calls deterministic for this example
"""
Explanation: Example 4: Debugging
Debugging is half the coding process in software, and in PyRTL, it's no
different. PyRTL provides some additional challenges when it comes to
debugging as a problem may surface long after the error was made. Fortunately,
PyRTL comes with various features to help you find mistakes.
End of explanation
"""
# building three inputs
in1, in2, in3 = (pyrtl.Input(8, "in" + str(x)) for x in range(1, 4))
out = pyrtl.Output(10, "out")
add1_out = adders.kogge_stone(in1, in2)
add2_out = adders.kogge_stone(add1_out, in3)
out <<= add2_out
"""
Explanation: This example covers debugging strategies for PyRTL. For general python debugging
we recommend healthy use of the "assert" statement, and use of "pdb" for
tracking down bugs. However, PyRTL introduces some new complexities because
the place where functionality is defined (when you construct and operate
on PyRTL classes) is separate in time from where that functionality is executed
(i.e. during simulation). Thus, sometimes it is hard to track down where a wire
might have come from, or what exactly it is doing.
In this example specifically, we will be building a circuit that adds up three values.
However, instead of building an add function ourselves or using the
built-in "+" function in PyRTL, we will instead use the Kogge-Stone adders
in RtlLib, the standard library for PyRTL.
End of explanation
"""
debug_out = pyrtl.Output(9, "debug_out")
debug_out <<= add1_out
"""
Explanation: The most basic way of debugging PyRTL is to connect a value to an output wire
and use the simulation to trace the output. A simple "print" statement doesn't work
because the values in the wires are not populated during creation time
If we want to check the result of the first addition, we can connect an output wire
to the result wire of the first adder
End of explanation
"""
vals1 = [int(2**random.uniform(1, 8) - 2) for _ in range(20)]
vals2 = [int(2**random.uniform(1, 8) - 2) for _ in range(20)]
vals3 = [int(2**random.uniform(1, 8) - 2) for _ in range(20)]
sim_trace = pyrtl.SimulationTrace()
sim = pyrtl.Simulation(tracer=sim_trace)
for cycle in range(len(vals1)):
sim.step({
'in1': vals1[cycle],
'in2': vals2[cycle],
'in3': vals3[cycle]})
"""
Explanation: Now simulate the circuit. Let's create some random inputs to feed our adder.
End of explanation
"""
print("---- Inputs and debug_out ----")
print("in1: ", str(sim_trace.trace['in1']))
print("in2: ", str(sim_trace.trace['in2']))
print("debug_out: ", str(sim_trace.trace['debug_out']))
print('\n')
"""
Explanation: In order to get the result data, you do not need to print a waveform of the trace.
You always have the option to just pull the data out of the tracer directly.
End of explanation
"""
for i in range(len(vals1)):
assert(sim_trace.trace['debug_out'][i] == sim_trace.trace['in1'][i] + sim_trace.trace['in2'][i])
"""
Explanation: Below, I am using the ability to directly retrieve the trace data to
verify the correctness of the first adder
End of explanation
"""
pyrtl.reset_working_block()
"""
Explanation: Probe
Now that we have built some stuff, let's clear it so we can try again in a
different way. We can start by clearing all of the hardware from the current working
block. The working block is a global structure that keeps track of all the
hardware you have built thus far. A "reset" will clear it so we can start fresh.
End of explanation
"""
print("---- Using Probes ----")
in1, in2 = (pyrtl.Input(8, "in" + str(x)) for x in range(1, 3))
out1, out2 = (pyrtl.Output(8, "out" + str(x)) for x in range(1, 3))
multout = multipliers.tree_multiplier(in1, in2)
# The following line will create a probe named "std_probe" for later use, like an output.
pyrtl.probe(multout, 'std_probe')
"""
Explanation: In this example, we will be multiplying two numbers using tree_multiplier()
Again, create the two inputs and an output
End of explanation
"""
out1 <<= pyrtl.probe(multout, 'stdout_probe') * 2
"""
Explanation: We could also do the same thing during assignment. The next command will
create a probe (named 'stdout_probe') that refers to multout (returns the wire multout).
This achieves virtually the same thing as 4 lines above, but it is done during assignment,
so we skip a step by probing the wire before the multiplication.
The probe returns multout, the original wire, and out will be assigned multout * 2
End of explanation
"""
pyrtl.probe(multout + 32, 'adder_probe')
pyrtl.probe(multout[2:7], 'select_probe')
out2 <<= pyrtl.probe(multout)[2:16] # notice probe names are not absolutely necessary
"""
Explanation: Probe can also be used with other operations like this:
End of explanation
"""
vals1 = [int(2**random.uniform(1, 8) - 2) for _ in range(10)]
vals2 = [int(2**random.uniform(1, 8) - 2) for _ in range(10)]
sim_trace = pyrtl.SimulationTrace()
sim = pyrtl.Simulation(tracer=sim_trace)
for cycle in range(len(vals1)):
sim.step({
'in1': vals1[cycle],
'in2': vals2[cycle]})
"""
Explanation: As one can see, probe can be used on any wire any time,
such as before or during its operation, assignment, etc.
Now on to the simulation...
For variation, we'll recreate the random inputs:
End of explanation
"""
sim_trace.render_trace()
sim_trace.print_trace()
"""
Explanation: Now we will show the values of the inputs and probes
and look at that, we didn't need to make any outputs!
(although we did, to demonstrate the power and convenience of probes)
End of explanation
"""
print("--- Probe w/ debugging: ---")
pyrtl.set_debug_mode()
pyrtl.probe(multout - 16, 'debugsubtr_probe')
pyrtl.set_debug_mode(debug=False)
"""
Explanation: Say we wanted more information about
one of the probes above at the time of its declaration.
We could have used pyrtl.set_debug_mode() before its creation, like so:
End of explanation
"""
pyrtl.set_debug_mode()
test_out = pyrtl.Output(9, "test_out")
test_out <<= adders.kogge_stone(in1, in2)
"""
Explanation: WireVector Stack Trace
Another case that might arise is that a certain wire is causing an error to occur
in your program. WireVector Stack Traces allow you to find out more about where a particular
WireVector was made in your code. With this enabled the WireVector will
store exactly where it was created, which should help with issues where
there is a problem with an identified wire.
Like above, just add the following line before the relevant WireVector
might be made or at the beginning of the program.
End of explanation
"""
wire_trace = test_out.init_call_stack
"""
Explanation: Now to retrieve information:
End of explanation
"""
print("---- Stack Trace ----")
for frame in wire_trace:
print(frame)
"""
Explanation: This data is generated using the traceback.format_stack() call from the Python
standard library's Traceback module (look at the Python standard library docs for
details on the function). Therefore, the stack traces are stored as a list with the
outermost call first.
End of explanation
"""
dummy_wv = pyrtl.WireVector(1, name="blah")
"""
Explanation: Storage of Additional Debug Data
WARNING: the debug information generated by the following two processes are
not guaranteed to be preserved when functions (eg. pyrtl.synthesize() ) are
done over the block.
However, if the stack trace does not give you enough information about the
WireVector, you can also embed additional information into the wire itself.
Two ways of doing so is either through manipulating the name of the
WireVector, or by adding your own custom metadata to the WireVector.
So far, each input and output WireVector have been given their own names, but
normal WireVectors can also be given names by supplying the name argument to
the constructor
End of explanation
"""
dummy_wv.my_custom_property_name = "John Clow is great"
dummy_wv.custom_value_028493 = 13
# removing the WireVector from the block to prevent problems with the rest of
# this example
pyrtl.working_block().remove_wirevector(dummy_wv)
"""
Explanation: Also, because of the flexible nature of Python, you can also add custom
properties to the WireVector.
End of explanation
"""
pyrtl.working_block().sanity_check()
pyrtl.passes._remove_unused_wires(pyrtl.working_block()) # so that trivial_graph() will work
print("--- Trivial Graph Format ---")
with io.StringIO() as tgf:
pyrtl.output_to_trivialgraph(tgf)
print(tgf.getvalue())
"""
Explanation: Trivial Graph Format
Finally, there is a handy way to view your hardware creations as a graph.
The function output_to_trivialgraph will render your hardware in a format that
you can then open with the free software "yEd"
(http://en.wikipedia.org/wiki/YEd). There are options under the
"hierarchical" rendering to draw something that looks quite like a circuit.
End of explanation
"""
|