repo_name | path | license | content
|---|---|---|---|
Mdround/fastai-deeplearning1 | deeplearning1/nbs/lesson1.ipynb | apache-2.0 | %matplotlib inline
"""
Explanation: Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task: 'Dogs vs Cats'
We're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle website, when this competition was launched (end of 2013): "State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task". So if we can beat 80%, then we will be at the cutting edge as of 2013!
Basic setup
There isn't too much to do to get started - just a few simple configuration steps.
This shows plots in the web page itself - we always want to use this when using jupyter notebook:
End of explanation
"""
path = "data/dogscats/"
#path = "data/dogscats/sample/"
"""
Explanation: Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)
End of explanation
"""
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
"""
Explanation: A few basic libraries that we'll need for the initial exercises:
End of explanation
"""
import utils
from imp import reload # fixes a P2-P3 incompatibility
reload(utils)
from utils import plots
"""
Explanation: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
End of explanation
"""
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
batch_size=64
# Import our class, and instantiate
import vgg16; reload(vgg16)
from vgg16 import Vgg16
??Vgg16.get_batches
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
print(path+'train')
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)
"""
Explanation: Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline: state of the art custom model in 7 lines of code
Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
End of explanation
"""
vgg = Vgg16()
"""
Explanation: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
First, create a Vgg16 object:
End of explanation
"""
batches = vgg.get_batches(path+'train', batch_size=4)
"""
Explanation: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder:
End of explanation
"""
imgs,labels = next(batches)
"""
Explanation: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
End of explanation
"""
plots(imgs, titles=labels)
"""
Explanation: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where the array contains just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.
The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
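One hot encoding is easy to reproduce in plain NumPy; the labels below are made up for illustration:

```python
import numpy as np

# Hypothetical labels for four images: 0 = cat, 1 = dog
labels = np.array([0, 1, 1, 0])
n_classes = 2

# Row i of the identity matrix is the one hot vector for class i,
# so indexing with the label array builds the whole encoding at once
one_hot = np.eye(n_classes)[labels]
print(one_hot)
# [[1. 0.]
#  [0. 1.]
#  [0. 1.]
#  [1. 0.]]
```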
End of explanation
"""
vgg.predict(imgs, True)
"""
Explanation: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
End of explanation
"""
vgg.classes[:4]
"""
Explanation: The category indexes are based on the ordering of categories used in the VGG model - e.g. here are the first four:
End of explanation
"""
batch_size=64
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
"""
Explanation: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
We create our batches just like before, and make the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
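The slicing of a dataset into mini-batches can be sketched with a small generator (the array sizes here are arbitrary; Keras does this - plus shuffling and image loading - for us):

```python
import numpy as np

def minibatches(X, y, batch_size):
    # Yield successive (X, y) chunks of at most batch_size rows
    for start in range(0, len(X), batch_size):
        yield X[start:start + batch_size], y[start:start + batch_size]

X = np.arange(20).reshape(10, 2)   # 10 samples, 2 features
y = np.arange(10)
sizes = [len(xb) for xb, yb in minibatches(X, y, batch_size=4)]
print(sizes)   # [4, 4, 2]
```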
End of explanation
"""
vgg.finetune(batches)
"""
Explanation: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
End of explanation
"""
vgg.fit(batches, val_batches, nb_epoch=1)
"""
Explanation: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
End of explanation
"""
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential, Model
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers import Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
"""
Explanation: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras:
End of explanation
"""
FILES_PATH = 'http://www.platform.ai/models/'; CLASS_FILE='imagenet_class_index.json'
# Keras' get_file() is a handy function that downloads files, and caches them for re-use later
fpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')
with open(fpath) as f:
    class_dict = json.load(f)
# Convert dictionary with string indexes into an array
classes = [class_dict[str(i)][1] for i in range(len(class_dict))]
type(classes)
test = [class_dict[str(i)][1] for i in range(5)]
test
"""
Explanation: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
End of explanation
"""
classes[:5]
"""
Explanation: Here's a few examples of the categories we just imported:
End of explanation
"""
def ConvBlock(layers, model, filters):
    for i in range(layers):
        model.add(ZeroPadding2D((1,1)))
        model.add(Convolution2D(filters, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))
"""
Explanation: Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:
End of explanation
"""
def FCBlock(model):
    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))
"""
Explanation: ...and here's the fully-connected definition.
End of explanation
"""
# Mean of each channel as provided by VGG researchers
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))
def vgg_preprocess(x):
    x = x - vgg_mean     # subtract channel-wise mean
    return x[:, ::-1]    # reverse channel axis: rgb -> bgr
"""
Explanation: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:
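Here is a quick numeric sanity check of both steps, repeating the function with a tiny fake batch instead of real images (values chosen by hand):

```python
import numpy as np

vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3, 1, 1))

def vgg_preprocess(x):
    x = x - vgg_mean   # zero-center each channel
    return x[:, ::-1]  # reverse the channel axis: RGB -> BGR

# One fake 'image' of shape (channels, height, width) = (3, 1, 1),
# wrapped in a batch dimension, as the model expects
batch = np.array([[[[123.68]], [[116.779]], [[203.939]]]])
out = vgg_preprocess(batch)
print(out[0, :, 0, 0])   # B channel first now: 100, then 0, 0
```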
End of explanation
"""
def VGG_16():
    model = Sequential()
    # model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))
    model.add(Lambda(vgg_preprocess, input_shape=(3,224,224), output_shape=(3,224,224))) # following ...
    # ... http://forums.fast.ai/t/warning-output-shape-argument-not-specified/416/11
    ConvBlock(2, model, 64)
    ConvBlock(2, model, 128)
    ConvBlock(3, model, 256)
    ConvBlock(3, model, 512)
    ConvBlock(3, model, 512)
    model.add(Flatten())
    FCBlock(model)
    FCBlock(model)
    model.add(Dense(1000, activation='softmax'))
    return model
"""
Explanation: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
End of explanation
"""
model = VGG_16()
"""
Explanation: We'll learn about what these different blocks do later in the course. For now, it's enough to know that:
Convolution layers are for finding patterns in images
Dense (fully connected) layers are for combining patterns across an image
Now that we've defined the architecture, we can create the model like any python object:
End of explanation
"""
fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')
model.load_weights(fpath)
"""
Explanation: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
End of explanation
"""
batch_size = 4
"""
Explanation: Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.
End of explanation
"""
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True,
                batch_size=batch_size, class_mode='categorical'):
    return gen.flow_from_directory(path+dirname, target_size=(224,224),
                                   class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
"""
Explanation: Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:
End of explanation
"""
batches = get_batches('train', batch_size=batch_size)
val_batches = get_batches('valid', batch_size=batch_size)
imgs,labels = next(batches)
# This shows the 'ground truth'
plots(imgs, titles=labels)
"""
Explanation: From here we can use exactly the same steps as before to look at predictions from the model.
End of explanation
"""
def pred_batch(imgs):
    preds = model.predict(imgs)
    idxs = np.argmax(preds, axis=1)
    print('Shape: {}'.format(preds.shape))
    print('First 5 classes: {}'.format(classes[:5]))
    print('First 5 probabilities: {}\n'.format(preds[0, :5]))
    print('More classes: {}'.format(classes[161:169]))
    print('Their probabilities: {}\n'.format(preds[0, 161:169]))
    print('Predictions prob/class: ')
    for i in range(len(idxs)):
        idx = idxs[i]
        #print('  {:.4f}/{}'.format(preds[i, idx], classes[idx]))
        print('  {:.4f}/{} ({})'.format(preds[i, idx], classes[idx], idx))
pred_batch(imgs)
"""
Explanation: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label.
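Here is the argmax step in isolation, with made-up probabilities for two images over four classes:

```python
import numpy as np

# Hypothetical prediction matrix: one row per image, one column per class
preds = np.array([[0.10, 0.60, 0.20, 0.10],
                  [0.30, 0.10, 0.10, 0.50]])
idxs = np.argmax(preds, axis=1)   # index of the largest probability per row
print(idxs)   # [1 3]
```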
End of explanation
"""
|
nproctor/phys202-2015-work | assignments/assignment10/ODEsEx02.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
"""
Explanation: Ordinary Differential Equations Exercise 1
Imports
End of explanation
"""
def lorentz_derivs(yvec, t, sigma, rho, beta):
    """Compute the derivatives for the Lorenz system at yvec(t)."""
    x = yvec[0]
    y = yvec[1]
    z = yvec[2]
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return np.array([dx, dy, dz])
assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])
"""
Explanation: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read:
$$ \frac{dx}{dt} = \sigma(y-x) $$
$$ \frac{dy}{dt} = x(\rho-z) - y $$
$$ \frac{dz}{dt} = xy - \beta z $$
The solution vector is $[x(t),y(t),z(t)]$ and $\sigma$, $\rho$, and $\beta$ are parameters that govern the behavior of the solutions.
Write a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system.
End of explanation
"""
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Solve the Lorenz system for a single initial condition.

    Parameters
    ----------
    ic : array, list, tuple
        Initial conditions [x,y,z].
    max_time: float
        The max time to use. Integrate with 250 points per time unit.
    sigma, rho, beta: float
        Parameters of the differential equation.

    Returns
    -------
    soln : np.ndarray
        The array of the solution. Each row will be the solution vector at that time.
    t : np.ndarray
        The array of time points used.
    """
    t = np.linspace(0, max_time, int(250*max_time))
    soln = odeint(lorentz_derivs, ic, t, args=(sigma, rho, beta))
    return (soln, t)
assert True # leave this to grade solve_lorenz
"""
Explanation: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
End of explanation
"""
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
    # To use these colors with plt.plot, pass them as the color argument
    print(colors[i])
def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Plot [x(t),z(t)] for the Lorenz system.

    Parameters
    ----------
    N : int
        Number of initial conditions and trajectories to plot.
    max_time: float
        Maximum time to use.
    sigma, rho, beta: float
        Parameters of the differential equation.
    """
    np.random.seed(1)
    colors = plt.cm.hot(np.linspace(0,1,N))
    x = []
    y = []
    z = []
    for i in range(N):
        x.append(np.random.uniform(-15.0, 15.0))
        y.append(np.random.uniform(-15.0, 15.0))
        z.append(np.random.uniform(-15.0, 15.0))
    X = np.array(x)
    Y = np.array(y)
    Z = np.array(z)
    ic = np.transpose(np.vstack((X,Y,Z)))
    for i in range(len(X)):
        soln = solve_lorentz(ic[i], max_time, sigma, rho, beta)[0]
        soln2 = np.transpose(soln)
        plt.plot(soln2[0], soln2[2], color=colors[i])
    plt.xlabel("x(t)")
    plt.ylabel("z(t)")
    plt.title("Lorentz Plot")
    ax = plt.gca()
    ax.spines['top'].set_color('white')
    ax.spines['right'].set_color('white')
    plt.show()

plot_lorentz()
assert True # leave this to grade the plot_lorenz function
"""
Explanation: Write a function plot_lorentz that:
Solves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time.
Plot $[x(t),z(t)]$ using a line to show each trajectory.
Color each line using the hot colormap from Matplotlib.
Label your plot and choose an appropriate x and y limit.
The following cell shows how to generate colors that can be used for the lines:
End of explanation
"""
interact(plot_lorentz, max_time = (1,10), N=(1,50), sigma=(0.0,50.0), rho=(0.0,50.0), beta=fixed(8/3));
"""
Explanation: Use interact to explore your plot_lorenz function with:
max_time an integer slider over the interval $[1,10]$.
N an integer slider over the interval $[1,50]$.
sigma a float slider over the interval $[0.0,50.0]$.
rho a float slider over the interval $[0.0,50.0]$.
beta fixed at a value of $8/3$.
End of explanation
"""
|
bMzi/ML_in_Finance | 0211_SVM.ipynb | mit | from IPython.display import YouTubeVideo
YouTubeVideo('3liCbRZPrZA')
"""
Explanation: Support Vector Machines
Motivating Support Vector Machines
Developing the Intuition
Support vector machines (SVM) are a powerful and flexible class of supervised algorithms. Developed in the 1990s, SVM have shown to perform well in a variety of settings which explains their popularity. Though the underlying mathematics can become somewhat complicated, the basic concept of a SVM is easily understood. Therefore, in what follows we develop an intuition, introduce the mathematical basics of SVM and ultimately look into how we can apply SVM with Python.
As an introductory example, borrowed from VanderPlas (2016), consider the following simplified two-dimensional classification task, where the two classes (indicated by the colors) are well separated.
<img src="Graphics/0211_SVM_Intro1.png" alt="SVM_Intro1" style="width: 1000px;"/>
A linear discriminant classifier as discussed in chapter 8 would attempt to draw a separating hyperplane (which in two dimensions is nothing but a line) in order to distinguish the two classes. For two-dimensional data, we could even do this by hand. However, one problem arises: there is more than one separating hyperplane between the two classes.
<img src="Graphics/0211_SVM_Intro2.png" alt="SVM_Intro2" style="width: 1000px;"/>
There exists an infinite number of possible hyperplanes that perfectly discriminate between the two classes in the training data. In the figure above we visualize but three of them. Depending on what hyperplane we choose, a new data point (e.g. the one marked by the red "X") will be assigned a different label. Yet, so far we have no decision criteria established to decide which one of the three hyperplanes we should choose.
How do we decide which line best separates the two classes? The idea of SVM is to add a margin of some width to both sides of each hyperplane - up to the nearest point. This might look something like this:
<img src="Graphics/0211_SVM_Intro3.png" alt="SVM_Intro3" style="width: 1000px;"/>
In SVM, the hyperplane that maximizes the margin to the nearest points is the one that is chosen as decision boundary. In other words, the maximum margin estimator is what we are looking for. Below figure shows the optimal solution for a (linear) SVM. Of all possible hyperplanes, the solid line has the largest margins (dashed lines) - measured from the decision boundary (solid line) to the nearest points (circled points).
<img src="Graphics/0211_SVM_Intro4.png" alt="SVM_Intro4" style="width: 1000px;"/>
Support Vector
The three circled sample points in above figure represent the nearest points. All three lie along the (dashed) margin line and in terms of perpendicular distance are equidistant from the decision boundary (solid line). Together they form the so-called support vector. The support vector "supports" the maximal margin hyperplane in the sense that if one of the observations were moved slightly, the maximal margin hyperplane would move as well. In other words, they dictate slope and intercept of the hyperplane. Interestingly, any points further from the margin that are on the correct side do not modify the decision boundary. For example points at $(x_1, x_2) = (2.5, 1)$ or $(1, 4.2)$ have no effect on the decision boundary. Technically, this is because these points do not contribute to the loss function used to fit the model, so their position and number do not matter so long as they do not cross the margin (VanderPlas (2016)). It is not surprising that computations are a lot faster if a model has only a few data points (in the support vector) to consider (James et al. (2013)).
Developing the Mathematical Intuition
Hyperplanes
To start, let us do a brief (and superficial) refresher on hyperplanes. In a $p$-dimensional space, a hyperplane is a flat (affine) subspace of dimension $p - 1$. Affine simply indicates that the subspace need not pass through the origin. As we have seen above, in two dimensions a hyperplane is just a line. In three dimensions it is a plane. For $p > 3$ visualization is hardly possible but the notion applies in similar fashion. Mathematically a $p$-dimensional hyperplane is defined by the expression
\begin{equation}
\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_p x_p = 0
\end{equation}
If a point $\mathbf{x}^* = (x^*_1, x^*_2, \ldots, x^*_p)^T$ (i.e. a vector of length $p$) satisfies the above equation, then $\mathbf{x}^*$ lies on the hyperplane. If $\mathbf{x}^{*}$ does not satisfy above equation but yields a value $>0$, that is
\begin{equation}
\beta_0 + \beta_1 x^*_1 + \beta_2 x^*_2 + \ldots + \beta_p x^*_p > 0
\end{equation}
then this tells us that $\mathbf{x}^*$ lies on one side of the hyperplane. Similarly,
\begin{equation}
\beta_0 + \beta_1 x^*_1 + \beta_2 x^*_2 + \ldots + \beta_p x^*_p < 0
\end{equation}
tells us that $\mathbf{x}^*$ lies on the other side of the plane.
Separating Hyperplanes
Suppose our training sample is an $n \times p$ data matrix $\mathbf{X}$ that consists of $n$ observations in $p$-dimensional space,
\begin{equation}
\mathbf{x}_1 =
\begin{pmatrix}
x_{11} \\
\vdots \\
x_{1p}
\end{pmatrix}, \; \ldots, \; \mathbf{x}_n =
\begin{pmatrix}
x_{n1} \\
\vdots \\
x_{np}
\end{pmatrix}
\end{equation}
and each observation falls into one of two classes: $y_1, \ldots, y_n \in \{-1, 1\}$. Then a separating hyperplane has the helpful property that
\begin{align}
f(x_i) = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \ldots + \beta_p x_{ip} \quad \text{is} \quad
\begin{cases}
> 0 & \quad \text{if } y_i = 1 \\
< 0 & \quad \text{if } y_i = -1
\end{cases}
\end{align}
Given such a hyperplane exists, it can be used to construct a very intuitive classifier: a test observation is assigned to a class based on the side of the hyperplane it lies. This means we simply calculate $f(x^*)$ and if the result is positive, we assign the test observation to class 1, and to class -1 otherwise.
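With made-up coefficients, the decision rule amounts to checking the sign of $f(\mathbf{x}^*)$:

```python
import numpy as np

beta0 = -1.0
beta = np.array([2.0, 1.0])   # hypothetical hyperplane: 2*x1 + x2 - 1 = 0

def classify(x):
    # +1 on one side of the hyperplane, -1 on the other
    f = beta0 + beta @ x
    return 1 if f > 0 else -1

print(classify(np.array([1.0, 1.0])))   # f = 2.0 > 0  -> 1
print(classify(np.array([0.0, 0.0])))   # f = -1.0 < 0 -> -1
```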
Maximal Margin Classifier
If our data can be perfectly separated, then - as alluded to above - there exist an infinite number of separating hyperplanes. Therefore we seek to maximize the margin to the closest training observations (support vector). The result is what we call the maximal margin hyperplane.
Let us consider how such a maximal margin hyperplane is constructed. We follow Raschka (2015) in deriving the objective function as this approach is appealing to the intuition. For a mathematically more sound derivation, see e.g. Friedman et al. (2001, chapter 4.5). As before we assume to have a set of $n$ training observations $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n \in \mathbb{R}^p$ with corresponding class labels $y_1, y_2, \ldots, y_n \in \{-1, 1\}$. We have introduced the hyperplane as our decision boundary above. Here is the same in vector notation, where $\mathbf{\beta}$ and $\mathbf{x}$ are vectors of dimension $[p \times 1]$:
\begin{equation}
\beta_0 + \mathbf{\beta}^T \mathbf{x}_{\text{hyper}} = 0
\end{equation}
This way of writing is much more concise and therefore we will stick to it moving forward. Let us further define the positive and negative margin hyperplanes, which lie parallel to the decision boundary:
\begin{align}
\beta_0 + \mathbf{\beta}^T \mathbf{x}_{\text{pos}} &= 1 &\text{pos. margin} \\
\beta_0 + \mathbf{\beta}^T \mathbf{x}_{\text{neg}} &= -1 &\text{neg. margin}
\end{align}
Below you find a visual representation of the above. Notice that the two margin hyperplanes are parallel to the decision boundary and share the same values of $\beta_0$ and $\mathbf{\beta}$.
<img src="Graphics/0211_SVM_Intro5.png" alt="SVM_Intro5" style="width: 1000px;"/>
If we subtract the equation for the negative margin from the positive, we get:
\begin{equation}
\mathbf{\beta}^T (\mathbf{x}_{\text{pos}} - \mathbf{x}_{\text{neg}}) = 2
\end{equation}
Let us normalize both sides of the equation by the length of the vector $\mathbf{\beta}$, that is the norm, which is defined as follows:
\begin{equation}
\Vert \mathbf{\beta} \Vert := \sqrt{\sum_{i=1}^p \beta_i^2}
\end{equation}
With that we arrive at the following expression:
\begin{equation}
\frac{\mathbf{\beta}^T (\mathbf{x}_{\text{pos}} - \mathbf{x}_{\text{neg}})}{\Vert \mathbf{\beta}\Vert} = \frac{2}{\Vert \mathbf{\beta} \Vert}
\end{equation}
The left side of the equation can be interpreted as the normalized distance between the positive (upper) and negative (lower) margin. This distance we aim to maximize. Since maximizing the left-hand side of the above expression is equivalent to maximizing the right-hand side, we can summarize this in the following optimization problem:
\begin{equation}
\begin{aligned}
& \underset{\beta_0, \beta_1, \ldots, \beta_p}{\text{maximize}}
& & \frac{2}{\Vert \mathbf{\beta} \Vert} \\
& \text{subject to} & & \beta_0 + \mathbf{\beta}^T \mathbf{x}_{i} \geq \;\; 1 \quad \text{if } y_i = 1 \\
&&& \beta_0 + \mathbf{\beta}^T \mathbf{x}_{i} \leq -1 \quad \text{if } y_i = -1 \\
&&& \text{for } i = 1, \ldots, N.
\end{aligned}
\end{equation}
The two constraints make sure that all positive samples ($y_i = 1$) fall on or above the positive side of the positive margin hyperplane and all negative samples ($y_i = -1$) are on or below the negative margin hyperplane. A few tweaks allow us to write the two constraints as one. We show this by transforming the second constraint, in which case $y_i = -1$:
\begin{align}
\beta_0 + \mathbf{\beta}^T \mathbf{x}_i &\leq -1 \\
\Leftrightarrow \qquad y_i (\beta_0 + \mathbf{\beta}^T \mathbf{x}_i) &\geq (-1)y_i \\
\Leftrightarrow \qquad y_i (\beta_0 + \mathbf{\beta}^T \mathbf{x}_i) &\geq 1
\end{align}
The same can be done for the first constraint - it will yield the same expression. Therefore, our maximization problem can be restated in a slightly simpler form:
\begin{equation}
\begin{aligned}
& \underset{\beta_0, \beta_1, \ldots, \beta_p}{\text{maximize}}
& & \frac{2}{\Vert \mathbf{\beta} \Vert} \\
& \text{subject to} & & y_i(\beta_0 + \mathbf{\beta}^T \mathbf{x}_{i}) \geq 1 \quad \text{for } i = 1, \ldots, N.
\end{aligned}
\end{equation}
This is a convex optimization problem (quadratic criterion with linear inequality constraints) and can be solved with Lagrange. For details refer to appendix (D1) of the script.
Note that in practice it is easier to minimize the reciprocal of the margin term in its squared form, $\frac{1}{2} \Vert\mathbf{\beta} \Vert^2$. Therefore the objective function is often given as
\begin{equation}
\begin{aligned}
& \underset{\beta_0, \beta}{\text{minimize}}
& & \frac{1}{2}\Vert \mathbf{\beta} \Vert^2 \\
& \text{subject to} & & y_i(\beta_0 + \mathbf{\beta}^T \mathbf{x}_{i}) \geq 1 \quad \text{for } i = 1, \ldots, N.
\end{aligned}
\end{equation}
This transformation does not change the optimization problem, yet at the same time it is computationally easier to handle by quadratic programming. A detailed discussion of quadratic programming goes beyond the scope of this course. For details, see e.g. Vapnik (2000) or Burges (1998).
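As a concrete illustration of the objective, here is the margin width $2/\Vert\mathbf{\beta}\Vert$ for an arbitrary, made-up coefficient vector:

```python
import numpy as np

beta = np.array([3.0, 4.0])            # hypothetical coefficient vector
norm = np.sqrt(np.sum(beta ** 2))      # ||beta|| = 5.0
margin_width = 2 / norm                # 2 / ||beta|| = 0.4
print(norm, margin_width)
```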
Support Vector Classifier
Non-Separable Data
Given our data is separable into two classes, the maximal margin classifier from before seems like a natural approach. However, it is easy to see that when the data is not clearly discriminable, no separable hyperplane exists and therefore such a classifier does not exist. In that case the above maximization problem has no solution. What makes the situation even more complicated is that the maximal margin classifier is very sensitive to changes in the support vectors. This means that this classifier might suffer from inappropriate sensitivity to individual observations and thus it has a substantial risk of overfitting the training data. That is why we might be willing to consider a classifier on a hyperplane that does not perfectly separate the two classes but allows for greater robustness to individual observations and better classification of most of the training observations. In other words it could be worthwhile to misclassify a few training observations in order to do a better job in classifying the test data (James et al. (2013)).
Details of the Support Vector Classifier
This is where the Support Vector Classifier (SVC) comes into play. It allows a certain number of observations to be on the 'wrong' side of the hyperplane while seeking a solution where the majority of data points are still on the 'correct' side of the hyperplane. The following figure visualizes this.
<img src="Graphics/0211_SVM_Intro6.png" alt="SVM_Intro6" style="width: 1000px;"/>
The SVC still classifies a test observation based on which side of a hyperplane it lies. However, when we train the model, the margins are now somewhat softened. This means that the model allows for a limited number of training observations to be on the wrong side of the margin and hyperplane, respectively.
Let us briefly discuss in general terms how the support vector classifier reaches its optimal solution. For this we extend the optimization problem from the maximum margin classifier as follows:
\begin{equation}
\begin{aligned}
& \underset{\beta_0, \beta}{\text{minimize}}
& & \frac{1}{2}\Vert \mathbf{\beta} \Vert^2 + C \left(\sum_{i=1}^N \epsilon_i \right) \\
& \text{subject to} & & y_i(\beta_0 + \mathbf{\beta}^T \mathbf{x}_{i}) \geq 1-\epsilon_i \quad \text{for } i = 1, \ldots, N, \\
& & & \epsilon_i \geq 0 \quad \forall i
\end{aligned}
\end{equation}
This, again, can be solved with Lagrange similar to the way shown for the maximum margin classifier (see appendix (D1)), and it is left to the reader as an exercise to derive the Lagrange (primal and dual) objective functions. Impatient readers will find a solution draft in Friedman et al. (2001), section 12.2.1.
Let us now focus on the added term $C \left(\sum_{i=1}^N \epsilon_i \right)$. Here, $\epsilon_1, \epsilon_2, \ldots, \epsilon_N$ are slack variables that allow individual observations to be on the wrong side of the margin or the hyperplane. They contain information on where the $i$th observation is located relative to the hyperplane and the margin.
* If $\epsilon_i = 0$, the $i$th observation is on the correct side of the margin;
* if $0 < \epsilon_i \leq 1$, it is on the wrong side of the margin but the correct side of the hyperplane;
* if $\epsilon_i > 1$, it is on the wrong side of the hyperplane.
The tuning parameter $C$ can be interpreted as a penalty factor for misclassification. It is defined by the user. Large values of $C$ correspond to a significant error penalty, whereas small values are used if we are less strict about misclassification errors. By controlling for $C$ we indirectly control for the margin and therefore actively tune the bias-variance trade-off. Decreasing the value of $C$ increases the bias but lowers the variance of the model.
The figure below shows how $C$ impacts the decision boundary and its corresponding margin.
<img src="Graphics/0211_SVM_Intro7.png" alt="SVM_Intro7" style="width: 1000px;"/>
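As a hedged illustration of this effect (the blob data set and the two C values are my own choices, not from the script), one can compare how many support vectors a linear SVC retains for a small versus a large C: a softer margin (small C) typically pulls more observations into or near the margin.

```python
# Sketch: smaller C -> softer/wider margin -> typically more support vectors
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=200, centers=2, cluster_std=2.0, random_state=0)

n_sv = {}
for C in [0.01, 100]:
    clf = SVC(kernel='linear', C=C).fit(X, y)
    n_sv[C] = int(clf.n_support_.sum())  # support vectors over both classes

print(n_sv)
```

The count for `C=0.01` comes out at least as large as for `C=100`, mirroring the wider margin in the left panel of the figure.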
Solving Nonlinear Problems
So far we worked with data that is linearly separable. What makes SVM so powerful and popular is that it can be kernelized to solve nonlinear classification problems. We start our discussion again with illustrations to build an intuition.
<img src="Graphics/0211_SVM_kernel1.png" alt="SVM_kernel1" style="width: 1000px;"/>
Clearly the data is not linearly separable and the resulting (linear) decision boundary is useless. How, then, do we deal with this? With mapping functions. The basic idea is to project the data via some mapping function $\phi$ onto a higher dimension such that a linear separator is sufficient. The idea is similar to using quadratic and cubic terms of the predictor in linear regression in order to address non-linearity $(y = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \beta_3 x_i^3 + \ldots)$. For example, for the data in the preceding figure we could use the following mapping function $\phi: \mathbb{R}^2 \rightarrow \mathbb{R}^3$:
\begin{equation}
\phi(x_1, x_2) = (z_1, z_2, z_3) = \left(x_1, x_2, x_1^2 + x_2^2 \right)
\end{equation}
<img src="Graphics/0211_SVM_kernel2.png" alt="SVM_kernel2" style="width: 1000px;"/>
Here we enlarge our feature space from $\mathbb{R}^2 \rightarrow \mathbb{R}^3$ in order to accommodate a non-linear boundary. The transformed data becomes trivially linearly separable: all we have to do is find a plane in $\mathbb{R}^3$. If we project this decision boundary back onto the original feature space $\mathbb{R}^2$ (with $\phi^{-1}$), we obtain a nonlinear decision boundary.
<img src="Graphics/0211_SVM_kernel3.png" alt="SVM_kernel3" style="width: 1000px;"/>
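The figures above can be reproduced numerically. A minimal sketch (the make_circles data and the LinearSVC settings are assumptions, not from the script): in $\mathbb{R}^2$ a linear classifier fails on concentric circles, but after applying $\phi(x_1, x_2) = (x_1, x_2, x_1^2 + x_2^2)$ it separates the classes almost perfectly in $\mathbb{R}^3$.

```python
# Sketch: lift circular data to R^3 via phi(x1, x2) = (x1, x2, x1^2 + x2^2)
import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import LinearSVC

X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# Linear SVM in the original feature space R^2
acc_r2 = LinearSVC(C=1.0, max_iter=10000).fit(X, y).score(X, y)

# Same linear SVM after the mapping phi: R^2 -> R^3
Z = np.column_stack([X[:, 0], X[:, 1], X[:, 0]**2 + X[:, 1]**2])
acc_r3 = LinearSVC(C=1.0, max_iter=10000).fit(Z, y).score(Z, y)

print(acc_r2, acc_r3)
```

In the lifted space the separating plane is essentially a threshold on the third coordinate $x_1^2 + x_2^2$, i.e. the squared radius.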
Here's an animated visualization of this concept.
End of explanation
"""
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
plt.rcParams['font.size'] = 14
# Load data
df = pd.read_csv('Data/5year.csv', sep=',')
df.head()
# Check for NA values
df.isnull().sum()
# Calculate % of missing values for 'Attr37'
df['Attr37'].isnull().sum() / (len(df))
"""
Explanation: The Problem with Mapping Functions
One could think that this is the recipe for working with nonlinear data: transform all training data onto a higher-dimensional feature space via some mapping function $\phi$, train a linear SVM model, and use the same function $\phi$ to transform new (test) data to classify it.
As attractive as this idea seems, it is unfortunately unfeasible because it quickly becomes computationally too expensive. Here is a hands-on example why: consider a degree-2 polynomial (kernel) transformation of the form $\phi(x_1, x_2) = (x_1^2, x_2^2, \sqrt{2} x_1 x_2, \sqrt{2c} x_1, \sqrt{2c} x_2, c)$. This means that for a dataset in $\mathbb{R}^2$ the transformation adds four additional dimensions ($\mathbb{R}^2 \rightarrow \mathbb{R}^6$). If we generalize this, a degree-$d$ polynomial (kernel) transformation maps from $\mathbb{R}^p$ to a $\binom{p+d}{d}$-dimensional space (Balcan (2011)). Thus for datasets with large $p$, naively performing such transformations will force most computers to their knees.
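The growth of this binomial coefficient is easy to verify; a quick check (my own, requiring Python 3.8+ for `math.comb`):

```python
# Sketch: dimension of the explicit feature space of a degree-d polynomial
# kernel for inputs in R^p is C(p + d, d) (a binomial coefficient)
from math import comb

print(comb(2 + 2, 2))    # p=2, d=2 -> 6, matching the R^2 -> R^6 example
print(comb(63 + 3, 3))   # p=63 (our data set), d=3 -> 45760 dimensions
```

Even a modest degree-3 kernel on our 63 features would thus require tens of thousands of explicit dimensions.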
The Kernel Trick
Thankfully, not all is lost. It turns out that one does not need to explicitly work in the higher-dimensional space. One can show that when using Lagrange to solve our optimization problem, the training samples are only used to compute the pair-wise dot products $\langle x_i, x_{j}\rangle$ (where $x_i, x_{j} \in \mathbb{R}^{p}$). This is significant because there exist functions that, given two vectors $x_i$ and $x_{j}$ in $\mathbb{R}^p$, implicitly compute the dot product between the two vectors in a higher-dimensional space $\mathbb{R}^q$ (with $q > p$) without explicitly transforming $x_i, x_{j}$ into that space. Such functions are called Kernel functions, written $K(x_i, x_{j})$ (Kim (2013)).
Let us show an example of such a Kernel function (following Hofmann (2006)). For ease of reading we use $x = (x_1, x_2)$ and $z=(z_1, z_2)$ instead of $x_i$ and $x_{j}$. Consider the Kernel function $K(x, z) = (x^T z)^2$ and the mapping function $\phi(x) = (x_1^2, \sqrt{2}x_1 x_2, x_2^2)$. If we were to solve our optimization problem from above with Lagrange, the mapping function appears in the form $\phi(x)^T \phi(z)$.
\begin{align}
\phi(x)^T \phi(z) &= (x_1^2, \sqrt{2}x_1 x_2, x_2^2)^T (z_1^2, \sqrt{2}z_1 z_2, z_2^2) \\
&= x_1^2 z_1^2 + 2x_1 z_1 x_2 z_2 + x_2^2 z_2^2 \\
&= (x_1 z_1 + x_2 z_2)^2 \\
&= (x^T z)^2 \\
&= K(x, z)
\end{align}
The mapping function would have transformed the data from $\mathbb{R}^2 \rightarrow \mathbb{R}^3$ and back. The Kernel function, however, stays in $\mathbb{R}^2$. This is of course only one (toy) example and far from a proper proof, but it provides the intuition for what can be generalized: by using a Kernel function with $K(x_i, x_j) = \phi(x_i)^T \phi(x_j)$, we implicitly transform our data to a higher dimension without having to explicitly apply a mapping function $\phi$. This so-called "Kernel Trick" allows us to efficiently learn nonlinear decision boundaries for SVM.
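The toy identity can also be checked numerically; a minimal sketch (the random test vectors are my own choice):

```python
# Sketch: verify K(x, z) = (x^T z)^2 equals phi(x)^T phi(z)
# for phi(x) = (x1^2, sqrt(2) x1 x2, x2^2)
import numpy as np

def phi(v):
    return np.array([v[0]**2, np.sqrt(2) * v[0] * v[1], v[1]**2])

rng = np.random.RandomState(1)
x, z = rng.randn(2), rng.randn(2)

lhs = np.dot(x, z) ** 2        # kernel computed in R^2
rhs = np.dot(phi(x), phi(z))   # explicit dot product in R^3

print(np.isclose(lhs, rhs))    # True
```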
Popular Kernel Functions
Not every random mapping function is also a Kernel function. For a function to be a Kernel function, it needs to have certain properties (see e.g. Balcan (2011) or Hofmann (2006) for a discussion). In SVM literature, the following three Kernel functions have emerged as popular choices (Friedman et al. (2001)):
\begin{align}
d\text{th-Degree polynomial} \qquad K(x_i, x_j) &= (r + \gamma \langle x_i, x_j \rangle)^d \\
\text{Radial Basis (RBF)} \qquad K(x_i, x_j) &= \exp(-\gamma \Vert x_i - x_j \Vert^2) \\
\text{Sigmoid} \qquad K(x_i, x_j) &= \tanh(\gamma \langle x_i, x_j \rangle + r)
\end{align}
In general there is no "best choice". With each Kernel having some degree of variability, one has to find the optimal solution by experimenting with different Kernels and playing with their parameters ($\gamma, r, d$).
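As a sanity check (my own, assuming scikit-learn's `sklearn.metrics.pairwise` helpers), the RBF and polynomial formulas above match Scikit-learn's implementations for chosen values of $\gamma$, $r$ and $d$:

```python
# Sketch: compare the kernel formulas above with sklearn's pairwise kernels
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

rng = np.random.RandomState(0)
xi, xj = rng.randn(3), rng.randn(3)
gamma, r, d = 0.5, 1.0, 3

rbf_manual = np.exp(-gamma * np.sum((xi - xj) ** 2))
rbf_skl = rbf_kernel(xi.reshape(1, -1), xj.reshape(1, -1), gamma=gamma)[0, 0]

poly_manual = (r + gamma * np.dot(xi, xj)) ** d
poly_skl = polynomial_kernel(xi.reshape(1, -1), xj.reshape(1, -1),
                             degree=d, gamma=gamma, coef0=r)[0, 0]

print(np.isclose(rbf_manual, rbf_skl), np.isclose(poly_manual, poly_skl))
```

In sklearn's parametrization, `coef0` plays the role of $r$ above.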
Optimization with Lagrange
We have mentioned before that the optimization problems of the maximum margin classifier and the support vector classifier can be solved with Lagrange. The details are beyond the scope of this notebook. However, the interested reader is encouraged to study the details in the appendix of the script (and the recommended reference sources), as these are crucial to understanding the mathematical core of SVM and the application of Kernel functions.
SVM with Scikit-Learn
Preparing the Data
Having built an intuition of how SVMs work, let us now see the algorithm applied in Python. We will again use the Scikit-learn package, which has an optimized class implemented. The data we will work with is called the "Polish Companies Bankruptcy Data Set" and was used in Zieba et al. (2016). The full set comprises five data files. Each file contains 64 features plus a class label. The features are ratios derived from the financial statements of the more than 10'000 manufacturing companies considered during the period 2000 - 2013 (from EBITDA margin to equity ratio to liquidity ratios such as the quick ratio). The five files differ in that the first contains data on companies that defaulted/were still running five years down the road ('1year.csv'), the second four years down the road ('2year.csv'), etc. Details can be found in the original publication (Zieba et al. (2016)) or in the description provided on the UCI Machine Learning Repository site where the data was downloaded from. For our purposes we will use the '5year.csv' file, where we are to predict defaults within the next year.
End of explanation
"""
# Drop column with 'Attr37'.
# Notice that as of Pandas version 0.21.0 you can simply use df.drop(columns=['Attr37'])
df = df.drop('Attr37', axis=1)
df.iloc[:, 30:38].head()
"""
Explanation: Attribute 37 sticks out with 2'548 of 5'910 (43.1%) missing values. This attribute considers "(current assets - inventories) / long-term liabilities". Due to the many missing values we cannot sensibly use a fill method, so let us drop this feature column.
End of explanation
"""
from sklearn.impute import SimpleImputer
# Impute missing values by mean (axis=0 --> along columns;
# Notice that argument 'axis=' has been removed as of version 0.20.2)
ipr = SimpleImputer(missing_values=np.nan, strategy='mean')
ipr = ipr.fit(df.values)
imputed_data = ipr.transform(df.values)
# Assign imputed values to 'df' and check for 'NaN' values
df = pd.DataFrame(imputed_data, columns=df.columns)
df.isnull().sum().sum()
"""
Explanation: As for the other missing values we are left to decide whether we want to remove the corresponding observations (rows) or apply a filling method. The problem with dropping all rows with missing values is that we might lose a lot of valuable information. Therefore, in this case we prefer to use a common interpolation technique and impute NaN values with the corresponding feature mean. Alternatively we could use 'median' or 'most_frequent' as strategy. A convenient way to achieve this imputation is to use the Imputer class from sklearn.
Notice that as of sklearn version 0.20.2, Scikit-learn has renamed the Imputer class to SimpleImputer(), which we use below. Furthermore, with version 0.22.1 other imputers were introduced, such as a KNNImputer or a (yet still experimental) IterativeImputer. It is up to the reader to familiarize her-/himself with the available options for imputing. See Scikit-learn's guide for details. To check the sklearn version you are currently running, execute pip list in your shell.
End of explanation
"""
df.shape[0] * df.shape[1] - df.applymap(np.isreal).sum().sum()
"""
Explanation: Now let us check if we have some categorical features that we need to transform. For this we compare the number of cells in the dataframe with the sum of numeric values (np.isreal()). If the result is 0, we do not need to apply a One-Hot-Encoding or LabelEncoding procedure.
End of explanation
"""
X = df.iloc[:, :-1].values
y = df.iloc[:, -1].values
"""
Explanation: As we see, the dataframe only consists of real values. Therefore, we can proceed by assigning columns 1-63 to variable X and column 64 to y.
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.2,
random_state=0,
stratify=y)
"""
Explanation: Applying SVM
Having assigned the data to X and y we are now ready to divide the dataset into separate training and test sets.
End of explanation
"""
from sklearn.preprocessing import StandardScaler
# Create StandardScaler object
sc = StandardScaler()
# Standardize features; equal results as if done in two
# separate steps (first .fit() and then .transform())
X_train_std = sc.fit_transform(X_train)
# Transform test set
X_test_std = sc.transform(X_test)
"""
Explanation: Unlike e.g. decision tree algorithms, SVMs are sensitive to the magnitude of the data. Therefore, scaling our data is recommended.
End of explanation
"""
from sklearn.svm import SVC
from sklearn import metrics
import matplotlib.pyplot as plt
# Create object
svm_linear = SVC(kernel='linear', C=1.0)
svm_linear
"""
Explanation: With the data standardized, we can finally apply a SVM on the data. We import the SVC (for Support Vector Classifier) from the Scikit-learn toolbox and create an svm_linear object that represents a linear SVM with C=1. Recall that C helps us control the penalty for misclassification. Large values of C correspond to large error penalties and vice-versa. More parameters can be specified; details are best found on the function's documentation page.
End of explanation
"""
# Fit linear SVM to standardized training set
svm_linear.fit(X_train_std, y_train)
# Print results
print("Observed probability of non-default: {:.2f}".format(np.count_nonzero(y_train==0) / len(y_train)))
print("Train score: {:.2f}".format(svm_linear.score(X_train_std, y_train)))
print("Test score: {:.2f}".format(svm_linear.score(X_test_std, y_test)))
# Predict classes
y_pred = svm_linear.predict(X_test_std)
# Manual confusion matrix as pandas DataFrame
confm = pd.DataFrame({'Predicted': y_pred,
'True': y_test})
confm.replace(to_replace={0:'Non-Default', 1:'Default'}, inplace=True)
print(confm.groupby(['True','Predicted'], sort=False).size().unstack('Predicted'))
"""
Explanation: With the svm_linear object ready we can now fit the object to the training data and check for the model's accuracy.
End of explanation
"""
svm_poly = SVC(kernel='poly', random_state=1)
svm_poly
"""
Explanation: In the same way we can run a Kernel SVM on the data. We have four Kernel options: one linear as introduced above and three non-linear. All of them have hyperparameters available. If these are not specified, default values are taken. Check the documentation for details.
* linear: linear SVM as shown above with C as hyperparameter
* rbf: Radial basis function Kernel with C, gamma as hyperparameters
* poly: Polynomial Kernel with C, degree, gamma, coef0 as hyperparameters
* sigmoid: Sigmoid Kernel with C, gamma, coef0 as hyperparameters
Let us apply a polynomial Kernel as example.
End of explanation
"""
# Fit polynomial SVM to standardized training set
svm_poly.fit(X_train_std, y_train)
# Print results
print("Observed probability of non-default: {:.2f}".format(np.count_nonzero(y_train==0) / len(y_train)))
print("Train score: {:.2f}".format(svm_poly.score(X_train_std, y_train)))
print("Test score: {:.2f}".format(svm_poly.score(X_test_std, y_test)))
# Predict classes
y_pred = svm_poly.predict(X_test_std)
# Manual confusion matrix as pandas DataFrame
confm = pd.DataFrame({'Predicted': y_pred,
'True': y_test})
confm.replace(to_replace={0:'Non-Default', 1:'Default'}, inplace=True)
print(confm.groupby(['True','Predicted'], sort=False).size().unstack('Predicted'))
"""
Explanation: Not having specified the hyperparameters C, degree, gamma, and coef0, we see that the algorithm has taken default values. C equals 1, the default degree is 3, gamma=auto means that the value is calculated as $1/n_{\text{features}}$, and coef0 is set to 0 by default.
End of explanation
"""
# Initiate and fit a polynomial SVM to training set
svm_poly = SVC(kernel='poly', random_state=1, class_weight='balanced')
svm_poly.fit(X_train_std, y_train)
# Predict classes and print results
y_pred = svm_poly.predict(X_test_std)
print(metrics.classification_report(y_test, y_pred))
print(metrics.confusion_matrix(y_test, y_pred))
print("Test score: {:.2f}".format(svm_poly.score(X_test_std, y_test)))
"""
Explanation: As it looks, linear and polynomial SVMs yield similar results. What is clearly unsatisfactory is the number of true defaults that the SVM failed to detect. Both the linear and the non-linear SVM fail to label $\geq 80$ defaults. From a financial perspective, this is unacceptable and raises questions regarding
* Class imbalance
* Hyperparameter fine-tuning through cross validation and grid search
* Feature selection
* Noise & dimension reduction
which we want to address in the next section.
Dealing with Class Imbalance
When we deal with default data sets we observe that the ratio of non-default to default records is heavily skewed towards non-default. This is a common problem in real-world data set: Samples from one class or multiple classes dominate the data set. For the present data set we are talking 93% non-defaults vs. 7% defaults. Having an algorithm that predicts non-default 100 out of a 100 times is right in 93% of the cases. Therefore, training a model on such a data set that achieves the same 93% test accuracy (as our SVM above) means nothing else than our model hasn't learned anything informative from the features provided in this data set. Thus, when assessing a classifier on an imbalanced data set we have learned that other metrics such as precision, recall, ROC curve etc. might be more informative.
Having said that, what we have to consider is that a class imbalance might influence a learning algorithm during the model fitting itself. Machine learning algorithms typically optimize a reward or cost function. This means that an algorithm implicitly learns the model that optimizes the predictions based on the most abundant class in the data set in order to minimize the cost or maximize the reward during the training phase. And this in turn might yield skewed results in case of imbalanced data sets.
There are several options to deal with class imbalance; we will discuss two of them. The first option is to set the class_weight parameter to class_weight='balanced'. Most classifiers have this option implemented (of the introduced classifiers, KNN, LDA and QDA lack such a parameter). This will assign a larger penalty to wrong predictions on the minority class.
End of explanation
"""
pd.DataFrame(X[y==1]).shape
X[y==0].shape
from sklearn.utils import resample
# Upsampling: define which rows you want to upsample
# (i.e. all columns of X where value in corresponding y vector is equal to 1: X[y==1],
# and similar for y[y==1]. Then define how many samples should be generated through
# bootstrapping (here: X[y==0].shape[0] = 5'500))
X_upsampled, y_upsampled = resample(X[y==1], y[y==1],
replace=True,
n_samples=X[y==0].shape[0],
random_state=1)
print('No. of default samples BEFORE upsampling: {:.0f}'.format(y.sum()))
print('No. of default samples AFTER upsampling: {:.0f}'.format(y_upsampled.sum()))
"""
Explanation: The second option we want to discuss is up- & downsampling of the minority/majority class. Both up- and downsampling are implemented in Scikit-learn through the resample function and depending on the data and given the task at hand, one might be better suited than the other. For the upsampling, scikit-learn will apply a bootstrapping to draw new samples from the datasets with replacement. This means that the function will repeatedly draw new samples from the minority class until it contains the number of samples we define. Here's a code example:
End of explanation
"""
# Downsampling
X_dnsampled, y_dnsampled = resample(X[y==0], y[y==0],
replace=False,
n_samples=X[y==1].shape[0],
random_state=1)
"""
Explanation: Downsampling works in similar fashion.
End of explanation
"""
# Combine datasets
X_bal = np.vstack((X[y==1], X_dnsampled))
y_bal = np.hstack((y[y==1], y_dnsampled))
# Train test split
X_train_bal, X_test_bal, y_train_bal, y_test_bal = \
train_test_split(X_bal, y_bal,
test_size=0.2,
random_state=0,
stratify=y_bal)
# Standardize features; equal results as if done in two
# separate steps (first .fit() and then .transform())
X_train_bal_std = sc.fit_transform(X_train_bal)
# Transform test set
X_test_bal_std = sc.transform(X_test_bal)
# Initiate and fit a polynomial SVM to training set
svm_poly_bal = SVC(kernel='poly', random_state=1)
svm_poly_bal.fit(X_train_bal_std, y_train_bal)
# Predict classes and print results
y_pred_bal = svm_poly_bal.predict(X_test_bal_std)
print(metrics.classification_report(y_test_bal, y_pred_bal))
print(metrics.confusion_matrix(y_test_bal, y_pred_bal))
print("Test score: {:.2f}".format(svm_poly_bal.score(X_test_bal_std, y_test_bal)))
"""
Explanation: Running the SVM algorithm on the balanced dataset now works as you would expect:
End of explanation
"""
from sklearn.pipeline import Pipeline
# Create pipeline object with standard scaler and SVC estimator
pipe = Pipeline([('scaler', StandardScaler()),
('svm_poly', SVC(kernel='poly', random_state=0))])
"""
Explanation: By applying an SVM to a balanced set of data we improve our model slightly. Yet there remains some work to be done: the polynomial SVM still misses 95.1% (=78/82) of the default cases.
It should be said that in general an upsampled set is to be preferred over a downsampled set. However, here we are talking 11'000 observations times 63 features for the upsampled set, and running models on this can easily take quite some time, especially if we compute a grid search as in the next section. For this reason the downsampled set was used.
Hyperparameter Fine-Tuning
Pipelines
Another tool that is of help in optimizing our model is the GridSearchCV function introduced in the previous chapter that finds the best hyperparameter through a brute-force (cross validation) approach. Yet before we simply copy-past the code from the last chapter we ought to address a subtle yet important difference between the decision tree and SVM (or most other ML) algorithms that has implications on the application: Decision tree algorithms are of the few models where data scaling is not necessary. SVM on the other hand are (as most ML algorithms) fairly sensitive to the magnitude of the data. Now you might say that this is precisely why we standardized the data at the very beginning and with that we are good to go. In principle, this is correct. However, if we are precise, we commit a subtle yet possibly significant thought error.
If we decide to apply a grid search using cross validation to find the optimal hyperparameters for e.g. an SVM, we unfortunately cannot just scale the full data set at the very beginning and then be done for the rest of the process. Conceptually it is important to understand why. As we learned in the chapter on feature scaling and cross validation, applying a scaling on the combined data set and splitting the set into training and holdout set after the scaling is wrong. The reason is that information from the test set finds its way into the model and distorts the results: the training set is scaled not only based on information from that set but also based on information from the test set.
Now the same is true if we apply a grid search with cross validation on a training set. For each fold in the CV, some part of the training set is declared the training part and some the test part. The test part within this split is used to measure the performance of our model trained on the training part. However, if we simply scale the training set and then apply grid-search CV on the scaled training set, we commit the same thought error as if we scaled the full set at the very beginning. The test fold (of the CV split) would no longer be independent but implicitly already be part of the training set we used to fit the model. The test data within each cross validation split would then no longer correctly mirror how new data looks to the modeling process: information has already leaked from the test data into our modeling process. This leads to overly optimistic results during cross validation and possibly the selection of suboptimal parameters (Müller & Guido (2017)).
We have not addressed this problem in the chapter on cross validation because so far we had not introduced the tool to deal with it. Furthermore, if our data set is homogeneous and of some size, this is less of an issue. Yet as Scikit-learn provides a fantastic tool to deal with this (and many other) issue(s), we want to introduce it here. The tool is called a pipeline and allows us to combine multiple processing steps in a very convenient and proper way. Let us look at how we can use the Pipeline class to express the end-to-end workflow. First we build a pipeline object. This object is provided a list of steps; each step is a tuple containing a name (you define) and an instance of an estimator.
End of explanation
"""
# Define parameter grid
param_grid = {'svm_poly__C': [0.1, 1, 10, 100],
'svm_poly__degree': [1, 2, 3, 5, 7]}
"""
Explanation: Next we define a parameter grid to search over and construct a GridSearchCV from the pipeline and the parameter grid. Notice that we have to specify for each parameter which step of the pipeline it belongs to. This is done by calling the name we gave this step, followed by a double underscore and the parameter name. For the present example, let us compare different degrees and C values.
End of explanation
"""
from sklearn.model_selection import GridSearchCV
# Run grid search
grid = GridSearchCV(pipe, param_grid=param_grid, cv=5, n_jobs=-1)
grid.fit(X_train_bal, y_train_bal)
# Print results
print('Best CV accuracy: {:.2f}'.format(grid.best_score_))
print('Test score: {:.2f}'.format(grid.score(X_test_bal, y_test_bal)))
print('Best parameters: {}'.format(grid.best_params_))
"""
Explanation: With that we can run a GridSearchCV as usual.
End of explanation
"""
from sklearn.decomposition import PCA
# Create pipeline object with standard scaler, PCA and SVC estimator
pipe = Pipeline([('scaler', StandardScaler()),
('pca', PCA(n_components=2)),
('svm_poly', SVC(kernel='poly', random_state=0))])
# Define parameter grid
param_grid = {'svm_poly__C': [100],
'svm_poly__degree': [1, 2, 3]}
# Run grid search
grid = GridSearchCV(pipe, param_grid=param_grid, cv=5, n_jobs=-1)
grid.fit(X_train_bal, y_train_bal)
# Print results
print('Best CV accuracy: {:.2f}'.format(grid.best_score_))
print('Test score: {:.2f}'.format(grid.score(X_test_bal, y_test_bal)))
print('Best parameters: {}'.format(grid.best_params_))
"""
Explanation: Notice that thanks to the pipeline object, now for each split in the cross validation the StandardScaler is refit with only the training splits and no information is leaked from the test split into the parameter search.
Depending on the grid you search, computations might take quite some time. One way to improve speed is to reduce the feature space, that is, the number of features. We will discuss feature selection and dimension reduction options in the next section, but for the moment let us just apply a method called Principal Component Analysis (PCA). PCA effectively transforms the feature space from $\mathbb{R}^{p} \rightarrow \mathbb{R}^{q}$ with $q$ being a user-specified value (but usually $q \ll p$). PCA is similar to other preprocessing steps and can be included in pipelines just like e.g. the StandardScaler.
Here we reduce the feature space from $\mathbb{R}^{63}$ (i.e. $p=63$ features) to $\mathbb{R}^{2}$. This will make the fitting process faster. However, this comes at a cost: by reducing the feature space we might not only get rid of noise but also lose part of the information available in the full dataset. Our model accuracy might suffer as a consequence. Furthermore, the speed we gain by fitting a model to a smaller subset can be offset by the additional computations it takes to calculate the PCA. In the example of the upsampled data set we would be talking of an $[11'000 \cdot 0.8 \cdot 0.8 \times 63]$ matrix (0.8 for the train/test split and each CV fold) for which eigenvectors and eigenvalues need to be calculated. This means up to 63 eigenvalues per grid search loop.
End of explanation
"""
from sklearn.linear_model import LogisticRegression
# Create pipeline object with standard scaler, PCA and SVC estimator
pipe = Pipeline([('scaler', StandardScaler()),
('classifier', SVC(random_state=0))])
# Define parameter grid
param_grid = [{'scaler': [StandardScaler()],
'classifier': [SVC(kernel='rbf')],
'classifier__gamma': [1, 10],
'classifier__C': [10, 100]},
{'scaler': [StandardScaler(), None],
'classifier': [LogisticRegression(max_iter=1000)],
'classifier__C': [10, 100]}]
# Run grid search
grid = GridSearchCV(pipe, param_grid, cv=5, n_jobs=-1)
grid.fit(X_train_bal, y_train_bal)
# Print results
print('Best CV accuracy: {:.2f}'.format(grid.best_score_))
print('Test score: {:.2f}'.format(grid.score(X_test_bal, y_test_bal)))
print('Best parameters: {}'.format(grid.best_params_))
"""
Explanation: Other so-called preprocessing steps can be included in the pipeline too. This shows how seamlessly such workflows can be steered through pipelines. We can even combine multiple models, as we show in the next code snippet. By now you are probably aware that trying all possible solutions is not a viable machine learning strategy; computational power is certainly going to be an issue. Nevertheless, for the record we provide below an example where we apply logistic regression and an SVM with RBF kernel to find the best solution (details see section on PCA below).
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
# Extract feature labels
feat_labels = df.columns[:-1]
# Create Random Forest object, fit data and
# extract feature importance attributes
forest = RandomForestClassifier(random_state=1)
forest.fit(X_train_bal, y_train_bal)
importances = forest.feature_importances_
# Sort output (by relative importance) and
# print top 15 features
indices = np.argsort(importances)[::-1]
n = 15
for i in range(n):
print('{0:2d}) {1:7s} {2:6.4f}'.format(i + 1,
feat_labels[indices[i]],
importances[indices[i]]))
"""
Explanation: From the above output we see that the SVC yields the best accuracy.
Feature Selection and Dimensionality Reduction
Complexity and the Curse of Overfitting
If we observe that a model performs much better on training than on test data, we have an indication that the model suffers from overfitting. The reason for the overfitting is most probably that our model is too complex for the given training data. Common solutions to reduce the generalization error are (Raschka (2015)):
* Collect more (training) data
* Introduce a penalty for complexity via regularization
* Choose a simpler model with fewer parameters
* Reduce the dimensionality of the data
Collecting more data is self-explanatory but often not applicable. Regularization via a complexity penalty term is a technique that is primarily applicable to regression settings (e.g. logistic regression). We will not discuss it here, but the interested reader will easily find helpful information in e.g. James et al. (2013) chapter 6 or Raschka (2015) chapter 4. Here we will look at one commonly used solution to reduce overfitting: dimensionality reduction via feature selection.
Feature Selection
A useful approach to select relevant features from a dataset is to use information from the random forest algorithm we introduced in the previous chapter. There we elaborated on how decision trees rank feature importance based on an impurity decrease. Conveniently, we can access this feature importance ranking directly from the RandomForestClassifier object. By executing the code below - following the example in Raschka (2015) - we will train a random forest model on the balanced default dataset (from before) and rank the features by their respective importance measure.
End of explanation
"""
# Get cumsum of the n most important features
feat_imp = np.sort(importances)[::-1]
sum_feat_imp = np.cumsum(feat_imp)[:n]
# Plot Feature Importance (both cumul., individual)
plt.figure(figsize=(12, 8))
plt.bar(range(n), importances[indices[:n]], align='center')
plt.xticks(range(n), feat_labels[indices[:n]], rotation=90)
plt.xlim([-1, n])
plt.xlabel('Feature')
plt.ylabel('Rel. Feature Importance')
plt.step(range(n), sum_feat_imp, where='mid',
label='Cumulative importance')
plt.tight_layout();
"""
Explanation: The decimal value is the relative importance of the respective feature. We can also plot this result to get a better overview. The code below shows one way of doing it.
End of explanation
"""
from sklearn.feature_selection import SelectFromModel
pipe = Pipeline([('feature_selection', SelectFromModel(RandomForestClassifier(), threshold='median')),
('scaler', StandardScaler()),
('classification', SVC())])
pipe.fit(X_train_bal, y_train_bal).score(X_test_bal, y_test_bal)
"""
Explanation: Executing the code ranks the different features according to their relative importance. The definition of each AttrXX would have to be looked up in the data description. Note that the feature importance values are normalized such that they sum to 1.
Feature selection in the way shown in the preceding code snippets will not work in combination with a pipeline object. However, Scikit-learn implements a function that can be used as a preprocessing step. Its name is SelectFromModel and details can be found here. Instead of selecting the top $n$ features, you define a threshold (e.g. mean, median etc.), which selects those features whose importance is greater than or equal to said threshold. For reference, the code below shows how the function is applied inside a pipeline.
End of explanation
"""
from sklearn.decomposition import PCA
# Define no. of PC
q = 10
# Create PCA object and fit to find
# first q principal components
pca = PCA(n_components=q)
pca.fit(X_train_bal)
pca
"""
Explanation: Principal Component Analysis
In the previous section you learned an approach for reducing the dimensionality of a data set through feature selection. An alternative to feature selection is feature extraction, of which Principal Component Analysis (PCA) is the best known and most popular approach. It is an unsupervised method that aims to summarize the information content of a data set by transforming it onto a new feature subspace of lower dimensionality than the original one. With the rise of big data, this is a field that is gaining importance by the day. PCA is widely used in a variety of fields - e.g. in finance to de-noise signals in stock market trading, to create factor models, for feature selection in bankruptcy prediction, for dimensionality reduction of high-frequency data, etc. Unfortunately, the scope of this course does not allow us to discuss PCA in great detail. Nevertheless, the fundamentals shall be addressed here briefly so that the reader has a good understanding of how PCA helps in reducing dimensionality.
To build an intuition for PCA we quote the excellent James et al. (2013, p. 375): "PCA finds a low-dimensional representation of a dataset that contains as much as possible of the variation. The idea is that each of the $n$ observations lives in $p$-dimensional space, but not all of these dimensions are equally interesting. PCA seeks a small number of dimensions that are as interesting as possible, where the concept of interesting is measured by the amount that the observations vary along each dimension. Each of the dimensions found by PCA is a linear combination of the $p$ features." Since each principal component is required to be orthogonal to all other principal components, we basically take correlated original variables (features) and replace them with a small set of principal components that capture their joint variation.
The figures below aim at visualizing the idea of principal components. In both figures we see the same two-dimensional dataset. PCA searches for the principal axes along which the data varies most; each principal axis measures the variance of the data when projected onto that axis. The two vectors (arrows) in the left plot visualize this. Notice that given an $[n \times p]$ feature matrix $\mathbf{X}$ there are at most $\min(n-1, p)$ principal components. The figure on the right-hand side displays the data points projected onto the first principal axis. In this way we have reduced the dimensionality from $\mathbb{R}^2$ to $\mathbb{R}^1$. In practice, PCA is of course primarily used for datasets with $p$ large, and the selected number of principal components $q$ is usually much smaller than the dimension of the original dataset ($q \ll p$).
<img src="Graphics/0211_PCA1.png" alt="PCA1" style="width: 1000px;"/>
The first principal component is the direction in space along which (orthogonal) projections have the largest variance. The second principal component is the direction which maximizes variance among all directions while being orthogonal to the first. The $k^{\text{th}}$ component is the variance-maximizing direction orthogonal to the previous $k-1$ components.
How do we express this in mathematical terms? Let $\mathbf{X}$ be an $n \times p$ dataset and let it be centered (i.e. each column mean is zero; notice that standardization is very important in PCA). The $p \times p$ variance-covariance matrix $\mathbf{C}$ is then equal to $\mathbf{C} = \frac{1}{n} \mathbf{X}^T \mathbf{X}$. Additionally, let $\mathbf{\phi}$ be a unit $p$-dimensional vector, i.e. $\phi \in \mathbb{R}^p$ and let $\sum_{i=1}^p \phi_{i1}^2 = \mathbf{\phi}^T \mathbf{\phi} = 1$.
The projections of the individual data points onto the principal axis are given by the linear combination of the form
\begin{equation}
Z_{i} = \phi_{1i} X_{1} + \phi_{2i} X_{2} + \ldots + \phi_{pi} X_{p}.
\end{equation}
In matrix notation we write
\begin{equation}
\mathbf{Z} = \mathbf{X \phi}
\end{equation}
Since each column vector $X_i$ is standardized, i.e. $\frac{1}{n} \sum_{j=1}^n x_{ji} = 0$, the average of $Z_i$ (the column vector for feature $i$) will be zero as well. With that, the variance of $\mathbf{Z}$ is
\begin{align}
\text{Var}(\mathbf{Z}) &= \frac{1}{n} (\mathbf{X \phi})^T (\mathbf{X \phi}) \\
&= \frac{1}{n} \mathbf{\phi}^T \mathbf{X}^T \mathbf{X \phi} \\
&= \mathbf{\phi}^T \frac{\mathbf{X}^T \mathbf{X}}{n} \mathbf{\phi} \\
&= \mathbf{\phi}^T \mathbf{C} \mathbf{\phi}
\end{align}
Note that it is common practice to use the population estimate of the variance (division by $n$) instead of the sample variance (division by $n-1$).
Now, PCA seeks to solve a sequence of optimization problems:
\begin{equation}
\begin{aligned}
& \underset{\mathbf{\phi}}{\text{maximize}} & & \text{Var}(\mathbf{Z}) \\
& \text{subject to} & & \mathbf{\phi}^T \mathbf{\phi}=1, \quad \phi \in \mathbb{R}^p \\
&&& \mathbf{Z}^T \mathbf{Z} = \mathbf{ZZ}^T = \mathbf{I}.
\end{aligned}
\end{equation}
Looking at the above term, it should be clear why we have restricted the vector $\mathbf{\phi}$ to be a unit vector: if we had not, we could simply scale $\mathbf{\phi}$ up arbitrarily - which is not what we want. This problem can be solved with Lagrange multipliers and an eigen decomposition (a standard technique in linear algebra), the details of which are explained in the appendix of the script.
We showed above how to apply PCA within a pipeline workflow. A more general setup is shown in the code snippet below. We again make use of the Polish bankruptcy dataset introduced above.
End of explanation
"""
# Run PCA for all possible PCs
pca = PCA().fit(X_train_bal)
# Define max no. of PC
q = X_train_bal.shape[1]
# Get cumsum of the PC 1-q
expl_var = pca.explained_variance_ratio_
sum_expl_var = np.cumsum(expl_var)[:q]
# Plot Feature Importance (both cumul., individual)
plt.figure(figsize=(12, 6))
plt.bar(range(1, q + 1), expl_var, align='center')
plt.xticks(range(1, q + 1, 5))
plt.xlim([0, q + 1])
plt.xlabel('Principal Components')
plt.ylabel('Explained Variance Ratio')
plt.step(range(1, 1 + q), sum_expl_var, where='mid')
plt.tight_layout();
"""
Explanation: To close, one last code snippet is provided. Running it will visualize the cumulative explained variance ratio as a function of the number of components. (Mathematically, the explained variance ratio is the ratio of the eigenvalue of principal component $i$ to the sum of all eigenvalues, $\frac{\lambda_i}{\sum_{j=1}^p \lambda_j}$. See the appendix in the script to better understand the meaning of eigenvalues in this context.) In practice, this might be helpful in deciding on the number of principal components $q$ to use.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_custom_inverse_solver.ipynb | bsd-3-clause | import numpy as np
from scipy import linalg
import mne
from mne.datasets import sample
from mne.viz import plot_sparse_source_estimates
data_path = sample.data_path()
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'
subjects_dir = data_path + '/subjects'
condition = 'Left Auditory'
# Read noise covariance matrix
noise_cov = mne.read_cov(cov_fname)
# Handling average file
evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))
evoked.crop(tmin=0.04, tmax=0.18)
evoked = evoked.pick_types(eeg=False, meg=True)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname, surf_ori=True)
"""
Explanation: Source localization with a custom inverse solver
The objective of this example is to show how to plug a custom inverse solver
in MNE in order to facilitate empirical comparison with the methods MNE already
implements (wMNE, dSPM, sLORETA, LCMV, (TF-)MxNE etc.).
This script is educational and shall be used for method
evaluations and new developments. It is not meant to be an example
of good practice for analysing your data.
The example makes use of 2 functions apply_solver and solver
so changes can be limited to the solver function (which only takes three
parameters: the whitened data, the gain matrix, and the number of orientations)
in order to try out another inverse algorithm.
End of explanation
"""
def apply_solver(solver, evoked, forward, noise_cov, loose=0.2, depth=0.8):
"""Function to call a custom solver on evoked data
This function does all the necessary computation:
- to select the channels in the forward given the available ones in
the data
- to take into account the noise covariance and do the spatial whitening
- to apply loose orientation constraint as MNE solvers
- to apply a weighting of the columns of the forward operator as in the
weighted Minimum Norm formulation in order to limit the problem
of depth bias.
Parameters
----------
solver : callable
The solver takes 3 parameters: data M, gain matrix G, number of
dipoles orientations per location (1 or 3). A solver shall return
2 variables: X which contains the time series of the active dipoles
and an active set which is a boolean mask to specify what dipoles are
present in X.
evoked : instance of mne.Evoked
The evoked data
forward : instance of Forward
The forward solution.
noise_cov : instance of Covariance
The noise covariance.
loose : None | float in [0, 1]
Value that weights the source variances of the dipole components
defining the tangent space of the cortical surfaces. Requires surface-
based, free orientation forward solutions.
depth : None | float in [0, 1]
Depth weighting coefficients. If None, no depth weighting is performed.
Returns
-------
stc : instance of SourceEstimate
The source estimates.
"""
# Import the necessary private functions
from mne.inverse_sparse.mxne_inverse import \
(_prepare_gain, _to_fixed_ori, is_fixed_orient,
_reapply_source_weighting, _make_sparse_stc)
all_ch_names = evoked.ch_names
# put the forward solution in fixed orientation if it's not already
if loose is None and not is_fixed_orient(forward):
forward = forward.copy()
_to_fixed_ori(forward)
# Handle depth weighting and whitening (here is no weights)
gain, gain_info, whitener, source_weighting, mask = _prepare_gain(
forward, evoked.info, noise_cov, pca=False, depth=depth,
loose=loose, weights=None, weights_min=None)
# Select channels of interest
sel = [all_ch_names.index(name) for name in gain_info['ch_names']]
M = evoked.data[sel]
# Whiten data
M = np.dot(whitener, M)
n_orient = 1 if is_fixed_orient(forward) else 3
X, active_set = solver(M, gain, n_orient)
X = _reapply_source_weighting(X, source_weighting, active_set, n_orient)
stc = _make_sparse_stc(X, active_set, forward, tmin=evoked.times[0],
tstep=1. / evoked.info['sfreq'])
return stc
"""
Explanation: Auxiliary function to run the solver
End of explanation
"""
def solver(M, G, n_orient):
"""Dummy solver
It just runs L2-penalized regression and keeps the 10 strongest locations.
Parameters
----------
M : array, shape (n_channels, n_times)
The whitened data.
G : array, shape (n_channels, n_dipoles)
The gain matrix a.k.a. the forward operator. The number of locations
is n_dipoles / n_orient. n_orient will be 1 for a fixed orientation
constraint or 3 when using a free orientation model.
n_orient : int
Can be 1 or 3 depending if one works with fixed or free orientations.
If n_orient is 3, then ``G[:, 2::3]`` corresponds to the dipoles that
are normal to the cortex.
Returns
-------
X : array, (n_active_dipoles, n_times)
The time series of the dipoles in the active set.
active_set : array (n_dipoles)
Array of bool. Entry j is True if dipole j is in the active set.
We have ``X_full[active_set] == X`` where X_full is the full X matrix
such that ``M = G X_full``.
"""
K = linalg.solve(np.dot(G, G.T) + 1e15 * np.eye(G.shape[0]), G).T
K /= np.linalg.norm(K, axis=1)[:, None]
X = np.dot(K, M)
indices = np.argsort(np.sum(X ** 2, axis=1))[-10:]
active_set = np.zeros(G.shape[1], dtype=bool)
for idx in indices:
idx -= idx % n_orient
active_set[idx:idx + n_orient] = True
X = X[active_set]
return X, active_set
"""
Explanation: Define your solver
End of explanation
"""
# loose, depth = 0.2, 0.8 # corresponds to loose orientation
loose, depth = 1., 0. # corresponds to free orientation
stc = apply_solver(solver, evoked, forward, noise_cov, loose, depth)
"""
Explanation: Apply your custom solver
End of explanation
"""
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
opacity=0.1)
"""
Explanation: View in 2D and 3D ("glass" brain like 3D plot)
End of explanation
"""
|
lisitsyn/shogun | doc/ipython-notebooks/multiclass/Tree/TreeEnsemble.ipynb | bsd-3-clause | import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../../data')
from shogun import CSVFile,features,MulticlassLabels
def load_file(feat_file,label_file):
feats=features(CSVFile(feat_file))
labels=MulticlassLabels(CSVFile(label_file))
return (feats, labels)
trainfeat_file=os.path.join(SHOGUN_DATA_DIR, 'uci/letter/train_fm_letter.dat')
trainlab_file=os.path.join(SHOGUN_DATA_DIR, 'uci/letter/train_label_letter.dat')
train_feats,train_labels=load_file(trainfeat_file,trainlab_file)
"""
Explanation: Ensemble of Decision Trees
By Parijat Mazumdar (GitHub ID: mazumdarparijat)
This notebook illustrates the use of Random Forests in Shogun for classification and regression. We will understand the functioning of Random Forests, discuss about the importance of its various parameters and appreciate the usefulness of this learning method.
What is Random Forest?
Random Forest is an ensemble learning method in which a collection of decision trees is grown during training and the combination of the outputs of all the individual trees is considered during testing or application. The combination strategy can vary, but generally, in case of classification, the mode of the output classes is used and, in case of regression, the mean of the outputs is used. The randomness in the method, as its name suggests, is infused mainly by the random subspace sampling done while training individual trees: while choosing the best split during tree growing, only a small randomly chosen subset of all the features is considered. The subset size is a user-controlled parameter and is usually the square root of the total number of available features. The purpose of the random subset sampling is to decorrelate the individual trees in the forest, thus making the overall model more generic; i.e. to decrease the variance without increasing the bias (see bias-variance trade-off). The purpose of Random Forest, in summary, is to reduce the generalization error of the model as much as possible.
Random Forest vs Decision Tree
In this section, we will appreciate the importance of training a Random Forest over a single decision tree. In the process, we will also learn how to use Shogun's Random Forest class. For this purpose, we will use the letter recognition dataset. This dataset contains pixel information (16 features) of 20000 samples of the English alphabet. This is a 26-class classification problem where the task is to predict the alphabet given the 16 pixel features. We start by loading the training dataset.
End of explanation
"""
from shogun import RandomForest, MajorityVote
from numpy import array
def setup_random_forest(num_trees,rand_subset_size,combination_rule,feature_types):
rf=RandomForest(rand_subset_size,num_trees)
rf.put('combination_rule', combination_rule)
rf.set_feature_types(feature_types)
return rf
comb_rule=MajorityVote()
feat_types=array([False]*16)
rand_forest=setup_random_forest(10,4,comb_rule,feat_types)
"""
Explanation: Next, we decide the parameters of our Random Forest.
End of explanation
"""
# train forest
rand_forest.put('labels', train_labels)
rand_forest.train(train_feats)
# load test dataset
testfeat_file= os.path.join(SHOGUN_DATA_DIR, 'uci/letter/test_fm_letter.dat')
testlab_file= os.path.join(SHOGUN_DATA_DIR, 'uci/letter/test_label_letter.dat')
test_feats,test_labels=load_file(testfeat_file,testlab_file)
# apply forest
output_rand_forest_train=rand_forest.apply_multiclass(train_feats)
output_rand_forest_test=rand_forest.apply_multiclass(test_feats)
"""
Explanation: In the above code snippet, we decided to create a forest of 10 trees in which each split in the individual trees will use a randomly chosen subset of 4 features. Note that 4 here is the square root of the total number of available features (16) and is hence the usually chosen value, as mentioned in the introductory paragraph. The combination strategy chosen is Majority Vote which, as the name suggests, chooses the mode of all the individual tree outputs. The given features are all continuous in nature and hence the feature types are all set to false (i.e. not nominal). Next, we train our Random Forest and use it to classify letters in our test dataset.
End of explanation
"""
from shogun import CARTree, PT_MULTICLASS
def train_cart(train_feats,train_labels,feature_types,problem_type):
c=CARTree(feature_types,problem_type,2,False)
c.put('labels', train_labels)
c.train(train_feats)
return c
# train CART
cart=train_cart(train_feats,train_labels,feat_types,PT_MULTICLASS)
# apply CART model
output_cart_train=cart.apply_multiclass(train_feats)
output_cart_test=cart.apply_multiclass(test_feats)
"""
Explanation: We have with us the labels predicted by our Random Forest model. Let us also get the predictions made by a single tree. For this purpose, we train a CART-flavoured decision tree.
End of explanation
"""
from shogun import MulticlassAccuracy
accuracy=MulticlassAccuracy()
rf_train_accuracy=accuracy.evaluate(output_rand_forest_train,train_labels)*100
rf_test_accuracy=accuracy.evaluate(output_rand_forest_test,test_labels)*100
cart_train_accuracy=accuracy.evaluate(output_cart_train,train_labels)*100
cart_test_accuracy=accuracy.evaluate(output_cart_test,test_labels)*100
print('Random Forest training accuracy : '+str(round(rf_train_accuracy,3))+'%')
print('CART training accuracy : '+str(round(cart_train_accuracy,3))+'%')
print
print('Random Forest test accuracy : '+str(round(rf_test_accuracy,3))+'%')
print('CART test accuracy : '+str(round(cart_test_accuracy,3))+'%')
"""
Explanation: With both results at our disposal, let us find out which one is better.
End of explanation
"""
def get_rf_accuracy(num_trees,rand_subset_size):
rf=setup_random_forest(num_trees,rand_subset_size,comb_rule,feat_types)
rf.put('labels', train_labels)
rf.train(train_feats)
out_test=rf.apply_multiclass(test_feats)
acc=MulticlassAccuracy()
return acc.evaluate(out_test,test_labels)
"""
Explanation: As it is clear from the results above, we see a significant improvement in the predictions. The reason for the improvement is clear when one looks at the training accuracy. The single decision tree was over-fitting on the training dataset and hence was not generic. Random Forest on the other hand appropriately trades off training accuracy for the sake of generalization of the model. Impressed already? Let us now see what happens if we increase the number of trees in our forest.
Random Forest parameters : Number of trees and random subset size
In the last section, we trained a forest of 10 trees. What happens if we make our forest with 20 trees? Let us try to answer this question in a generic way.
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
num_trees4=[5,10,20,50,100]
rf_accuracy_4=[round(get_rf_accuracy(i,4)*100,3) for i in num_trees4]
print('Random Forest accuracies (as %) :' + str(rf_accuracy_4))
# plot results
x4=[1]
y4=[86.48] # accuracy for single tree-CART
x4.extend(num_trees4)
y4.extend(rf_accuracy_4)
plt.plot(x4,y4,'--bo')
plt.xlabel('Number of trees')
plt.ylabel('Multiclass Accuracy (as %)')
plt.xlim([0,110])
plt.ylim([85,100])
plt.show()
"""
Explanation: The method above takes the number of trees and subset size as inputs and returns the evaluated accuracy as output. Let us use this method to get the accuracy for different number of trees keeping the subset size constant at 4.
End of explanation
"""
# subset size 2
num_trees2=[10,20,50,100]
rf_accuracy_2=[round(get_rf_accuracy(i,2)*100,3) for i in num_trees2]
print('Random Forest accuracies (as %) :' + str(rf_accuracy_2))
# subset size 8
num_trees8=[5,10,50,100]
rf_accuracy_8=[round(get_rf_accuracy(i,8)*100,3) for i in num_trees8]
print('Random Forest accuracies (as %) :' + str(rf_accuracy_8))
"""
Explanation: NOTE : The above code snippet takes about a minute to execute. Please wait patiently.
We see from the above plot that the accuracy of the model keeps increasing as we increase the number of trees in our Random Forest and eventually saturates at some value. Extrapolating the above plot qualitatively, the saturation value will be somewhere around 96.5%. The jump in accuracy from 86.48% for a single tree to 96.5% for a Random Forest with about 100 trees definitely highlights the importance of the Random Forest algorithm.
The inevitable question at this point is whether it is possible to achieve a higher accuracy saturation by working with a smaller (or larger) random feature subset size. Let us figure this out by repeating the above procedure for random subset sizes of 2 and 8.
End of explanation
"""
x2=[1]
y2=[86.48]
x2.extend(num_trees2)
y2.extend(rf_accuracy_2)
x8=[1]
y8=[86.48]
x8.extend(num_trees8)
y8.extend(rf_accuracy_8)
plt.plot(x2,y2,'--bo',label='Subset Size = 2')
plt.plot(x4,y4,'--r^',label='Subset Size = 4')
plt.plot(x8,y8,'--gs',label='Subset Size = 8')
plt.xlabel('Number of trees')
plt.ylabel('Multiclass Accuracy (as %) ')
plt.legend(bbox_to_anchor=(0.92,0.4))
plt.xlim([0,110])
plt.ylim([85,100])
plt.show()
"""
Explanation: NOTE : The above code snippets take about a minute each to execute. Please wait patiently.
Let us plot all the results together and then comprehend the results.
End of explanation
"""
rf=setup_random_forest(100,2,comb_rule,feat_types)
rf.put('labels', train_labels)
rf.train(train_feats)
# set evaluation strategy
eval=MulticlassAccuracy()
oobe=rf.get_oob_error(eval)
print('OOB accuracy : '+str(round(oobe*100,3))+'%')
"""
Explanation: As we can see from the above plot, the subset size does not have a major impact on the saturated accuracy obtained in this particular dataset. While this is true in many datasets, this is not a generic observation. In some datasets, the random feature sample size does have a measurable impact on the test accuracy. A simple strategy to find the optimal subset size is to use cross-validation. But with Random Forest model, there is actually no need to perform cross-validation. Let us see how in the next section.
Out-of-bag error
The individual trees in a Random Forest are trained on data vectors randomly chosen with replacement. As a result, some of the data vectors are left out of the training of each individual tree; these vectors form the out-of-bag (OOB) set of the corresponding tree. A data vector can belong to the OOB sets of multiple trees. While calculating the OOB error, a data vector is applied only to those trees whose OOB set it belongs to, and the results are combined. Averaging this combined result over the corresponding estimates for all other vectors gives the OOB error. The OOB error is an estimate of the generalization bound of the Random Forest model. Let us see how to compute this OOB estimate in Shogun.
End of explanation
"""
trainfeat_file= os.path.join(SHOGUN_DATA_DIR, 'uci/wine/fm_wine.dat')
trainlab_file= os.path.join(SHOGUN_DATA_DIR, 'uci/wine/label_wine.dat')
train_feats,train_labels=load_file(trainfeat_file,trainlab_file)
"""
Explanation: The OOB accuracy calculated above is found to be slightly less than the test accuracy evaluated in the previous section (see plot for num_trees=100 and rand_subset_size=2). This is because the OOB estimate depicts the expected accuracy for any generalized set of data vectors. It is only natural that for some sets of vectors the actual accuracy is slightly greater than the OOB estimate, while in other cases the observed accuracy is a bit lower.
Let us now apply the Random Forest model to the wine dataset. This dataset is different from the previous one in the sense that this dataset is small and has no separate test dataset. Hence OOB (or equivalently cross-validation) is the only viable strategy available here. Let us read the dataset first.
End of explanation
"""
import matplotlib.pyplot as plt
def get_oob_errors_wine(num_trees,rand_subset_size):
feat_types=array([False]*13)
rf=setup_random_forest(num_trees,rand_subset_size,MajorityVote(),feat_types)
rf.put('labels', train_labels)
rf.train(train_feats)
eval=MulticlassAccuracy()
return rf.get_oob_error(eval)
size=[1,2,4,6,8,10,13]
oobe=[round(get_oob_errors_wine(400,i)*100,3) for i in size]
print('Out-of-box Accuracies (as %) : '+str(oobe))
plt.plot(size,oobe,'--bo')
plt.xlim([0,14])
plt.xlabel('Random subset size')
plt.ylabel('Multiclass accuracy')
plt.show()
"""
Explanation: Next let us find out the appropriate feature subset size. For this we will make use of OOB error.
End of explanation
"""
size=[50,100,200,400,600]
oobe=[round(get_oob_errors_wine(i,2)*100,3) for i in size]
print('Out-of-box Accuracies (as %) : '+str(oobe))
plt.plot(size,oobe,'--bo')
plt.xlim([40,650])
plt.ylim([95,100])
plt.xlabel('Number of trees')
plt.ylabel('Multiclass accuracy')
plt.show()
"""
Explanation: From the above plot it is clear that a subset size of 2 or 3 produces maximum accuracy for wine classification. At this subset size, the expected classification accuracy of the model is 98.87%. Finally, as a sanity check, let us plot the accuracy vs. number of trees curve to ensure that 400 trees are indeed sufficient, i.e. that the OOB error saturates before 400.
End of explanation
"""
|
Xilinx/BNN-PYNQ | notebooks/CNV-QNN_Cifar10_Testset.ipynb | bsd-3-clause | import bnn
"""
Explanation: Cifar-10 testset classification on Pynq
This notebook covers how to use low quantized neural networks on Pynq.
It shows an example of how the CIFAR-10 test set can be classified using different-precision neural networks inspired by VGG-16, featuring 6 convolutional layers, 3 max pool layers and 3 fully connected layers. There are 3 different precisions available:
CNVW1A1 using 1 bit weights and 1 bit activation,
CNVW1A2 using 1 bit weights and 2 bit activation and
CNVW2A2 using 2 bit weights and 2 bit activation
1. Import the package
End of explanation
"""
#get
!wget https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz
#unzip
!tar -xf cifar-10-binary.tar.gz
labels = []
with open("/home/xilinx/jupyter_notebooks/bnn/cifar-10-batches-bin/test_batch.bin", "rb") as file:
#for 10000 pictures
for i in range(10000):
#read first byte -> label
labels.append(int.from_bytes(file.read(1), byteorder="big"))
#read image (3072 bytes) and do nothing with it
file.read(3072)
file.close()
"""
Explanation: 2. The Cifar-10 testset
This notebook requires the test set from https://www.cs.toronto.edu/~kriz/cifar.html, which contains 10000 images that can be processed by the CNV network directly without preprocessing.
You can download the cifar-10 set from the given URL via wget and unzip it to a folder on the Pynq as shown below.
This may take a while, as the training set is included in the archive as well.
After that we need to read the labels from the binary file to be able to compare the results later:
End of explanation
"""
hw_classifier = bnn.CnvClassifier(bnn.NETWORK_CNVW1A1,'cifar10',bnn.RUNTIME_HW)
"""
Explanation: 3. Start inference
The inference can be performed with different precisions for weights and activations. Creating a specific classifier will automatically download the correct bitstream onto the PL and load the weights and thresholds trained on the specific dataset.
Since the images are already Cifar-10 preformatted, no preprocessing is required. Therefore the functions classify_cifar or classify_cifars can be used. When classifying pictures that are not Cifar-10 formatted, refer to classify_image or classify_images (see Notebook CNV-QNN_Cifar10).
Case 1:
W1A1 - 1 bit weights and 1 bit activation
Instantiate the classifier:
End of explanation
"""
result_W1A1 = hw_classifier.classify_cifars("/home/xilinx/jupyter_notebooks/bnn/cifar-10-batches-bin/test_batch.bin")
time_W1A1 = hw_classifier.usecPerImage
"""
Explanation: And start the inference on multiple Cifar-10 preformatted images:
End of explanation
"""
hw_classifier = bnn.CnvClassifier(bnn.NETWORK_CNVW1A2,'cifar10',bnn.RUNTIME_HW)
result_W1A2 = hw_classifier.classify_cifars("/home/xilinx/jupyter_notebooks/bnn/cifar-10-batches-bin/test_batch.bin")
time_W1A2 = hw_classifier.usecPerImage
"""
Explanation: Case 2:
W1A2 - 1 bit weights and 2 bit activation
End of explanation
"""
hw_classifier = bnn.CnvClassifier(bnn.NETWORK_CNVW2A2,'cifar10',bnn.RUNTIME_HW)
result_W2A2 = hw_classifier.classify_cifars("/home/xilinx/jupyter_notebooks/bnn/cifar-10-batches-bin/test_batch.bin")
time_W2A2 = hw_classifier.usecPerImage
"""
Explanation: Case 3:
W2A2 - 2 bit weights and 2 bit activations
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
height = [time_W1A1, time_W1A2, time_W2A2]
bars = ('W1A1', 'W1A2', 'W2A2')
y_pos=range(3)
plt.bar(y_pos, height, 0.5)
plt.xticks(y_pos, bars)
plt.show()
"""
Explanation: 4. Summary
Inference time
Results can be visualized using matplotlib. Here the hardware execution times are compared, plotted in microseconds per image:
End of explanation
"""
#compare against labels
countRight = 0
for idx in range(len(labels)):
if labels[idx] == result_W1A1[idx]:
countRight += 1
accuracyW1A1 = countRight*100/len(labels)
countRight = 0
for idx in range(len(labels)):
if labels[idx] == result_W1A2[idx]:
countRight += 1
accuracyW1A2 = countRight*100/len(labels)
countRight = 0
for idx in range(len(labels)):
if labels[idx] == result_W2A2[idx]:
countRight += 1
accuracyW2A2 = countRight*100/len(labels)
print("Accuracy W1A1: ",accuracyW1A1,"%")
print("Accuracy W1A2: ",accuracyW1A2,"%")
print("Accuracy W2A2: ",accuracyW2A2,"%")
"""
Explanation: Accuracy
The accuracy on the test set can be calculated by comparing the inferred labels against the ones read at the beginning:
End of explanation
"""
from pynq import Xlnk
xlnk = Xlnk()
xlnk.xlnk_reset()
"""
Explanation: 6. Reset the device
End of explanation
"""
|
tvaught/compintro | 11_camera_intro.ipynb | bsd-3-clause | import os
from picamera import PiCamera
from picamera.color import Color
from time import sleep
camera = PiCamera()
camera.start_preview()
sleep(3)
camera.stop_preview()
"""
Explanation: Raspberry Pi Camera Test
First we import the libraries we need and initialize a camera 'object.'
End of explanation
"""
camera.hflip = True
camera.vflip = True
camera.brightness = 50 # the default is 50, but you can set it to whatever.
"""
Explanation: TADA ... wait, nothing happened.
End of explanation
"""
camera.annotate_foreground = Color(1.0,1.0,0.5)
camera.annotate_text = "STEM Camp ROCKS!"
camera.annotate_text_size = 36
camera.start_preview()
sleep(1)
camera.capture('./img/image_test.jpg')
camera.stop_preview()
"""
Explanation: How about some text on the image.
End of explanation
"""
camera.start_preview()
for i in range(5):
sleep(3)
camera.capture('./img/image%s.jpg' % i)
camera.stop_preview()
"""
Explanation: How about taking several shots...
End of explanation
"""
camera.resolution = (1280, 720)
camera.start_preview()
camera.start_recording('img/videotest.h264')
camera.wait_recording(3)
camera.stop_recording()
camera.stop_preview()
"""
Explanation: What about video?
End of explanation
"""
# convert the video above to something playable through the browser, then delete the original unplayable version.
msg = os.system("MP4Box -fps 30 -add img/videotest.h264 img/videotest2.mp4")
os.remove("img/videotest.h264")
print(msg)
"""
Explanation: This did generate a file, but the default format isn't playable through the browser, so...
Note: this did require the installation of gpac with:
sudo apt-get install -y gpac
End of explanation
"""
import datetime as dt
camera.resolution = (1280, 720)
camera.framerate = 24
camera.start_preview()
camera.annotate_background = Color('black')
camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
camera.start_recording('img/timestamped2.h264')
start = dt.datetime.now()
while (dt.datetime.now() - start).seconds < 5:
camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
camera.wait_recording(0.2)
camera.stop_recording()
# convert the video above to something playable through the browser, then delete the original unplayable version.
msg = os.system("MP4Box -fps 30 -add img/timestamped2.h264 img/timestamped2.mp4")
os.remove("img/timestamped2.h264")
"""
Explanation: Now let's add a calculated annotation to the recording (a timestamp)
End of explanation
"""
camera.close()
del camera
"""
Explanation: When we're all finished with the camera module, it's a good idea to close the object to prevent GPU memory leakage.
End of explanation
"""
|
pligor/predicting-future-product-prices | 04_time_series_prediction/.ipynb_checkpoints/15_price_history_seq2seq-native-with-last-input-as-decoder-input-checkpoint.ipynb | agpl-3.0 | from __future__ import division
import tensorflow as tf
from os import path
import numpy as np
import pandas as pd
import csv
from sklearn.model_selection import StratifiedShuffleSplit
from time import time
from matplotlib import pyplot as plt
import seaborn as sns
from mylibs.jupyter_notebook_helper import show_graph
from tensorflow.contrib import rnn
from tensorflow.contrib import learn
import shutil
from tensorflow.contrib.learn.python.learn import learn_runner
from mylibs.tf_helper import getDefaultGPUconfig
from sklearn.metrics import r2_score
from mylibs.py_helper import factors
from fastdtw import fastdtw
from scipy.spatial.distance import euclidean
from statsmodels.tsa.stattools import coint
from common import get_or_run_nn
from data_providers.price_history_seq2seq_data_provider import PriceHistorySeq2SeqDataProvider
from models.price_history_seq2seq_native import PriceHistorySeq2SeqNative
dtype = tf.float32
seed = 16011984
random_state = np.random.RandomState(seed=seed)
config = getDefaultGPUconfig()
%matplotlib inline
"""
Explanation: https://www.youtube.com/watch?v=ElmBrKyMXxs
https://github.com/hans/ipython-notebooks/blob/master/tf/TF%20tutorial.ipynb
https://github.com/ematvey/tensorflow-seq2seq-tutorials
End of explanation
"""
num_epochs = 10
num_features = 1
num_units = 400 #state size
input_len = 60
target_len = 30
batch_size = 47
#trunc_backprop_len = ??
"""
Explanation: Step 0 - hyperparams
vocab_size, i.e. all the potential words you could have (a classification over words in the translation case),
and the max sequence length are the SAME thing.
Decoder RNN hidden units are usually the same size as the encoder RNN hidden units in translation, but for our case there does not really seem to be such a relationship; we can experiment and find out later, as it is not a priority right now.
End of explanation
"""
npz_path = '../price_history_03_dp_60to30_from_fixed_len.npz'
dp = PriceHistorySeq2SeqDataProvider(npz_path=npz_path, batch_size=batch_size, with_EOS=False)
dp.inputs.shape, dp.targets.shape
aa, bb = dp.next()
aa.shape, bb.shape
"""
Explanation: Step 1 - collect data (and/or generate them)
End of explanation
"""
model = PriceHistorySeq2SeqNative(rng=random_state, dtype=dtype, config=config, with_EOS=False)
graph = model.getGraph(batch_size=batch_size,
num_units=num_units,
input_len=input_len,
target_len=target_len)
#show_graph(graph)
"""
Explanation: Step 2 - Build model
End of explanation
"""
model = PriceHistorySeq2SeqNative(rng=random_state, dtype=dtype, config=config, with_EOS=False)
rnn_cell = PriceHistorySeq2SeqNative.RNN_CELLS.BASIC_RNN
num_epochs = 20
num_epochs, num_units, batch_size
def experiment():
return model.run(
npz_path=npz_path,
epochs=num_epochs,
batch_size=batch_size,
num_units=num_units,
input_len = input_len,
target_len = target_len,
rnn_cell=rnn_cell,
)
dyn_stats, preds_dict = get_or_run_nn(experiment, filename='008_rnn_seq2seq_native_noEOS_60to30_20epochs')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
"""
Explanation: Conclusion
There is no way this graph makes much sense, but let's give it a try and see how bad it really is.
Step 3 - training the network
RECALL: the baseline Huber loss for the current problem is around 4; anything above 4 should be considered a major error.
Basic RNN cell (without EOS)
End of explanation
"""
rnn_cell = PriceHistorySeq2SeqNative.RNN_CELLS.GRU
num_epochs = 50
num_epochs, num_units, batch_size
def experiment():
return model.run(
npz_path=npz_path,
epochs=num_epochs,
batch_size=batch_size,
num_units=num_units,
input_len = input_len,
target_len = target_len,
rnn_cell=rnn_cell,
)
#dyn_stats = experiment()
dyn_stats, preds_dict = get_or_run_nn(experiment, filename='008_gru_seq2seq_native_noEOS_60to30_50epochs')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
"""
Explanation: Conclusion
The initial price difference of the predictions is still not as good as we would expect; perhaps using an EOS, as machine translation models do, is not the best architecture for our case
GRU cell - without EOS
End of explanation
"""
rnn_cell = PriceHistorySeq2SeqNative.RNN_CELLS.GRU
num_epochs = 50
num_units = 800
num_epochs, num_units, batch_size
def experiment():
return model.run(
npz_path=npz_path,
epochs=num_epochs,
batch_size=batch_size,
num_units=num_units,
input_len = input_len,
target_len = target_len,
rnn_cell=rnn_cell,
)
#dyn_stats = experiment()
dyn_stats, preds_dict = get_or_run_nn(experiment, filename='008_gru_seq2seq_native_noEOS_60to30_50epochs_800units')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
"""
Explanation: Conclusion
???
GRU cell - without EOS - 800 units
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.0/tutorials/plotting.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.0,<2.1"
"""
Explanation: Plotting
This tutorial explains the high-level interface to plotting provided by the Bundle. You are of course always welcome to access arrays and plot manually.
The default plotting backend in PHOEBE is matplotlib, and this tutorial will focus solely on matplotlib plots and will assume some familiarity with matplotlib and its terminology (ie axes, artists, subplots, etc).
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
"""
Explanation: This first line is only necessary for ipython notebooks - it allows the plots to be shown on this page instead of in interactive mode
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b['q'] = 0.8
b['ecc'] = 0.1
b['irrad_method'] = 'none'
"""
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
"""
b.add_dataset('orb', times=np.linspace(0,4,1000), dataset='orb01', component=['primary', 'secondary'])
times, fluxes, sigmas = np.loadtxt('test.lc.in', unpack=True)
b.add_dataset('lc', times=times, fluxes=fluxes, sigmas=sigmas, dataset='lc01')
"""
Explanation: And we'll attach some dummy datasets. See Datasets for more details.
End of explanation
"""
b.set_value('incl@orbit', 90)
b.run_compute(model='run_with_incl_90')
b.set_value('incl@orbit', 85)
b.run_compute(model='run_with_incl_85')
b.set_value('incl@orbit', 80)
b.run_compute(model='run_with_incl_80')
"""
Explanation: And run the forward models. See Computing Observables for more details.
End of explanation
"""
axs, artists = b.plot()
"""
Explanation: Showing and Saving
NOTE: in IPython notebooks calling plot will display directly below the call to plot. When not in IPython you have several options for viewing the figure:
call plt.show() after calling plot
use the returned axes and artist objects however you'd like
pass show=True to the plot method (same as calling plt.show())
pass save='myfilename' to the plot method (same as calling plt.savefig('myfilename'))
Default Plots
To see the options for plotting that are dataset-dependent see the tutorials on that dataset method:
ORB dataset
MESH dataset
LC dataset
RV dataset
By calling the plot method on the bundle (or any ParameterSet) without any arguments, a plot or series of subplots will be built based on the contents of that ParameterSet.
End of explanation
"""
axs, artists = b['orb@run_with_incl_80'].plot()
"""
Explanation: Any call to plot returns 2 lists - a list of the axes and a list of the artists that were drawn on those axes. Generally we won't need to do anything with these, but having them returned could come in handy if you want to manually edit those axes or artists before saving the image.
In this example with so many different models and datasets, it is quite simple to build a single plot by filtering the bundle and calling the plot method on the resulting ParameterSet.
End of explanation
"""
axs, artists = b['orb@run_with_incl_80'].plot(time=1.0)
"""
Explanation: Time (highlight and uncover)
The built-in plot method also provides convenience options to either highlight the interpolated point for a given time, or only show the dataset up to a given time.
Highlight
The highlight option is enabled by default so long as a time (or times) is passed to plot. It simply adds an extra marker at the sent time, interpolating in the synthetic model if necessary.
End of explanation
"""
axs, artists = b['orb@run_with_incl_80'].plot(time=1.0, highlight_marker='s', highlight_color='g', highlight_ms=20)
"""
Explanation: To change the style of the "highlighted" points, you can pass matplotlib recognized markers, colors, and markersizes to the highlight_marker, highlight_color, and highlight_ms keywords, respectively.
End of explanation
"""
axs, artists = b['orb@run_with_incl_80'].plot(time=1.0, highlight=False)
"""
Explanation: To disable highlighting, simply send highlight=False
End of explanation
"""
axs, artists = b['orb@run_with_incl_80'].plot(time=1.0, uncover=True)
"""
Explanation: Uncover
Uncover shows the observations or synthetic model up to the provided time and is disabled by default, even when a time is provided, but is enabled simply by providing uncover=True. There are no additional options available for uncover.
End of explanation
"""
axs, artists = b['primary@orb@run_with_incl_80'].plot()
axs, artists = b.plot(component='primary', kind='orb', model='run_with_incl_80')
axs, artists = b.plot('primary@orb@run_with_incl_80')
"""
Explanation: Selecting Datasets
In addition to filtering and calling plot on the resulting ParameterSet, plot can accept a twig or filter on any of the available parameter tags.
For this reason, any of the following give identical results:
End of explanation
"""
axs, artists = b.plot('primary@orb@run_with_incl_80', 'secondary@orb@run_with_incl_80')
"""
Explanation: An advantage to this last approach (providing a twig as a positional argument to the plot method) is that it can accept multiple positional arguments to plot from multiple datasets in a single call.
When these have the same dataset method, they will automatically be drawn to the same axes (by default).
End of explanation
"""
axs, artists = b['run_with_incl_80'].plot('primary@orb', 'lc01')
"""
Explanation: If the datasets have multiple dataset kinds, subplots will automatically be created.
End of explanation
"""
axs, artists = b['orb01@primary@run_with_incl_80'].plot(x='times', y='vxs')
"""
Explanation: Later we'll see how to customize the layout of these subplots in the figure and how to pass other plotting options.
Selecting Arrays
So far, each plotting call automatically chose default arrays from that dataset to plot along each axis. To override these defaults, simply point to the qualifier of the array that you'd like plotted along a given axis.
End of explanation
"""
b['orb01@primary@run_with_incl_80'].qualifiers
"""
Explanation: To see the list of available qualifiers that could be passed for x or y, call the qualifiers (or twigs) property on the ParameterSet.
End of explanation
"""
axs, artists = b['lc01@dataset'].plot(x='phases', yerrors=None)
"""
Explanation: For more information on each of the available arrays, see the relevant tutorial on that dataset method:
orb dataset
mesh dataset
lc dataset
rv dataset
Selecting Phase
And to plot in phase we just send x='phases' or x='phases:binary'.
Setting x='phases' will use the ephemeris from the top-level of the hierarchy
(as if you called b.get_ephemeris()), whereas passing a string after the colon,
will use the ephemeris of that component.
End of explanation
"""
axs, artists = b['orb01@primary@run_with_incl_80'].plot(xunit='AU', yunit='AU')
"""
Explanation: Units
Likewise, each array that is plotted is automatically plotted in its default units. To override these defaults, simply provide the unit (as a string or as a astropy units object) for a given axis.
End of explanation
"""
axs, artists = b['orb01@primary@run_with_incl_80'].plot(xlabel='X POS', ylabel='Z POS')
"""
Explanation: WARNING: when plotting two arrays with the same dimensions, PHOEBE attempts to set the aspect ratio to equal, but overriding to use two different units will result in undesired results. This may be fixed in the future, but for now can be avoided by using consistent units for the x and y axes when they have the same dimensions.
Axes Labels
Axes labels are automatically generated from the qualifier of the array and the plotted units. To override these defaults, simply pass a string for the label of a given axis.
End of explanation
"""
axs, artists = b['orb01@primary@run_with_incl_80'].plot(xlim=(-2,2))
"""
Explanation: Axes Limits
Axes limits are determined by the data automatically. To set custom axes limits, either use matplotlib methods on the returned axes objects, or pass limits as a list or tuple.
End of explanation
"""
axs, artists = b['lc01@dataset'].plot(yerrors='sigmas')
"""
Explanation: Errorbars
In the cases of observational data, errorbars can be added by passing the name of the column.
End of explanation
"""
axs, artists = b['lc01@dataset'].plot(yerrors=None)
"""
Explanation: To disable the errorbars, simply set yerrors=None.
End of explanation
"""
axs, artists = b['orb01@primary@run_with_incl_80'].plot(color='r')
"""
Explanation: Colors
Colors of points and lines, by default, cycle according to matplotlib's color policy. To manually set the color, simply pass a matplotlib recognized color to the color keyword.
End of explanation
"""
axs, artists = b['orb01@primary@run_with_incl_80'].plot(time=1.0, x='times', color='vzs')
"""
Explanation: In addition, you can point to an array in the dataset to use as color.
End of explanation
"""
axs, artists = b['orb01@primary@run_with_incl_80'].plot(time=1.0, x='times', color='vzs', cmap='spring')
"""
Explanation: Choosing colors works slightly differently for meshes (ie you can set facecolor and edgecolor and facecmap and edgecmap). For more details, see the tutorial on the mesh dataset.
Colormaps
The colormaps is determined automatically based on the parameter used for coloring (ie RVs will be a red-blue colormap). To override this, pass a matplotlib recognized colormap to the cmap keyword.
End of explanation
"""
axs, artists = b['orb@run_with_incl_80'].plot()
legend = plt.legend()
"""
Explanation: Labels and Legends
To add a legend, simply call plt.legend (for the current axes) or ax.legend on one of the returned axes.
For details on placement and formatting of the legend see matplotlib's documentation.
End of explanation
"""
axs, artists = b['primary@orb@run_with_incl_80'].plot(label='primary')
axs, artists = b['secondary@orb@run_with_incl_80'].plot(label='secondary')
legend = axs[0].legend()
"""
Explanation: The legend labels are generated automatically, but can be overriden by passing a string to the label keyword.
End of explanation
"""
axs, artists = b['orb01@primary@run_with_incl_80'].plot(linestyle=':', linewidth=4)
"""
Explanation: Other Plotting Options
Valid plotting options that are directly passed to matplotlib include:
- linestyle
- linewidth
- marker
- markersize
- color (see section above on Providing Colors)
End of explanation
"""
fig = plt.figure(figsize=(14,10))
ax = [fig.add_subplot(2,2,i+1) for i in range(4)]
axs, artists = b.plot('orb01@primary', y='ys', ax=ax[0])
ax[0].legend()
axs, artists = b['orb01@run_with_incl_80'].plot(y='ys', linestyle='--', time=5, uncover=True, ax=ax[1])
ax[1].legend()
axs, artists = b.plot(dataset='orb01', y='ys', ax=ax[2])
ax[2].legend()
axs, artists = b.plot(dataset='orb01', model='run_with_incl_80', x='times', y='vys', time=5, uncover=True, ax=ax[3])
ax[3].legend()
"""
Explanation: Custom Subplots
The plot method can both optionally take and return a matplotlib axes object.
This makes it quite easy to quickly build a figure with multiple subplots.
Below we'll mix a bunch of different ways to call plot, and mix in highlighting
and uncovering. The only real difference here from before is that we pass a
single matplotlib axes to the plot call - that is the axes on which all lines
will be drawn during that call, even if it loops and creates multiple lines.
The actual axes instance is returned, and we want to create the legend on that
axes.
End of explanation
"""
fig = plt.figure(figsize=(14,10))
ax = [fig.add_subplot(2,2,i+1) for i in range(4)]
plot1 = {'twig': 'orb01@primary', 'y': 'ys', 'ax':ax[0]}
plot2 = {'twig': 'orb01@run_with_incl_80', 'y': 'ys', 'linestyle': '--', 'time': 5, 'uncover': True, 'ax':ax[1]}
plot3 = {'twig': 'orb01', 'y': 'ys', 'ax': ax[2]}
plot4 = {'dataset': 'orb01', 'model': 'run_with_incl_80', 'x': 'times', 'y': 'vys', 'time': 5, 'uncover': True, 'ax': ax[3]}
axs, artists = b.plot(plot1, plot2, plot3, plot4)
for axi in ax:
axi.legend()
"""
Explanation: Alternatively, this can be done in a single call to plot by passing dictionaries as positional arguments. Each dictionary, in essence, is passed on to its own plot call.
End of explanation
"""
fig = plt.figure(figsize=(14,10))
ax = [fig.add_subplot(2,2,i+1) for i in range(4)]
plot1 = {'twig': 'orb01@primary', 'y': 'ys', 'ax':ax[0]}
plot2 = {'twig': 'orb01@run_with_incl_80', 'y': 'ys', 'linestyle': '--', 'time': 5, 'uncover': True, 'ax':ax[1]}
plot3 = {'twig': 'orb01', 'y': 'ys', 'ax': ax[2]}
plot4 = {'dataset': 'orb01', 'model': 'run_with_incl_80', 'x': 'times', 'y': 'vys', 'time': 5, 'uncover': True, 'ax': ax[3]}
axs, artists = b.plot(plot1, plot2, plot3, plot4, x='xs', y='zs', color='r')
for axi in ax:
axi.legend()
"""
Explanation: Note that now when passing additional arguments, those will apply as defaults to EACH of the dictionaries, but will not override any values explicitly provided in the dictionaries.
End of explanation
"""
figure = plt.figure()
ax = figure.add_subplot(111, projection='3d')
axes, artists = b['orb@run_with_incl_80'].plot(time=0, facecolor='teffs', edgecolor=None, ax=ax)
"""
Explanation: 3D Axes
To plot a in 3d, simply provide a 3D axes (or have it as your current axes available through plt.gca())
Here many of the same principles apply as above - you can change the array by pointing to a parameter for z, set zlim, zunit, zlabel, etc.
End of explanation
"""
|
w4zir/ml17s | assignments/.ipynb_checkpoints/assignment01-regression-checkpoint.ipynb | mit | %matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn import linear_model
import matplotlib.pyplot as plt
import matplotlib as mpl
# read house_train.csv data in pandas dataframe df_train using pandas read_csv function
df_train = pd.read_csv('datasets/house_price/train.csv', encoding='utf-8')
# check data by printing first few rows
df_train.head()
# check columns in dataset
df_train.columns
# check correlation matrix, darker means more correlation
corrmat = df_train.corr()
f, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(corrmat, vmax=.8, square=True);
# SalePrice correlation matrix with top k variables
k = 10 #number of variables for heatmap
cols = corrmat.nlargest(k, 'SalePrice')['SalePrice'].index
cm = np.corrcoef(df_train[cols].values.T)
sns.set(font_scale=1.25)
hm = sns.heatmap(cm, cbar=True, annot=True, square=True, fmt='.2f', annot_kws={'size': 10}, yticklabels=cols.values, xticklabels=cols.values)
plt.show()
#scatterplot with some important variables
cols = ['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt']
sns.set()
sns.pairplot(df_train[cols], size = 2.5)
plt.show();
"""
Explanation: CSAL4243: Introduction to Machine Learning
Muhammad Mudassir Khan (mudasssir.khan@ucp.edu.pk)
Assignment 1: Linear Regression
In this assignment you are going to learn how Linear Regression works by using the code for linear regression and gradient descent we have been looking at in the class. You are also going to use linear regression from scikit-learn library for machine learning. You are going to learn how to download data from kaggle (a website for datasets and machine learning) and upload submissions to kaggle competitions. And you will be able to compete with the world.
Overview
Pseudocode
Tasks
Load and analyze data
Task 1: Effect of Learning Rate $\alpha$
Load X and y
Linear Regression with Gradient Descent code
Run Gradient Descent on training data
Plot trained line on data
Task 2: Predict test data output and submit it to Kaggle
Upload .csv file to Kaggle.com
Task 3: Use scikit-learn for Linear Regression
Task 4: Multivariate Linear Regression
Resources
Credits
<br>
<br>
Pseudocode
Linear Regressio with Gradient Descent
Load training data into X_train and y_train
[Optionally] normalize features X_train using $x^i = \frac{x^i - \mu^i}{\rho^i}$ where $\mu^i$ is mean and $\rho^i$ is standard deviation of feature $i$
Initialize hyperparameters
iterations
learning rate $\alpha$
Initialize $\theta_s$
At each iteration
Compute cost using $J(\theta) = \frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$ where $h(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 .... + \theta_n x_n$
Update $\theta_s$ using $\begin{align} \; \; & \theta_j := \theta_j - \alpha \frac{1}{m} \sum\limits_{i=1}^{m} (h_\theta(x_{i}) - y_{i}) \cdot x^j_{i} \; & & \text{for j := 0...n} \end{align}$
[Optionally] Break if cost $J(\theta)$ does not change.
<br>
<br>
Download House Price dataset
The dataset you are going to use in this assignment is called House Prices, available at kaggle. To download the dataset go to the dataset's Data tab. Download 'train.csv', 'test.csv', 'data_description.txt' and 'sample_submission.csv.gz' files. 'train.csv' is used for training the model. 'test.csv' is used to test the model, i.e. generalization. 'data_description.txt' contains feature descriptions for the dataset. 'sample_submission.csv.gz' contains a sample of the submission file that you need to generate and submit to kaggle.
<br>
Tasks
Effect of Learning Rate $\alpha$
Predict test data output and submit it to Kaggle
Use scikit-learn for Linear Regression
Multivariate Linear Regression
Load and analyze data
End of explanation
"""
# Load X and y variables from pandas dataframe df_train
cols = ['GrLivArea']
X_train = np.array(df_train[cols])
y_train = np.array(df_train[["SalePrice"]])
# Get m = number of samples and n = number of features
m = X_train.shape[0]
n = X_train.shape[1]
# append a column of 1's to X for theta_0
X_train = np.insert(X_train,0,1,axis=1)
"""
Explanation: <br>
Task 1: Effect of Learning Rate $\alpha$
Use Linear Regression code below using X="GrLivArea" as input variable and y="SalePrice" as target variable. Use different values of $\alpha$ given in table below and comment on why they are useful or not and which one is a good choice.
$\alpha=0.000001$:
$\alpha=0.00000001$:
$\alpha=0.000000001$:
<br>
Load X and y
End of explanation
"""
iterations = 1500
alpha = 0.000000001 # change it and find what happens
def h(X, theta): #Linear hypothesis function
hx = np.dot(X,theta)
return hx
def computeCost(theta,X,y): #Cost function
"""
theta is an n- dimensional vector, X is matrix with n- columns and m- rows
y is a matrix with m- rows and 1 column
"""
#note to self: *.shape is (rows, columns)
return float((1./(2*m)) * np.dot((h(X,theta)-y).T,(h(X,theta)-y)))
#Actual gradient descent minimizing routine
def gradientDescent(X,y, theta_start = np.zeros((n+1,1))):
"""
theta_start is an n- dimensional vector of initial theta guess
X is input variable matrix with n- columns and m- rows. y is a matrix with m- rows and 1 column.
"""
theta = theta_start
j_history = [] #Used to plot cost as function of iteration
theta_history = [] #Used to visualize the minimization path later on
for meaninglessvariable in range(iterations):
tmptheta = theta
# append for plotting
j_history.append(computeCost(theta,X,y))
theta_history.append(list(theta[:,0]))
#Simultaneously updating theta values
for j in range(len(tmptheta)):
tmptheta[j] = theta[j] - (alpha/m)*np.sum((h(X,theta) - y)*np.array(X[:,j]).reshape(m,1))
theta = tmptheta
return theta, theta_history, j_history
"""
Explanation: Linear Regression with Gradient Descent code
End of explanation
"""
#Actually run gradient descent to get the best-fit theta values
initial_theta = np.zeros((n+1,1));
theta, theta_history, j_history = gradientDescent(X_train,y_train,initial_theta)
plt.plot(j_history)
plt.title("Convergence of Cost Function")
plt.xlabel("Iteration number")
plt.ylabel("Cost function")
plt.show()
"""
Explanation: Run Gradient Descent on training data
End of explanation
"""
# predict output for training data
hx_train= h(X_train, theta)
# plot it
plt.scatter(X_train[:,1],y_train)
plt.plot(X_train[:,1],hx_train[:,0], color='red')
plt.show()
"""
Explanation: Plot trained line on data
End of explanation
"""
# read data in pandas frame df_test and check first few rows
# write code here
df_test.head()
# check statistics of test data, make sure no data is missing.
print(df_test.shape)
df_test[cols].describe()
# Get X_test, no target variable (SalePrice) provided in test data. It is what we need to predict.
X_test = np.array(df_test[cols])
#Insert the usual column of 1's into the "X" matrix
X_test = np.insert(X_test,0,1,axis=1)
# predict test data labels i.e. y_test
predict = h(X_test, theta)
# save prediction as .csv file
pd.DataFrame({'Id': df_test.Id, 'SalePrice': predict[:,0]}).to_csv("predict1.csv", index=False)
"""
Explanation: <br>
Task 2: Predict test data output and submit it to Kaggle
In this task we will use the model trained above to predict "SalePrice" on the test data. The test data has all the input variables/features but no target variable. Our aim is to use the trained model to predict the target variable for the test data. This is called generalization, i.e. how well your model works on unseen data. The output, in the form "Id","SalePrice" in a .csv file, should be submitted to kaggle. Please provide your score on kaggle after this step as an image. It will be compared to the 5-feature Linear Regression later.
End of explanation
"""
from IPython.display import Image
Image(filename='images/asgn_01.png', width=500)
"""
Explanation: Upload .csv file to Kaggle.com
Create an account at https://www.kaggle.com
Go to https://www.kaggle.com/c/house-prices-advanced-regression-techniques/submit
Upload "predict1.csv" file created above.
Upload your score as an image below.
End of explanation
"""
# import scikit-learn linear model
from sklearn import linear_model
# get X and y
# write code here
# Create linear regression object
# write code here check link above for example
# Train the model using the training sets. Use fit(X,y) command
# write code here
# The coefficients
print('Intercept: \n', regr.intercept_)
print('Coefficients: \n', regr.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% np.mean((regr.predict(X_train) - y_train) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(X_train, y_train))
# read test X without 1's
# write code here
# predict output for test data. Use predict(X) command.
predict2 = # write code here
# remove negative sales by replacing them with zeros
predict2[predict2<0] = 0
# save prediction as predict2.csv file
# write code here
"""
Explanation: <br>
Task 3: Use scikit-learn for Linear Regression
In this task we are going to use the Linear Regression class from the scikit-learn library to train the same model. The aim is to move from understanding the algorithm to using an existing, well-established library. There is a Linear Regression example available on the scikit-learn website as well.
Use the scikit-learn linear regression class to train the model on df_train
Compare the parameters from scikit-learn linear_model.LinearRegression.coef_ to the $\theta_s$ from earlier.
Use the linear_model.LinearRegression.predict on test data and upload it to kaggle. See if your score improves. Provide screenshot.
Note: there is no need to append 1's to X_train. Scikit-learn linear regression has a parameter called fit_intercept that is enabled by default.
End of explanation
"""
# define columns ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt']
# write code here
# check features range and statistics. Training dataset looks fine as all features have the same count.
df_train[cols].describe()
# Load X and y variables from pandas dataframe df_train
# write code here
# Get m = number of samples and n = number of features
# write code here
#Feature normalizing the columns (subtract mean, divide by standard deviation)
#Store the mean and std for later use
#Note don't modify the original X matrix, use a copy
stored_feature_means, stored_feature_stds = [], []
Xnorm = np.array(X_train).copy()
for icol in range(Xnorm.shape[1]):
stored_feature_means.append(np.mean(Xnorm[:,icol]))
stored_feature_stds.append(np.std(Xnorm[:,icol]))
#Skip the first column if 1's
# if not icol: continue
#Faster to not recompute the mean and std again, just used stored values
Xnorm[:,icol] = (Xnorm[:,icol] - stored_feature_means[-1])/stored_feature_stds[-1]
# check data after normalization
pd.DataFrame(data=Xnorm,columns=cols).describe()
# Run Linear Regression from scikit-learn or code given above.
# write code here. Repeat from above.
# To predict output using ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt'] as input features.
# Check features range and statistics to see if there is any missing data.
# As you can see from count, "GarageCars" and "TotalBsmtSF" have 1 missing value each.
df_test[cols].describe()
# Replace missing value with the mean of the feature
df_test['GarageCars'] = df_test['GarageCars'].fillna((df_test['GarageCars'].mean()))
df_test['TotalBsmtSF'] = df_test['TotalBsmtSF'].fillna((df_test['TotalBsmtSF'].mean()))
df_test[cols].describe()
# read test X without 1's
# write code here
# predict using trained model
predict3 = # write code here
# replace any negative predicted saleprice by zero
predict3[predict3<0] = 0
# predict target/output variable for test data using the trained model and upload to kaggle.
# write code to save output as predict3.csv here
"""
Explanation: <br>
Task 4: Multivariate Linear Regression
Lastly use columns ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt'] and scikit-learn or the code given above to predict output on test data. Upload it to kaggle like earlier and see how much it improves your score.
Everything remains the same except that the dimensions of X change.
There might be some data missing from the test or train data that you can check using pandas.DataFrame.describe() function. Below we provide some helping functions for removing that data.
End of explanation
"""
|
steinam/teacher | jup_notebooks/datenbanken/Subselects_11FI3.ipynb | mit | %load_ext sql
%sql mysql://steinam:steinam@localhost/versicherung_complete
"""
Explanation: Subselects / Subqueries
To carry out a query, information is sometimes needed that first has to be fetched by a query of its own.
A subquery can stand in
for a single value
for a list
for a table
for a field
End of explanation
"""
%load_ext sql
"""
Explanation:
End of explanation
"""
%%sql
select Personalnummer, Name, Vorname
from Mitarbeiter
where Abteilung_ID =
( select ID from Abteilung
where Kuerzel = 'Schadensabwicklung' );
"""
Explanation: Subquery as a single value
List all employees of the department 'Schadensabwicklung' (claims settlement).
End of explanation
"""
%%sql
select Personalnummer, Name, Vorname
from Mitarbeiter
where Abteilung_ID =
( select ID from Abteilung
where Kuerzel = 'ScAb' );
"""
Explanation: Solution
End of explanation
"""
%%sql
SELECT ID, Datum, Ort, Schadenshoehe
from Schadensfall
where Schadenshoehe < (
select AVG(Schadenshoehe) from Schadensfall
);
"""
Explanation: Subqueries for aggregate functions
The results of aggregate functions are often needed in the WHERE clause.
Example:
Fetch the claims with a below-average claim amount.
Solution
Part 1: Compute the average claim amount over all claims.
Part 2: Use that result as the comparison value in the actual query.
End of explanation
"""
%%sql
select sf.ID, sf.Datum, sf.Schadenshoehe, EXTRACT(YEAR from
sf.Datum) AS Jahr
from Schadensfall sf
where ABS(Schadenshoehe - (
select AVG(sf2.Schadenshoehe)
from Schadensfall sf2
where YEAR(sf2.Datum) = YEAR(sf.Datum)
)
) <= 300;
"""
Explanation: Exercise
Find all claims whose amount deviates from the average claim amount of their year
by at most 300 €.
Solution
Part 1: Compute the average of all claims within a year.
Part 2: Fetch all claims whose amount lies within 'average plus/minus 300' for the year in question.
End of explanation
"""
%%sql
select ID, Kennzeichen, Fahrzeugtyp_ID as TypID
from Fahrzeug
where Fahrzeugtyp_ID in(
select ID
from Fahrzeugtyp
where Hersteller_ID = (
select ID
from Fahrzeughersteller
where Name = 'Volkswagen' ) );
"""
Explanation: Remark
This is a textbook example of how subqueries should not be used. For every single row,
a new subquery has to be started in the WHERE condition, with its own WHERE clause and average calculation. One of the JOIN variants would be much better.
Further possible solutions (Lutz, 13/14)
```mysql
select beschreibung, schadenshoehe
from schadensfall where
schadenshoehe <= (
select avg(schadenshoehe)
from schadensfall) + 300
and schadenshoehe >= (select avg(schadenshoehe)
from schadensfall) - 300
select beschreibung, schadenshoehe
from schadensfall where
schadenshoehe between (
select avg(schadenshoehe)
from schadensfall) - 300
and (select avg(schadenshoehe)
from schadensfall) + 300
select @average:=avg(schadenshoehe) from schadensfall;
select id from schadensfall where abs(schadenshoehe -
@average) <= 300;
```
Result as a list of several values
The result of a query can be used as a filter for the actual query.
Exercise
Find all vehicles of a given manufacturer.
Solution
Part 1: Fetch the ID of the desired manufacturer.
Part 2: Fetch all IDs from the Fahrzeugtyp table for this manufacturer ID.
Part 3: Fetch all vehicles that match this list of vehicle-type IDs.
End of explanation
"""
%%sql
select *
from Schadensfall
where ID in ( SELECT ID
from Schadensfall
where ( ABS(Schadenshoehe - (
select AVG(sf2.Schadenshoehe)
from Schadensfall sf2
where YEAR(sf2.Datum) = 2008
)
) <= 300 )
and ( YEAR(Datum) = 2008 )
);
"""
Explanation: Exercise
Return all information on the claims of 2008 whose amount deviates from the average 2008 claim amount by at most 300 €.
Solution
Part 1: Compute the average of all claims within 2008.
Part 2: Fetch the IDs of all claims whose amount lies within 'average plus/minus 300'.
Part 3: Fetch all remaining information for these IDs.
End of explanation
"""
%sql
SELECT sf.ID, sf.Datum, sf.Schadenshoehe, temp.Jahr,
temp.Durchschnitt
FROM Schadensfall sf,
( SELECT AVG(sf2.Schadenshoehe) AS Durchschnitt,
EXTRACT(YEAR FROM sf2.Datum) as Jahr
FROM Schadensfall sf2
group by EXTRACT(YEAR FROM sf2.Datum)
) temp
WHERE temp.Jahr = EXTRACT(YEAR FROM sf.Datum)
and ABS(Schadenshoehe - temp.Durchschnitt) <= 300;
"""
Explanation: Subquery as a stand-in for a table
The result of a query can be used in the main query wherever
a table is expected. The structure of this situation looks like this:
```mysql
SELECT <spaltenliste>
FROM <haupttabelle>,
(SELECT <spaltenliste>
FROM <zusatztabellen>
<weitere Bestandteile der Unterabfrage>
) <name>
<weitere Bestandteile der Hauptabfrage>
```
The subquery may in principle contain all SELECT components.
ORDER BY cannot be used meaningfully, because the result of the subquery is joined with the main table or
another table, so any ordering would be lost anyway.
A name must be given as a table alias; it is used as the result table in the main query.
Exercise
Find all claims whose amount deviates from the average claim amount of their year by at most 300 €.
Solution
Part 1: Collect all years and compute the average of all claims within each year.
Part 2: Fetch all claims whose amount lies within 'average plus/minus 300' for their respective year.
End of explanation
"""
%%sql
SELECT Fahrzeug.ID, Kennzeichen, Typen.ID As TYP, Typen.Bezeichnung
FROM Fahrzeug,
(SELECT ID, Bezeichnung
FROM Fahrzeugtyp
WHERE Hersteller_ID =
(SELECT ID
FROM Fahrzeughersteller
WHERE Name = 'Volkswagen' )
) Typen
WHERE Fahrzeugtyp_ID = Typen.ID;
"""
Explanation: The grouping compiles all years together with the average claim amounts (part 1 of the solution).
For part 2 of the solution, each claim's year and amount only has to be compared with the matching entry in the result table temp.
That is the essential difference and the decisive advantage over the other solutions: the
averages are compiled once and merely looked up; they do
not have to be recomputed (over and over again) for every row.
Exercise
Find all vehicles of a given manufacturer, including the type.
Part 1: Fetch the ID of the desired manufacturer.
Part 2: Fetch all IDs and descriptions from the Fahrzeugtyp table that belong to this manufacturer ID.
Part 3: Fetch all vehicles that belong to this list of vehicle-type IDs.
End of explanation
"""
%sql mysql://steinam:steinam@localhost/so_2016
%%sql
-- Original Roth
Select Kurs.KursID, Kursart.Bezeichnung,
Kurs.DatumUhrzeitBeginn,
((count(KundeKurs.KundenID)/Kursart.TeilnehmerMax) * 100) as Auslastung
from Kurs, Kursart, Kundekurs
where KundeKurs.KursID = Kurs.KursID and Kursart.KursartID = Kurs.KursartID
group by Kurs.KursID, Kurs.DatumUhrzeitBeginn, Kursart.Bezeichnung
having Auslastung < 50;
%%sql
select kursid from kurs
where
((select teilnehmerMax from kursart where kursart.kursartId = kurs.kursartId) * 0.5)
>
(count(KundeKurs.kundenid) where KundeKurs.KursID = kurs.KursID);
%%sql
Select Kurs.KursID, Kursart.Bezeichnung,
Kurs.DatumUhrzeitBeginn,
((count(KundeKurs.KundenID)/Kursart.TeilnehmerMax) * 100) as Auslastung
from Kurs, Kursart, Kundekurs
where KundeKurs.KursID = Kurs.KursID and Kursart.KursartID = Kurs.KursartID
group by Kurs.KursID, Kurs.DatumUhrzeitBeginn, Kursart.Bezeichnung
having Auslastung < 50
%%sql
Select Kurs.KursID, Kursart.Bezeichnung,
Kurs.DatumUhrzeitBeginn,
((count(KundeKurs.KundenID)/Kursart.TeilnehmerMax) * 100) as Auslastung
from kurs left join kundekurs
on kurs.`kursid` = kundekurs.`Kursid`
inner join kursart
on `kurs`.`kursartid` = `kursart`.`kursartid`
group by Kurs.KursID, Kurs.DatumUhrzeitBeginn, Kursart.Bezeichnung
having Auslastung < 50
"""
Explanation: Exercises
Which of the following statements are true, which are false?
The result of a subquery can be used if it is a single value or a list in the form of a table. Other results are not possible.
A single value as a result can be obtained by a direct query or by an aggregate function.
Subqueries should not be used when the WHERE condition takes a different value for each row of the main query, so that the subquery has to be re-executed every time.
Several subqueries can be nested.
For execution speed it makes no difference whether several subqueries or JOINs are used.
A subquery with a table as its result cannot make meaningful use of GROUP BY.
A subquery with a table as its result cannot make meaningful use of ORDER BY.
For a subquery with a table as its result, an alias name for the table is useful but not necessary.
For a subquery with a table as its result, alias names for the columns are useful but not necessary.
Which contracts (with some details) were concluded by the employee 'Braun, Christian'? Ignore the possibility that there might be several employees of this name.
Show all contracts that belong to the customer 'Heckel Obsthandel GmbH'. Ignore the possibility that the customer might be stored more than once.
Change the solution of exercise 3 so that several customers of this name are also conceivable as a result.
Show all vehicles that were involved in a claim in 2008.
Show all vehicle types (with ID, description, and manufacturer name) that were involved in a claim in 2008.
Find all vehicles of a given manufacturer, including the type.
For each employee of the department 'Vertrieb' (sales), show the first contract (with some details) that he or she concluded. The employee is to be shown with ID and name/first name.
Deutsche Post AG delivers a table PLZ_Aenderung with the following contents:
csv
ID PLZalt Ortalt PLZneu Ortneu
1 45658 Recklinghausen 45659 Recklinghausen
2 45721 Hamm-Bossendorf 45721 Haltern OT Hamm
3 45772 Marl 45770 Marl
4 45701 Herten 45699 Herten
Change the table Versicherungsnehmer so that, for all addresses where PLZ/Ort match PLZalt/Ortalt,
these values are replaced by PLZneu/Ortneu.
Hints: Restrict yourself to the change with ID=3. (The complete solution is only possible with
SQL programming.) This change file contains only fictitious data, no real changes.
Summer 2016
End of explanation
"""
|
dataewan/deep-learning | face_generation/dlnd_face_generation.ipynb | mit | data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
data_dir = '/input'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
"""
Explanation: Face Generation
In this project, you'll use generative adversarial networks to generate new images of faces.
Get the Data
You'll be using two datasets in this project:
- MNIST
- CelebA
Since the celebA dataset is complex and you're doing GANs in a project for the first time, we want you to test your neural network on MNIST before CelebA. Running the GANs on MNIST will allow you to see how well your model trains sooner.
If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".
End of explanation
"""
show_n_images = 25
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
"""
Explanation: Explore the Data
MNIST
As you're aware, the MNIST dataset contains images of handwritten digits. You can view the first number of examples by changing show_n_images.
End of explanation
"""
show_n_images = 25
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))
"""
Explanation: CelebA
The CelebFaces Attributes Dataset (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing show_n_images.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Preprocess the Data
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black and white images with a single [color channel](https://en.wikipedia.org/wiki/Channel_(digital_image%29) while the CelebA images have [3 color channels (RGB color channel)](https://en.wikipedia.org/wiki/Channel_(digital_image%29#RGB_Images).
Build the Neural Network
You'll build the components necessary to build a GANs by implementing the following functions below:
- model_inputs
- discriminator
- generator
- model_loss
- model_opt
- train
Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
"""
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
"""
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
"""
# TODO: Implement Function
inputs_real = tf.placeholder(tf.float32, shape=(None, image_width, image_height, image_channels), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return inputs_real, inputs_z, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Input
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
- Z input placeholder with rank 2 using z_dim.
- Learning rate placeholder with rank 0.
Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate)
End of explanation
"""
def discriminator(images, reuse=False):
"""
Create the discriminator network
:param image: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
"""
alpha = 0.2
keep_prob=0.8
with tf.variable_scope('discriminator', reuse=reuse):
# using 4 layer network as in DCGAN Paper
# Conv 1
conv1 = tf.layers.conv2d(images, 64, 5, 2, 'SAME', kernel_initializer=tf.contrib.layers.xavier_initializer())
lrelu1 = tf.maximum(alpha * conv1, conv1)
# Conv 2
conv2 = tf.layers.conv2d(lrelu1, 128, 5, 2, 'SAME', kernel_initializer=tf.contrib.layers.xavier_initializer())
batch_norm2 = tf.layers.batch_normalization(conv2, training=True)
lrelu2 = tf.maximum(alpha * batch_norm2, batch_norm2)
drop2 = tf.nn.dropout(lrelu2, keep_prob=keep_prob)
# Conv 3
conv3 = tf.layers.conv2d(drop2, 256, 5, 1, 'SAME', kernel_initializer=tf.contrib.layers.xavier_initializer())
batch_norm3 = tf.layers.batch_normalization(conv3, training=True)
lrelu3 = tf.maximum(alpha * batch_norm3, batch_norm3)
drop3 = tf.nn.dropout(lrelu3, keep_prob=keep_prob)
# Conv 4
conv4 = tf.layers.conv2d(drop3, 512, 5, 1, 'SAME', kernel_initializer=tf.contrib.layers.xavier_initializer())
batch_norm4 = tf.layers.batch_normalization(conv4, training=True)
lrelu4 = tf.maximum(alpha * batch_norm4, batch_norm4)
drop4 = tf.nn.dropout(lrelu4, keep_prob=keep_prob)
# Flatten
flat = tf.reshape(drop4, (-1, 7*7*512))
# Logits
logits = tf.layers.dense(flat, 1)
# Output
out = tf.sigmoid(logits)
return out, logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
"""
Explanation: Discriminator
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
End of explanation
"""
def generator(z, out_channel_dim, is_train=True):
"""
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
"""
alpha = 0.2
keep_prob=0.8
with tf.variable_scope('generator', reuse=False if is_train==True else True):
# Fully connected
fc1 = tf.layers.dense(z, 7*7*512)
fc1 = tf.reshape(fc1, (-1, 7, 7, 512))
fc1 = tf.maximum(alpha*fc1, fc1)
# Starting Conv Transpose Stack
deconv2 = tf.layers.conv2d_transpose(fc1, 256, 3, 1, 'SAME', kernel_initializer=tf.contrib.layers.xavier_initializer())
batch_norm2 = tf.layers.batch_normalization(deconv2, training=is_train)
lrelu2 = tf.maximum(alpha * batch_norm2, batch_norm2)
drop2 = tf.nn.dropout(lrelu2, keep_prob=keep_prob)
deconv3 = tf.layers.conv2d_transpose(drop2, 128, 3, 1, 'SAME', kernel_initializer=tf.contrib.layers.xavier_initializer())
batch_norm3 = tf.layers.batch_normalization(deconv3, training=is_train)
lrelu3 = tf.maximum(alpha * batch_norm3, batch_norm3)
drop3 = tf.nn.dropout(lrelu3, keep_prob=keep_prob)
deconv4 = tf.layers.conv2d_transpose(drop3, 64, 3, 2, 'SAME', kernel_initializer=tf.contrib.layers.xavier_initializer())
batch_norm4 = tf.layers.batch_normalization(deconv4, training=is_train)
lrelu4 = tf.maximum(alpha * batch_norm4, batch_norm4)
drop4 = tf.nn.dropout(lrelu4, keep_prob=keep_prob)
# Logits
logits = tf.layers.conv2d_transpose(drop4, out_channel_dim, 3, 2, 'SAME', kernel_initializer=tf.contrib.layers.xavier_initializer())
# Output
out = tf.tanh(logits)
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
"""
Explanation: Generator
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
End of explanation
"""
def model_loss(input_real, input_z, out_channel_dim):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
"""
g_model = generator(input_z, out_channel_dim)
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real) * 0.9)
)
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake))
)
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake))
)
return d_loss, g_loss
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)
"""
Explanation: Loss
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:
- discriminator(images, reuse=False)
- generator(z, out_channel_dim, is_train=True)
End of explanation
"""
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS, scope='generator')):
g_train_opt = tf.train.AdamOptimizer(learning_rate = learning_rate,beta1 = beta1).minimize(g_loss, var_list = g_vars)
return d_train_opt, g_train_opt
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
"""
Explanation: Optimization
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
"""
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
"""
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
"""
Explanation: Neural Network Training
Show Output
Use this function to show the current output of the generator during training. It will help you determine how well the GANs is training.
End of explanation
"""
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
"""
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
"""
tf.reset_default_graph()
input_real, input_z, _ = model_inputs(data_shape[1], data_shape[2], data_shape[3], z_dim)
d_loss, g_loss = model_loss(input_real, input_z, data_shape[3])
d_opt, g_opt = model_opt(d_loss, g_loss, learning_rate, beta1)
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
batch_images = batch_images * 2
steps += 1
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))
_ = sess.run(d_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_opt, feed_dict={input_z: batch_z})
if steps % 100 == 0:
train_loss_d = d_loss.eval({input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(epoch_i+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
_ = show_generator_output(sess, 1, input_z, data_shape[3], data_image_mode)
"""
Explanation: Train
Implement train to build and train the GANs. Use the following functions you implemented:
- model_inputs(image_width, image_height, image_channels, z_dim)
- model_loss(input_real, input_z, out_channel_dim)
- model_opt(d_loss, g_loss, learning_rate, beta1)
Use the show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and increase the size of the notebook. It's recommended to print the generator output every 100 batches.
End of explanation
"""
batch_size = 32
z_dim = 100
learning_rate = 0.0002
beta1 = 0.5
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
"""
Explanation: MNIST
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
End of explanation
"""
batch_size = 64
z_dim = 100
learning_rate = 0.0002
beta1 = 0.5
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
"""
Explanation: CelebA
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.
End of explanation
"""
print("Done")
"""
Explanation: Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as a HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
End of explanation
"""
borja876/Thinkful-DataScience-Borja | Capstone+Narrative+analytics+and+experimentation.ipynb | mit

import numpy as np
import pandas as pd
import scipy
import matplotlib.pyplot as plt
from scipy.stats import ttest_ind
from scipy import stats
import itertools
import seaborn as sns
%matplotlib inline
w = pd.read_csv('https://raw.githubusercontent.com/borja876/Thinkful-DataScience-Borja/master/Electricity%20Consumption.csv')
x = pd.read_csv('https://raw.githubusercontent.com/borja876/Thinkful-DataScience-Borja/master/GDP%20current%20prices.csv')
y = pd.read_csv('https://raw.githubusercontent.com/borja876/Thinkful-DataScience-Borja/master/Population.csv')
df = pd.DataFrame(w)
df1 = pd.DataFrame(x)
df2 = pd.DataFrame(y)
"""
Explanation: Capstone Project: Analytic Report & Research Proposal
Evolution of GDP and Household Electricity Consumption 2000-2014
End of explanation
"""
#Cleanse Data Frame Electricity Consumption:
#Delete rows without countries, rename columns and take out those that are not used
#Zeros are identified to see their potential impact on the data set and further results
dfa= df[:3206]
dfb= dfa.rename(columns={'Country or Area': 'Country', 'Quantity': 'Household Consumption (GWh)'})
dfc = dfb.drop(["Quantity Footnotes", "Unit", "Commodity - Transaction"], axis=1)
dfc['Year'] = dfc['Year'].astype(int)
#dfc.loc[dfc['Household Consumption (GWh)'] == 0]
#Cleanse Data Frame GDP Current Prices (USD):
#Cleanse Data Frame Electricity Consumption:
#Delete rows without countries, rename columns and take out those that are not used
#Zeros are identified to see their potential impact on the data set and further results
dfd= df1[:3322]
dfe= dfd.rename(columns={'Country or Area': 'Country', 'Value': 'GDP Current Prices (USD)'})
dfg = dfe.drop('Value Footnotes', axis=1)
dfg['Year'] = dfg['Year'].astype(int)
#dfg.loc[dfg['GDP Current Prices (USD)'] == 0]
#Cleanse Data Frame Population:
#Take out rows without countries
#Rename columns
#Clean columns taking out those that are not used
dfh= df2[:3522]
dfi= dfh.rename(columns={'Country or Area': 'Country', 'Value': 'Population'})
dfj = dfi.drop('Value Footnotes', axis=1)
dfj['Year'] = dfj['Year'].astype(int)
#dfj.loc[dfj['Population'] == 0]
#Merge data into a single dataset
#Cleanse new dataset
result = dfc.merge(dfg, left_on=["Country","Year"], right_on=["Country","Year"], how='outer')
result = result.merge(dfj, left_on=["Country","Year"], right_on=["Country","Year"], how='outer')
result = result.dropna()
#Rescale & rename variables in the new data set to GDP Current Prices (Million USD) & Population (Thousands)
result['GDP Current Prices (Million USD)']=result['GDP Current Prices (USD)']/1000000
result['Population (Thousands)']=result['Population']/1000
#Drop redundant columns after merging the data sets
result = result.drop('GDP Current Prices (USD)',1)
result = result.drop('Population',1)
result= result.rename(columns={'GDP Current Prices (USD)': 'GDP Current Prices (Million USD)', 'Population': 'Population (Thousands)'})
#Use population as a common ground to standardise GDP and household consumption to standardise both variables
result['Standard GDP Current Prices (USD)'] = (result['GDP Current Prices (Million USD)']*1000)/result['Population (Thousands)']
result['Standard Household Consumption (kWh)'] = (result['Household Consumption (GWh)']*1000)/result['Population (Thousands)']
"""
Explanation: Data sets description
The purpose of this report is to analyze the evolution of the household electricity market and of GDP per continent in the period 2000-2014. This will show how different economic cycles have affected both variables and the potential correlation between them. Furthermore, on a per-continent basis, both variables will be scrutinized to depict the similarities and differences between continents and the relationship between the two. Three data sets have been imported from the United Nations (from now onwards, UN) database. The data sets contain global information for the time span 2000-2014 regarding:
Electricity consumption for the household market (GWh). In this data set, the UN has estimated values for some countries (ex-colonies of the UK) based on their April-March consumption. This estimation was made up until 2004, when yearly electricity consumption was standardized to follow the natural year rather than the fiscal year. Electricity consumption in Iraq in 2014 has a null value, as it was not reported due to the war that started in 2015.
GDP per country in current USD. Of all the data sets available for measuring GDP, the one in current USD has been chosen to avoid the impact of the different base years used across countries when deflating, and to avoid the use of different exchange rates across countries during the time span under analysis.
Population. In this case, population has been converted to thousands to make the standardized GDP and electricity consumption figures significant. This variable shows the net population at the end of the year, considering births, deaths and declared migration.
All three data sets are significant because, although other data sets containing this information exist, only the UN gathers all of them. They are consistent in terms of the methods used to gather the information and credible given the institution providing them. The three variables can be compared against each other, reducing the bias that may exist in the data due to different data-gathering techniques across countries.
Electricity consumption and GDP are two metrics that measure the wealth and wellbeing of a country. Their evolution over the years can show where an economy has experienced a slowdown and a loss of welfare.
The chosen time span neutralizes the effect of the disappearance of the USSR, which reduces the distortion of the obtained results. Furthermore, the evolution of these two variables before the year 2000 is not representative for predicting the future wealth and wellbeing of a country or continent, or its economic slowdown. The main reason is that the way this information is gathered has changed at a macroeconomic level in both cases, and base years have been adjusted in all countries to the year 2000.
Both variables, electricity and GDP, have been analyzed after standardizing them using population as the common factor. Additionally, they have been rescaled to kWh (electricity consumption) and million USD (GDP) to make them comparable. Household electricity consumption per individual and GDP per individual at current prices are better proxies of the welfare of a country, and scaling issues disappear.
End of explanation
"""
#Import list of countries per continent from external data set and clean the columns that will not be used.
countries = pd.read_json('https://raw.githubusercontent.com/borja876/Thinkful-DataScience-Borja/master/countries.txt')
countries =countries.drop (['capital','code','timezones'],1)
#Merge both data sets for have a complete working data set
result = result.merge(countries, left_on=["Country"], right_on=["name"], how='inner')
result = result.drop('name',1)
result = result.rename(columns={'continent': 'Continent'})
"""
Explanation: A list of countries per continent has been imported from a different source (https://gist.github.com/pamelafox/986163). The use of this list aims to group by continent the information provided by the UN at a country level. This raised additional difficulties due to the use of different names between sources for several countries, for example: Russian Federation vs. Russia, Netherlands vs. The Netherlands, etc. Moreover, some countries identified in the original data set are not included in this list; these have been added for completeness. The aim of this addition is to have all continents accurately represented. America has been split into North and South America to have a detailed view of the evolution of both regions independently and to avoid distortion. The final continents used are:
Africa
Oceania
North America
South America
Europe
Asia
End of explanation
"""
result.head()
"""
Explanation: After checking that all countries considered by the UN are captured in the external list of countries used to group them by continent, the final data set is created. The following table shows the first five rows of the final data set that will be used for the purpose of this report:
End of explanation
"""
summary = result.describe().astype(int)
summary.drop('Year',1)
v = result.var()
v1 = pd.DataFrame(v, columns=['Var'])
w=result.skew()
w1 = pd.DataFrame(w, columns=['Skew'])
ww=result.kurt()
ww1 = pd.DataFrame(ww, columns=['kurt'])
df55 = v1.assign(Skew=w1.values, Kurt=ww1.values)
df56=df55.transpose()
frames = [summary, df56]
summaryb = pd.concat(frames).drop('Year',1)
summaryb
"""
Explanation: Exploratory Data Analysis
The following summary statistics have been computed. The table below shows the difference between extreme values for all variables, ranging from zero/tens to millions for electricity consumption and GDP. Even once standardized, variances remain high. This recommends the use of the median instead of the mean to avoid the effect of these extreme values. The maximum values come from the United States (North America), whose figures are comparable to the whole of Europe, while the minimum (zero) corresponds to Iraq in 2014.
The values of skewness and kurtosis show that the variables cannot be following a normal distribution.
End of explanation
"""
print(ttest_ind(result['Household Consumption (GWh)'], result['GDP Current Prices (Million USD)'], equal_var=False))
print(ttest_ind(result['Standard Household Consumption (kWh)'], result['Standard GDP Current Prices (USD)'], equal_var=False))
#Show the prices with a stripplot
plt.figure(figsize=(20, 5))
plt.subplot(1, 2, 1)
sns.stripplot(x="Continent", y="Standard Household Consumption (kWh)", data=result, jitter=True)
plt.title('Standard Household Consumption (kWh)')
plt.subplot(1, 2, 2)
sns.stripplot(x="Continent", y="Standard GDP Current Prices (USD)", data=result, jitter=True)
plt.title('Standard GDP Current Prices (USD)')
"""
Explanation: A t-test has been conducted showing that the difference in means between the standard electricity consumption and the standard GDP is truly because the populations are different, and not due to variability. As both original variables represent different populations, the standardised ones follow the same principle (as can be deduced from the obtained p-values).
End of explanation
"""
#Distribution of the variable price
plt.figure(figsize=(20, 5))
plt.subplot(1, 5, 1)
sns.distplot(result['Household Consumption (GWh)'])
plt.xlim([0, 500000])
plt.title('Household Consumption (GWh)')
plt.subplot(1, 5, 2)
sns.distplot(result['GDP Current Prices (Million USD)'])
plt.title('GDP Current Prices (Million USD)')
plt.subplot(1, 5, 3)
sns.distplot(result['Population (Thousands)'])
plt.xlim([0, 500000])
plt.title('Population (Thousands)')
plt.subplot(1, 5, 4)
sns.distplot(result['Standard GDP Current Prices (USD)'])
plt.xlim([0, 70000])
plt.title('Standard GDP Current Prices (USD p. Capita)')
plt.subplot(1, 5, 5)
sns.distplot(result['Standard Household Consumption (kWh)'])
plt.xlim([0, 6000])
plt.title('Standard Household Consumption (kWh)')
plt.tight_layout()
plt.show()
#Density regions for the relationship between availability and price
sns.jointplot("Standard Household Consumption (kWh)", "Standard GDP Current Prices (USD)", data=result,kind="reg", fit_reg=True)
"""
Explanation: Initial exploration of the data set per continent shows that Europe and Asia are the continents with the greatest variance in both Standard Household Consumption and Standard GDP Current Prices (USD). There is no evidence yet of any kind of correlation between both variables. As can be seen in the graph below, none of the variables follows a normal distribution. All are skewed towards lower values, showing different levels of dispersion. This could be due to the differences between countries and years in each continent, both in terms of GDP and electricity consumption.
End of explanation
"""
g = sns.lmplot(x="Standard Household Consumption (kWh)", y="Standard GDP Current Prices (USD)", hue="Continent",
truncate=True, size=10, data=result)
# Use more informative axis labels than are provided by default
g.set_axis_labels("Standard Household Consumption (kWh)", "Standard GDP Current Prices (USD)")
"""
Explanation: At a global level it seems that there is a high correlation between household electricity consumption and GDP. Different policies have been in place around the world during the period under analysis to reduce electricity consumption. Furthermore, appliances of all sorts have evolved to be more efficient in terms of electricity consumption. Nevertheless, there is still a strong correlation between both variables.
End of explanation
"""
sns.lmplot(x="Standard Household Consumption (kWh)", y="Standard GDP Current Prices (USD)", col="Continent", hue="Continent", data=result,
col_wrap=2, fit_reg= True, palette="muted", size=6,
scatter_kws={"s": 50, "alpha": 1})
"""
Explanation: A closer inspection shows different levels of relationship between both variables. North America is the continent with the strongest correlation, followed by Europe and Asia. In all three cases, different energy-saving policies were introduced between 2000 and 2014, at different stages and focusing on different aspects of household demand. It seems that Northern-hemisphere countries have a stronger reliance on electricity to grow their economies, and that electricity-saving policies are not having the desired outcome, or are at least lagging behind the expected results.
End of explanation
"""
hannorein/reboundx | ipython_examples/GettingStartedParameters.ipynb | gpl-3.0

import rebound
import reboundx
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1.)
ps = sim.particles
rebx = reboundx.Extras(sim)
gr = rebx.load_force('gr')
rebx.add_force(gr)
"""
Explanation: Adding Parameters With REBOUNDx
We start by creating a simulation, attaching REBOUNDx, and adding the effects of general relativity:
End of explanation
"""
ps[1].params['primary'] = 1
gr.params['c'] = 3.e8
"""
Explanation: The documentation page https://reboundx.readthedocs.io/en/latest/effects.html lists the various required and optional parameters that need to be set for each effect in REBOUNDx. Adding these parameters to particles, forces and operators is easy. We do it through the params attribute:
End of explanation
"""
sim.integrate(10.)
gr.params['c']
"""
Explanation: We would now sim.integrate as usual. If we want, we can access these values later (e.g., some effects could update these values as the simulation progresses). Here they don't:
End of explanation
"""
speed = 5
gr.params['c'] = speed
"""
Explanation: Details
For simple types (ints and floats), assigning variables to parameters makes a copy of the value. For example:
End of explanation
"""
speed = 10
gr.params['c']
"""
Explanation: If we now update speed, this will not be reflected in our 'c' parameter:
End of explanation
"""
ps[1].params['force'] = gr
"""
Explanation: More complicated objects are assigned as pointers. For example, adding REBOUNDx structures like forces works out of the box. As a simple example (with no meaning whatsoever):
End of explanation
"""
gr.params['c'] = 10
newgr = ps[1].params['force']
newgr.params['c']
"""
Explanation: Now if we update gr, the changes will be reflected in the 'force' parameter:
End of explanation
"""
try:
    waterfrac = ps[1].params['waterfrac']
except:
    print('No water on this planet')
"""
Explanation: If the parameter doesn't exist, REBOUNDx will raise an exception, which we can catch and handle however we want.
End of explanation
"""
try:
    gr.params['q'] = 7
except AttributeError as e:
    print(e)
"""
Explanation: Adding Your Own Parameters
In order to go back and forth between Python and C, REBOUNDx keeps a list of registered parameter names with their corresponding types. This list is compiled from all the parameters used by the various forces and operators in REBOUNDx listed here: https://reboundx.readthedocs.io/en/latest/effects.html.
If you try to add one that's not on the list, it will complain:
End of explanation
"""
from reboundx.extras import REBX_C_PARAM_TYPES
REBX_C_PARAM_TYPES
"""
Explanation: You can register the name permanently on the C side, but can also do it from Python. You must pass a name along with one of the C types:
End of explanation
"""
rebx.register_param("q", "REBX_TYPE_DOUBLE")
gr.params['q'] = 7
gr.params['q']
"""
Explanation: For example, say we want a double:
End of explanation
"""
from ctypes import *
class SPH_sim(Structure):
    _fields_ = [("dt", c_double),
                ("Nparticles", c_int)]
my_sph_sim = SPH_sim()
my_sph_sim.dt = 0.1
my_sph_sim.Nparticles = 10000
"""
Explanation: Custom Parameters
You can also add your own more complicated custom types (for example from another library) straightforwardly, with a couple of caveats. First, the object must be wrapped as a ctypes object in order to communicate with the REBOUNDx C library, e.g.
End of explanation
"""
rebx.register_param("sph", "REBX_TYPE_POINTER")
gr.params['sph'] = my_sph_sim
"""
Explanation: We also have to register it as a generic POINTER:
End of explanation
"""
mysph = gr.params['sph']
mysph = cast(mysph, POINTER(SPH_sim)).contents
mysph.dt
"""
Explanation: Now when we get the parameter, REBOUNDx does not know how to cast it. You get a ctypes.c_void_p object back, which you have to manually cast to the Structure class we've created. See the ctypes library documentation for details:
End of explanation
"""
kayzhou22/DSBiz_Project_LendingClub | Data_Preprocessing/Collaboration-appLoan_DataProcessing.ipynb | mit

import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import RFE
from sklearn.svm import SVR
from sklearn.svm import LinearSVC
from sklearn.svm import LinearSVR
import seaborn as sns
import matplotlib.pylab as pl
%matplotlib inline
"""
Explanation: Lending Club Default Rate Analysis
End of explanation
"""
df_app_2015 = pd.read_csv('LoanStats3d_securev1.csv.zip', compression='zip',header=1, skiprows=[-2,-1],low_memory=False)
df_app_2015.head(3)
# Pre-select columns (.loc replaces the deprecated .ix indexer)
df = df_app_2015.loc[:, ['loan_status', 'loan_amnt', 'int_rate', 'sub_grade',
                         'purpose',
                         'annual_inc', 'emp_length', 'home_ownership',
                         'fico_range_low', 'fico_range_high',
                         'num_actv_bc_tl', 'tot_cur_bal', 'mort_acc', 'num_actv_rev_tl',
                         'pub_rec_bankruptcies', 'dti']]
"""
Explanation: Columns of Interest
loan_status -- Current status of the loan<br/>
loan_amnt -- The listed amount of the loan applied for by the borrower. If at some point in time, the credit department reduces the loan amount, then it will be reflected in this value.<br/>
int_rate -- interest rate of the loan <br/>
sub_grade -- LC assigned sub loan grade -- dummy (grade -- LC assigned loan grade -- dummy)<br/>
purpose -- A category provided by the borrower for the loan request. -- dummy<br/>
annual_inc -- The self-reported annual income provided by the borrower during registration.<br/>
emp_length -- Employment length in years. Possible values are between 0 and 10 where 0 means less than one year and 10 means ten or more years. -- dummy<br/>
fico_range_low<br/>
fico_range_high<br/>
home_ownership -- The home ownership status provided by the borrower during registration or obtained from the credit report. Our values are: RENT, OWN, MORTGAGE, OTHER -- dummy<br/>
tot_cur_bal -- Total current balance of all accounts
num_actv_bc_tl -- number of active bank accounts (avg_cur_bal -- average current balance of all accounts )<br/>
mort_acc -- number of mortgage accounts<br/>
num_actv_rev_tl -- Number of currently active revolving trades<br/>
dti -- A ratio calculated using the borrower’s total monthly debt payments on the total debt obligations, excluding mortgage and the requested LC loan, divided by the borrower’s self-reported monthly income.
pub_rec_bankruptcies - Number of public record bankruptcies<br/>
2015 Lending Club Data
1. Approved Loans
End of explanation
"""
## in Nehal and Kay's notebooks
"""
Explanation: 1. Data Understanding -- Selected Descriptive Analysis
End of explanation
"""
df_app_2015.tail(3)
df.head(3)
df.loan_status.unique()
df = df.dropna()
len(df)
#df.loan_status.fillna('none', inplace=True) ## there is no nan
df.loan_status.unique()
defaulters=['Default','Charged Off', 'Late (31-120 days)']
non_defaulters=['Fully Paid']
uncertain = ['Current','Late (16-30 days)','In Grace Period', 'none']
len(df[df.loan_status.isin(uncertain)].loan_status)
df.info()
## select instances of defaulters and non_defaulters
df2 = df.copy()
df2['Target']= 2 ## uncertain
df2.loc[df2.loan_status.isin(defaulters),'Target'] = 0 ## defaulters
df2.loc[df2.loan_status.isin(non_defaulters),'Target'] = 1 ## paid -- (and to whom to issue the loan)
print('Value in Target value for non defaulters')
print(df2.loc[df2.loan_status.isin(non_defaulters)].Target.unique())
print(len(df2[df2['Target'] == 1]))
print('Value in Target value for defaulters')
print(df2.loc[df2.loan_status.isin(defaulters)].Target.unique())
print(len(df2[df2['Target'] == 0]))
print('Value in Target value for uncertain -- unlabeled ones to predict')
print(df2.loc[df2.loan_status.isin(uncertain)].Target.unique())
print(len(df2[df2['Target'] == 2]))
42302/94968  # ratio of defaulters to fully paid loans
"""
Explanation: 2. Data Munging
Functions that perform data mining tasks
1a. Create column “default” using “loan_status”
Valentin (edited by Kay)
End of explanation
"""
# function to create dummies
def create_dummies(column_name, df):
    temp = pd.get_dummies(df[column_name], prefix=column_name)
    df = pd.concat([df, temp], axis=1)
    return df
dummy_list=['emp_length','home_ownership','purpose','sub_grade']
for col in dummy_list:
    df2 = create_dummies(col, df2)
for col in dummy_list:
    df2 = df2.drop(col, 1)
temp=df2['int_rate'].astype(str).str.replace('%', '').replace(' ','').astype(float)
df2=df2.drop('int_rate',1)
df2=pd.concat([df2,temp],axis=1)
df2=df2.drop('loan_status',1)
for col in df2.columns:
    print(df2[col].dtype)
"""
Explanation: 2a. Convert data type on certain columns and create dummies
Nehal
End of explanation
"""
df2.shape
df2['loan_amnt'][sorted(np.random.randint(0, high=10, size=5))]
# Reference:
# http://stackoverflow.com/questions/22354094/pythonic-way-of-detecting-outliers-in-one-dimensional-observation-data
def main(df, col, thres):
    ind = sorted(np.random.randint(0, high=len(df), size=5000))  # randomly pick instances from the dataframe
    # select data from our dataframe
    x = df[col][ind]
    num = len(ind)
    outliers = plot(x, col, num, thres)  # collect all the outliers
    pl.show()
    return outliers

def mad_based_outlier(points, thresh):
    if len(points.shape) == 1:
        points = points[:, None]
    median = np.median(points, axis=0)
    diff = np.sum((points - median)**2, axis=-1)
    diff = np.sqrt(diff)
    med_abs_deviation = np.median(diff)
    modified_z_score = 0.6745 * diff / med_abs_deviation
    return modified_z_score > thresh

def plot(x, col, num, thres):
    fig, ax = pl.subplots(nrows=1, figsize=(10, 3))
    sns.distplot(x, ax=ax, rug=True, hist=False)
    outliers = np.asarray(x[mad_based_outlier(x, thres)])
    ax.plot(outliers, np.zeros_like(outliers), 'ro', clip_on=False)
    fig.suptitle('MAD-based Outlier Tests with {} selected {} values'.format(num, col), size=20)
    return outliers
### Find outliers
##
boundries = []
outliers_loan = main(df2, 'loan_amnt', thres=2.2)
boundries.append(outliers_loan.min())
## annual income
outliers_inc = main(df2, 'annual_inc', 8)
boundries.append(outliers_inc.min())
## For total current balance of bank accounts
outliers_bal = main(df2, 'tot_cur_bal', 8)
boundries.append(outliers_bal.min())
columns = ['loan_amnt', 'annual_inc', 'tot_cur_bal']
for col, bound in zip(columns, boundries):
    print('Lower bound of detected outliers for {}: {}'.format(col, bound))
    # Use the outlier boundary to "regularize" the dataframe
    df2_r = df2[df2[col] <= bound]
"""
Explanation: 3a. Check and remove outliers (methods: MAD)
End of explanation
"""
# df2_r.info()
df2_r.shape
#### Fill NaN with "none"??? ####
#df_filled = df2.fillna(value='none')
#df_filled.head(3)
df2_r = df2_r.dropna()
print (len(df2_r))
"""
Explanation: 4a. Remove or replace missing values of certain columns
End of explanation
"""
# df2_r.to_csv('approved_loan_2015_clean.csv')
"""
Explanation: 6. Save the cleaned data
End of explanation
"""
relf/smt | tutorial/SMT_EGO_application.ipynb | bsd-3-clause
import numpy as np
%matplotlib notebook
import matplotlib.pyplot as plt
plt.ion()
def fun(point):
    return np.atleast_2d((point - 3.5) * np.sin((point - 3.5) / np.pi))
X_plot = np.atleast_2d(np.linspace(0, 25, 10000)).T
Y_plot = fun(X_plot)
lines = []
fig = plt.figure(figsize=[5,5])
ax = fig.add_subplot(111)
true_fun, = ax.plot(X_plot,Y_plot)
lines.append(true_fun)
ax.set_title('$x \sin{x}$ function')
ax.set_xlabel('x')
ax.set_ylabel('y')
plt.show()
#dimension of the problem
ndim = 1
"""
Explanation: <div class="jumbotron text-left"><b>
This tutorial describes how to use the SMT toolbox to do some Bayesian Optimization (EGO method) to solve unconstrained optimization problems
</b></div>
Rémy Priem and Nathalie BARTOLI ONERA/DTIS/M2CI - April 2020
<p class="alert alert-success" style="padding:1em">
To use SMT models, please follow this link : https://github.com/SMTorg/SMT/blob/master/README.md. The documentation is available here: http://smt.readthedocs.io/en/latest/
</p>
The reference paper is available
here https://www.sciencedirect.com/science/article/pii/S0965997818309360?via%3Dihub
or as a preprint: http://mdolab.engin.umich.edu/content/python-surrogate-modeling-framework-derivatives
<div class="alert alert-info fade in" id="d110">
<p>In this notebook, two examples are presented to illustrate Bayesian Optimization</p>
<ol> - a 1D example (xsinx function) where the algorithm is explicitly given and the use of different criteria is presented </ol>
<ol> - a 2D example (Rosenbrock function) where the EGO algorithm from SMT is used </ol>
</div>
# Bayesian Optimization
End of explanation
"""
x_data = np.atleast_2d([0,7,25]).T
y_data = fun(x_data)
"""
Explanation: Here, the training data are the points x_data = [0, 7, 25].
End of explanation
"""
from smt.surrogate_models import KPLS, KRG, KPLSK
########### The Kriging model
# The variable 'theta0' is a list of length ndim.
t = KRG(theta0=[1e-2]*ndim,print_prediction = False, corr='squar_exp')
#Training
t.set_training_values(x_data,y_data)
t.train()
# Prediction of the points for the plot
Y_GP_plot = t.predict_values(X_plot)
Y_GP_plot_var = t.predict_variances(X_plot)
fig = plt.figure(figsize=[5,5])
ax = fig.add_subplot(111)
true_fun, = ax.plot(X_plot,Y_plot)
data, = ax.plot(x_data,y_data,linestyle='',marker='o')
gp, = ax.plot(X_plot,Y_GP_plot,linestyle='--',color='g')
sig_plus = Y_GP_plot+3*np.sqrt(Y_GP_plot_var)
sig_moins = Y_GP_plot-3*np.sqrt(Y_GP_plot_var)
un_gp = ax.fill_between(X_plot.T[0],sig_plus.T[0],sig_moins.T[0],alpha=0.3,color='g')
lines = [true_fun,data,gp,un_gp]
ax.set_title('$x \sin{x}$ function')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.legend(lines,['True function','Data','GPR prediction','99 % confidence'])
plt.show()
"""
Explanation: Build the GP model with a squared exponential kernel using the SMT toolbox, given $(x_{data}, y_{data})$.
End of explanation
"""
from scipy.stats import norm
from scipy.optimize import minimize
def EI(GP, points, f_min):
    pred = GP.predict_values(points)
    var = GP.predict_variances(points)
    if var.size == 1 and var == 0.0:  # a single already-sampled point: zero variance, zero EI
        return 0.0                    # (guard moved before the division by sqrt(var))
    args0 = (f_min - pred) / np.sqrt(var)
    args1 = (f_min - pred) * norm.cdf(args0)
    args2 = np.sqrt(var) * norm.pdf(args0)
    ei = args1 + args2
    return ei
Y_GP_plot = t.predict_values(X_plot)
Y_GP_plot_var = t.predict_variances(X_plot)
Y_EI_plot = EI(t,X_plot,np.min(y_data))
fig = plt.figure(figsize=[10,10])
ax = fig.add_subplot(111)
true_fun, = ax.plot(X_plot,Y_plot)
data, = ax.plot(x_data,y_data,linestyle='',marker='o')
gp, = ax.plot(X_plot,Y_GP_plot,linestyle='--',color='g')
sig_plus = Y_GP_plot+3*np.sqrt(Y_GP_plot_var)
sig_moins = Y_GP_plot-3*np.sqrt(Y_GP_plot_var)
un_gp = ax.fill_between(X_plot.T[0],sig_plus.T[0],sig_moins.T[0],alpha=0.3,color='g')
ax1 = ax.twinx()
ei, = ax1.plot(X_plot,Y_EI_plot,color='red')
lines = [true_fun,data,gp,un_gp,ei]
ax.set_title('$x \sin{x}$ function')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax1.set_ylabel('ei')
fig.legend(lines,['True function','Data','GPR prediction','99 % confidence','Expected Improvement'],loc=[0.13,0.64])
plt.show()
"""
Explanation: Bayesian optimization is defined by Jonas Mockus in (Mockus, 1975) as an optimization technique based upon the minimization of the expected deviation from the extremum of the studied function.
The objective function is treated as a black-box function. A Bayesian strategy sees the objective as a random function and places a prior over it. The prior captures our beliefs about the behavior of the function. After gathering the function evaluations, which are treated as data, the prior is updated to form the posterior distribution over the objective function. The posterior distribution, in turn, is used to construct an acquisition function (often also referred to as infill sampling criterion) that determines what the next query point should be.
One of the earliest bodies of work on Bayesian optimisation that we are aware of is (Kushner, 1962 ; Kushner, 1964). Kushner used Wiener processes for one-dimensional problems. Kushner’s decision model was based on maximizing the probability of improvement, and included a parameter that controlled the trade-off between ‘more global’ and ‘more local’ optimization, in the same spirit as the Exploration/Exploitation trade-off.
Meanwhile, in the former Soviet Union, Mockus and colleagues developed a multidimensional Bayesian optimization method using linear combinations of Wiener fields, some of which was published in English in (Mockus, 1975). This paper also describes an acquisition function that is based on myopic expected improvement of the posterior, which has been widely adopted in Bayesian optimization as the Expected Improvement function.
In 1998, Jones used Gaussian processes together with the expected improvement function to successfully perform derivative-free optimization and experimental design through an algorithm called Efficient Global Optimization, or EGO (Jones, 1998).
Efficient Global Optimization
In what follows, we describe the Efficient Global Optimization (EGO) algorithm, as published in (Jones, 1998).
Let $F$ be an expensive black-box function to be minimized. We sample $F$ at the different locations $X = {x_1, x_2,\ldots,x_n}$ yielding the responses $Y = {y_1, y_2,\ldots,y_n}$. We build a Kriging model (also called Gaussian process) with a mean function $\mu$ and a variance function $\sigma^{2}$.
The next step is to compute the criterion EI. To do this, let us denote:
$$f_{min} = \min {y_1, y_2,\ldots,y_n}.$$
The Expected Improvement function (EI) can be expressed:
$$E[I(x)] = E[\max(f_{min}-Y, 0)],$$
where $Y$ is the random variable following the distribution $\mathcal{N}(\mu(x), \sigma^{2}(x))$.
By expressing the right-hand side of EI expression as an integral, and applying some tedious integration by parts, one can express the expected improvement in closed form:
$$
E[I(x)] = (f_{min} - \mu(x))\Phi\left(\frac{f_{min} - \mu(x)}{\sigma(x)}\right) + \sigma(x) \phi\left(\frac{f_{min} - \mu(x)}{\sigma(x)}\right)
$$
where $\Phi(\cdot)$ and $\phi(\cdot)$ are respectively the cumulative and probability density functions of $\mathcal{N}(0,1)$.
Next, we determine our next sampling point as:
\begin{align}
x_{n+1} = \arg \max_{x} \left(E[I(x)]\right)
\end{align}
We then evaluate the response $y_{n+1}$ of our black-box function $F$ at $x_{n+1}$, rebuild the model taking into account the new information gained, and search again for the point of maximum expected improvement.
We summarize here the EGO algorithm:
EGO(F, $n_{iter}$) # find the best known minimum of $\operatorname{F}$ in $n_{iter}$ iterations
For ($i=0:n_{iter}$)
$mod = {model}(X, Y)$ # surrogate model based on sample vectors $X$ and $Y$
$f_{min} = \min Y$
$x_{i+1} = \arg \max {EI}(mod, f_{min})$ # choose $x$ that maximizes EI
$y_{i+1} = {F}(x_{i+1})$ # probe the function at the most promising point $x_{i+1}$
$X = [X,x_{i+1}]$
$Y = [Y,y_{i+1}]$
$i = i+1$
$f_{min} = \min Y$
Return: $f_{min}$ # the best known solution after $n_{iter}$ iterations
Now we want to optimize this function using Bayesian optimization, comparing:
- Surrogate-based optimization (SBO)
- the Expected Improvement criterion (EI)
As a first step we compute the EI criterion.
End of explanation
"""
# Surrogate-based optimization: minimize the surrogate model by using the mean mu
def SBO(GP,point):
res = GP.predict_values(point)
return res
# Lower confidence bound optimization: minimize mu - 3*sigma
def LCB(GP,point):
pred = GP.predict_values(point)
var = GP.predict_variances(point)
res = pred-3.*np.sqrt(var)
return res
IC = 'EI'
import matplotlib.image as mpimg
import matplotlib.animation as animation
from IPython.display import HTML
plt.ioff()
x_data = np.atleast_2d([0,7,25]).T
y_data = fun(x_data)
n_iter = 15
gpr = KRG(theta0=[1e-2]*ndim,print_global = False)
for k in range(n_iter):
x_start = np.atleast_2d(np.random.rand(20)*25).T
f_min_k = np.min(y_data)
gpr.set_training_values(x_data,y_data)
gpr.train()
if IC == 'EI':
obj_k = lambda x: -EI(gpr,np.atleast_2d(x),f_min_k)[:,0]
elif IC =='SBO':
obj_k = lambda x: SBO(gpr,np.atleast_2d(x))
elif IC == 'LCB':
obj_k = lambda x: LCB(gpr,np.atleast_2d(x))
opt_all = np.array([minimize(lambda x: float(obj_k(x)), x_st, method='SLSQP', bounds=[(0,25)]) for x_st in x_start])
opt_success = opt_all[[opt_i['success'] for opt_i in opt_all]]
obj_success = np.array([opt_i['fun'] for opt_i in opt_success])
ind_min = np.argmin(obj_success)
opt = opt_success[ind_min]
x_et_k = opt['x']
y_et_k = fun(x_et_k)
y_data = np.atleast_2d(np.append(y_data,y_et_k)).T
x_data = np.atleast_2d(np.append(x_data,x_et_k)).T
Y_GP_plot = gpr.predict_values(X_plot)
Y_GP_plot_var = gpr.predict_variances(X_plot)
Y_EI_plot = -EI(gpr,X_plot,f_min_k)
fig = plt.figure(figsize=[10,10])
ax = fig.add_subplot(111)
if IC == 'LCB' or IC == 'SBO':
ei, = ax.plot(X_plot,Y_EI_plot,color='red')
else:
ax1 = ax.twinx()
ei, = ax1.plot(X_plot,Y_EI_plot,color='red')
true_fun, = ax.plot(X_plot,Y_plot)
data, = ax.plot(x_data[0:k+3],y_data[0:k+3],linestyle='',marker='o',color='orange')
opt, = ax.plot(x_data[k+3],y_data[k+3],linestyle='',marker='*',color='r')
gp, = ax.plot(X_plot,Y_GP_plot,linestyle='--',color='g')
sig_plus = Y_GP_plot+3*np.sqrt(Y_GP_plot_var)
sig_moins = Y_GP_plot-3*np.sqrt(Y_GP_plot_var)
un_gp = ax.fill_between(X_plot.T[0],sig_plus.T[0],sig_moins.T[0],alpha=0.3,color='g')
lines = [true_fun,data,gp,un_gp,opt,ei]
ax.set_title('$x \sin{x}$ function')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.legend(lines,['True function','Data','GPR prediction','99 % confidence','Next point to Evaluate','Infill Criteria'])
plt.savefig('Optimisation %d' %k)
plt.close(fig)
ind_best = np.argmin(y_data)
x_opt = x_data[ind_best]
y_opt = y_data[ind_best]
print('Results : X = %s, Y = %s' %(x_opt,y_opt))
fig = plt.figure(figsize=[10,10])
ax = plt.gca()
ax.axes.get_xaxis().set_visible(False)
ax.axes.get_yaxis().set_visible(False)
ims = []
for k in range(n_iter):
image_pt = mpimg.imread('Optimisation %d.png' %k)
im = plt.imshow(image_pt)
ims.append([im])
ani = animation.ArtistAnimation(fig, ims,interval=500)
HTML(ani.to_jshtml())
"""
Explanation: Now we run the EGO method and compare it to other infill criteria:
- SBO (surrogate-based optimization): directly using the prediction of the surrogate model ($\mu$)
- LCB (lower confidence bound): using the confidence bound $\mu - 3\sigma$
- EI (expected improvement, i.e. EGO)
End of explanation
"""
from smt.applications.ego import EGO
from smt.sampling_methods import LHS
"""
Explanation: ## Use the EGO from SMT
End of explanation
"""
#define the rosenbrock function
def rosenbrock(x):
"""
Evaluate the Rosenbrock objective for the test case.
"""
n,dim = x.shape
#parameters:
Opt =[]
Opt_point_scalar = 1
#construction of O vector
for i in range(0, dim):
Opt.append(Opt_point_scalar)
#Construction of Z vector
Z= np.zeros((n,dim))
for i in range(0,dim):
Z[:,i] = (x[:,i]-Opt[i]+1)
#Sum
sum1 = np.zeros((n,1))
for i in range(0,dim-1):
sum1[:,0] += 100*(((Z[:,i]**2)-Z[:,i+1])**2)+((Z[:,i]-1)**2)
return sum1
xlimits=np.array([[-2,2], [-2,2]])
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
#To plot the Rosenbrock function
num_plot = 50 #to plot rosenbrock
x = np.linspace(xlimits[0][0],xlimits[0][1],num_plot)
res = []
for x0 in x:
for x1 in x:
res.append(rosenbrock(np.array([[x0,x1]])))
res = np.array(res)
res = res.reshape((50,50)).T
X,Y = np.meshgrid(x,x)
fig = plt.figure(figsize=[10,10])
ax = fig.add_subplot(111, projection='3d')  # fig.gca(projection=...) was removed in newer matplotlib
surf = ax.plot_surface(X, Y, res, cmap=cm.coolwarm,
linewidth=0, antialiased=False,alpha=0.5)
plt.title(' Rosenbrock function')
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()
criterion='EI' #'EI' or 'SBO' or 'LCB'
#number of points in the initial DOE
ndoe = 10 #(at least ndim+1)
#number of iterations with EGO
n_iter = 50
#Build the initial DOE, add the random_state option to have the reproducibility of the LHS points
sampling = LHS(xlimits=xlimits, random_state=1)
xdoe = sampling(ndoe)
#EGO call
ego = EGO(n_iter=n_iter, criterion=criterion, xdoe=xdoe, xlimits=xlimits)
x_opt, y_opt, ind_best, x_data, y_data = ego.optimize(fun=rosenbrock)
print('Xopt for Rosenbrock ', x_opt,y_opt, ' obtained using EGO criterion = ', criterion )
print('Check if the optimal point is Xopt= (1,1) with the Y value=0')
print('If not, you can increase the number of iterations with n_iter, but the CPU time will also increase.')
print('---------------------------')
#To plot the Rosenbrock function
#3D plot
x = np.linspace(xlimits[0][0],xlimits[0][1],num_plot)
res = []
for x0 in x:
for x1 in x:
res.append(rosenbrock(np.array([[x0,x1]])))
res = np.array(res)
res = res.reshape((50,50)).T
X,Y = np.meshgrid(x,x)
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection='3d')  # fig.gca(projection=...) was removed in newer matplotlib
surf = ax.plot_surface(X, Y, res, cmap=cm.coolwarm,
linewidth=0, antialiased=False,alpha=0.5)
#to add the points provided by EGO
ax.scatter(x_data[:ndoe,0],x_data[:ndoe,1],y_data[:ndoe],zdir='z',marker = '.',c='k',s=100, label='Initial DOE')
ax.scatter(x_data[ndoe:,0],x_data[ndoe:,1],y_data[ndoe:],zdir='z',marker = 'x',c='r', s=100, label= 'Added point')
ax.scatter(x_opt[0],x_opt[1],y_opt,zdir='z',marker = '*',c='g', s=100, label= 'EGO optimal point')
plt.title(' Rosenbrock function during EGO algorithm')
plt.xlabel('x1')
plt.ylabel('x2')
plt.legend()
plt.show()
#2D plot
#to add the points provided by EGO
plt.plot(x_data[:ndoe,0],x_data[:ndoe,1],'.', label='Initial DOE')
plt.plot(x_data[ndoe:,0],x_data[ndoe:,1],'x', c='r', label='Added point')
plt.plot(x_opt[:1],x_opt[1:],'*',c='g', label= 'EGO optimal point')
plt.plot([1], [1],'*',c='m', label= 'Optimal point')
plt.title(' Rosenbrock function during EGO algorithm')
plt.xlabel('x1')
plt.ylabel('x2')
plt.legend()
plt.show()
"""
Explanation: Choose your criterion to perform the optimization: EI, SBO or LCB
Choose the size of the initial DOE
Choose the number of EGO iterations
Try it with a 2D function: the 2D Rosenbrock function.
The Rosenbrock function in dimension N:
$$
f(\mathbf{x}) = \sum_{i=1}^{N-1} 100 (x_{i+1} - x_i^2 )^2 + (1-x_i)^2 \quad \mbox{where} \quad \mathbf{x} = [x_1, \ldots, x_N] \in \mathbb{R}^N.
$$
$$x_i \in [-2,2]$$
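The sum above maps directly onto a vectorized NumPy expression; the helper below is an illustrative alternative to the notebook's `rosenbrock` function (which, with `Opt_point_scalar = 1`, evaluates the same standard form):

```python
import numpy as np

def rosenbrock_vec(x):
    """Vectorized N-dimensional Rosenbrock; x has shape (n_points, dim)."""
    x = np.atleast_2d(np.asarray(x, dtype=float))
    return np.sum(100.0 * (x[:, 1:] - x[:, :-1] ** 2) ** 2
                  + (1.0 - x[:, :-1]) ** 2, axis=1, keepdims=True)

print(rosenbrock_vec([[1.0, 1.0]]))  # global minimum: value 0
print(rosenbrock_vec([[0.0, 0.0]]))  # value 1
```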
End of explanation
"""
criterion='SBO' #'EI' or 'SBO' or 'LCB'
#number of points in the initial DOE
ndoe = 10 #(at least ndim+1)
#number of iterations with EGO
n_iter = 50
#Build the initial DOE
sampling = LHS(xlimits=xlimits, random_state=1)
xdoe = sampling(ndoe)
#EGO call
ego = EGO(n_iter=n_iter, criterion=criterion, xdoe=xdoe, xlimits=xlimits)
x_opt, y_opt, ind_best, x_data, y_data = ego.optimize(fun=rosenbrock)
print('Xopt for Rosenbrock ', x_opt, y_opt, ' obtained using EGO criterion = ', criterion)
print('Check if the optimal point is Xopt=(1,1) with the Y value=0')
print('---------------------------')
"""
Explanation: We can now compare the results by using only the mean information provided by the surrogate model approximation.
End of explanation
"""
|
larroy/mxnet | example/autoencoder/variational_autoencoder/VAE_example.ipynb | apache-2.0 | mnist = mx.test_utils.get_mnist()
image = np.reshape(mnist['train_data'],(60000,28*28))
label = image
image_test = np.reshape(mnist['test_data'],(10000,28*28))
label_test = image_test
[N,features] = np.shape(image) #number of examples and features
print(N,features)
nsamples = 5
idx = np.random.choice(len(mnist['train_data']), nsamples)
_, axarr = plt.subplots(1, nsamples, sharex='col', sharey='row',figsize=(12,3))
for i,j in enumerate(idx):
axarr[i].imshow(np.reshape(image[j,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
plt.show()
"""
Explanation: Building a Variational Autoencoder in MXNet
Xiaoyu Lu, July 5th, 2017
This tutorial guides you through the process of building a variational autoencoder in MXNet. In this notebook we'll focus on an example using the MNIST handwritten digit recognition dataset. Refer to Auto-Encoding Variational Bayes for more details on the model description.
Prerequisites
To complete this tutorial, we need following python packages:
numpy, matplotlib
1. Loading the Data
We first load the MNIST dataset, which contains 60000 training and 10000 test examples. The following code imports required modules and loads the data. These images are stored in a 4-D matrix with shape (batch_size, num_channels, width, height). For the MNIST dataset, there is only one color channel, and both width and height are 28, so we reshape each image as a 28x28 array. See below for a visualization:
End of explanation
"""
model_prefix = None
batch_size = 100
latent_dim = 5
nd_iter = mx.io.NDArrayIter(data={'data':image},label={'loss_label':label},
batch_size = batch_size)
nd_iter_test = mx.io.NDArrayIter(data={'data':image_test},label={'loss_label':label_test},
batch_size = batch_size)
"""
Explanation: We can optionally save the parameters under the path given by the variable model_prefix. We first create data iterators for MXNet, with each batch containing 100 images.
End of explanation
"""
## define data and loss labels as symbols
data = mx.sym.var('data')
loss_label = mx.sym.var('loss_label')
## define fully connected and activation layers for the encoder, where we used tanh activation function.
encoder_h = mx.sym.FullyConnected(data=data, name="encoder_h",num_hidden=400)
act_h = mx.sym.Activation(data=encoder_h, act_type="tanh",name="activation_h")
## define mu and log variance which are the fully connected layers of the previous activation layer
mu = mx.sym.FullyConnected(data=act_h, name="mu",num_hidden = latent_dim)
logvar = mx.sym.FullyConnected(data=act_h, name="logvar",num_hidden = latent_dim)
## sample the latent variables z according to Normal(mu,var)
z = mu + mx.symbol.broadcast_mul(mx.symbol.exp(0.5 * logvar),
mx.symbol.random_normal(loc=0, scale=1, shape=(batch_size, latent_dim)),
name="z")
"""
Explanation: 2. Building the Network Architecture
2.1 Gaussian MLP as encoder
Next we constuct the neural network, as in the paper, we use Multilayer Perceptron (MLP) for both the encoder and decoder. For encoder, a Gaussian MLP is used as follows:
\begin{align}
\log q_{\phi}(z|x) &= \log \mathcal{N}(z; \mu, \sigma^2 I) \\
\textit{ where } \mu &= W_2h+b_2, \quad \log \sigma^2 = W_3h+b_3 \\
h &= \tanh(W_1x+b_1)
\end{align}
where ${W_1,W_2,W_3,b_1,b_2,b_3}$ are the weights and biases of the MLP.
Note below that encoder_mu(mu) and encoder_logvar(logvar) are symbols. So, we can use get_internals() to get the values of them, after which we can sample the latent variable $z$.
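The sampling expression above is the reparameterization trick: instead of drawing $z$ directly from $\mathcal{N}(\mu, \sigma^2)$, we draw $\epsilon \sim \mathcal{N}(0, I)$ and shift/scale it, which keeps the path from the encoder parameters to $z$ differentiable. A plain NumPy sketch of the same computation (illustrative only; the notebook builds it symbolically):

```python
import numpy as np

def sample_z(mu, logvar, rng):
    """Reparameterized sample: z = mu + exp(0.5 * logvar) * eps, eps ~ N(0, I)."""
    eps = rng.randn(*np.shape(mu))
    return mu + np.exp(0.5 * logvar) * eps

rng = np.random.RandomState(0)
mu = np.full((100000, 1), 2.0)
logvar = np.full((100000, 1), np.log(0.25))  # sigma^2 = 0.25, i.e. sigma = 0.5
z = sample_z(mu, logvar, rng)
print(z.mean(), z.std())  # ~2.0 and ~0.5, as expected for N(2, 0.25)
```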
End of explanation
"""
# define fully connected and tanh activation layers for the decoder
decoder_z = mx.sym.FullyConnected(data=z, name="decoder_z",num_hidden=400)
act_z = mx.sym.Activation(data=decoder_z, act_type="tanh",name="activation_z")
# define the output layer with sigmoid activation function, where the dimension is equal to the input dimension
decoder_x = mx.sym.FullyConnected(data=act_z, name="decoder_x",num_hidden=features)
y = mx.sym.Activation(data=decoder_x, act_type="sigmoid",name='activation_x')
"""
Explanation: 2.2 Bernoulli MLP as decoder
In this case let $p_\theta(x|z)$ be a multivariate Bernoulli whose probabilities are computed from $z$ with a feed forward neural network with a single hidden layer:
\begin{align}
\log p(x|z) &= \sum_{i=1}^D x_i\log y_i + (1-x_i)\log (1-y_i) \\
\textit{ where } y &= f_\sigma(W_5\tanh (W_4z+b_4)+b_5)
\end{align}
where $f_\sigma(\cdot)$ is the elementwise sigmoid activation function, ${W_4,W_5,b_4,b_5}$ are the weights and biases of the decoder MLP. A Bernoulli likelihood is suitable for this type of data, but you can easily extend it to other likelihood types by passing them via the likelihood argument of the VAE class; see section 4 for details.
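This log-likelihood is just a pixel-wise binary cross-entropy (up to sign). A small NumPy sketch, with a clamp added for numerical safety (the clamp is our addition, not part of the notebook's symbolic graph):

```python
import numpy as np

def bernoulli_log_lik(x, y, eps=1e-7):
    """sum_i x_i * log(y_i) + (1 - x_i) * log(1 - y_i), summed over pixels."""
    y = np.clip(y, eps, 1.0 - eps)  # avoid log(0)
    return np.sum(x * np.log(y) + (1.0 - x) * np.log(1.0 - y), axis=-1)

x = np.array([1.0, 0.0, 1.0])   # binary "pixels"
y = np.array([0.9, 0.1, 0.8])   # decoder probabilities
print(bernoulli_log_lik(x, y))  # log(0.9) + log(0.9) + log(0.8) ≈ -0.434
```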
End of explanation
"""
# define the objective loss function that needs to be minimized
KL = 0.5*mx.symbol.sum(1+logvar-pow( mu,2)-mx.symbol.exp(logvar),axis=1)
loss = -mx.symbol.sum(mx.symbol.broadcast_mul(loss_label,mx.symbol.log(y))
+ mx.symbol.broadcast_mul(1-loss_label,mx.symbol.log(1-y)),axis=1)-KL
output = mx.symbol.MakeLoss(sum(loss),name='loss')
"""
Explanation: 2.3 Joint Loss Function for the Encoder and the Decoder
The variational lower bound, also called the evidence lower bound (ELBO), can be estimated as:
\begin{align}
\mathcal{L}(\theta,\phi;x^{(i)}) \approx \frac{1}{2}\sum_{j=1}^{J}\left(1+\log ((\sigma_j^{(i)})^2)-(\mu_j^{(i)})^2-(\sigma_j^{(i)})^2\right) + \log p_\theta(x^{(i)}|z^{(i)})
\end{align}
where the first term is minus the KL divergence of the approximate posterior from the prior, and the second term is the expected reconstruction log-likelihood. We would like to maximize this lower bound, so we can define the loss to be $-\mathcal{L}$ (minus the ELBO) for MXNet to minimize.
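The KL term has the closed form used in the code above because the KL divergence between $\mathcal{N}(\mu, \sigma^2)$ and $\mathcal{N}(0, 1)$ is analytic. A quick Monte-Carlo cross-check (an illustrative sketch, not part of the notebook's graph):

```python
import numpy as np

def neg_kl_closed_form(mu, logvar):
    """The ELBO's KL term: 0.5 * sum(1 + logvar - mu^2 - exp(logvar)) = -KL(q || p)."""
    return 0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar))

def neg_kl_monte_carlo(mu, logvar, n=200000, seed=0):
    """Estimate -KL(q || p) by sampling z ~ q and averaging log p(z) - log q(z)."""
    rng = np.random.RandomState(seed)
    sigma = np.exp(0.5 * logvar)
    z = mu + sigma * rng.randn(n)
    log_q = -0.5 * (np.log(2.0 * np.pi) + logvar + (z - mu) ** 2 / sigma ** 2)
    log_p = -0.5 * (np.log(2.0 * np.pi) + z ** 2)
    return np.mean(log_p - log_q)

mu, logvar = 0.7, np.log(0.5)
print(neg_kl_closed_form(mu, logvar), neg_kl_monte_carlo(mu, logvar))  # both ≈ -0.342
```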
End of explanation
"""
# set up the log
nd_iter.reset()
logging.getLogger().setLevel(logging.DEBUG)
# define a callback to track the training loss
def log_to_list(period, lst):
def _callback(param):
"""The checkpoint function."""
if param.nbatch % period == 0:
name, value = param.eval_metric.get()
lst.append(value)
return _callback
# define the model
model = mx.mod.Module(
symbol = output ,
data_names=['data'],
label_names = ['loss_label'])
# training the model, save training loss as a list.
training_loss=list()
# initialize the parameters for training using a Normal distribution.
init = mx.init.Normal(0.01)
model.fit(nd_iter, # train data
initializer=init,
# if eval_data is supplied, test loss will also be reported
# eval_data = nd_iter_test,
optimizer='sgd', # use SGD to train
optimizer_params={'learning_rate':1e-3,'wd':1e-2},
# save parameters for each epoch if model_prefix is supplied
epoch_end_callback = None if model_prefix is None else mx.callback.do_checkpoint(model_prefix, 1),
batch_end_callback = log_to_list(N/batch_size,training_loss),
num_epoch=100,
eval_metric = 'Loss')
ELBO = [-training_loss[i] for i in range(len(training_loss))]
plt.plot(ELBO)
plt.ylabel('ELBO');plt.xlabel('epoch');plt.title("training curve for mini batches")
plt.show()
"""
Explanation: 3. Training the model
Now we can define the model and train it. First we initialize the weights and biases to be Gaussian(0, 0.01), and then use stochastic gradient descent for optimization. To warm-start the training, one may also initialize with pre-trained parameters arg_params using init=mx.initializer.Load(arg_params).
To save intermediate results, we can optionally use epoch_end_callback = mx.callback.do_checkpoint(model_prefix, 1), which saves the parameters to the path given by model_prefix with a period of 1 epoch. To assess the performance, we output $-\mathcal{L}$ (minus the ELBO) after each epoch via eval_metric = 'Loss', which is defined above. We will also plot the training loss for mini-batches by accessing the log, saving it to a list, and then passing it to the argument batch_end_callback.
End of explanation
"""
arg_params = model.get_params()[0]
nd_iter_test.reset()
test_batch = nd_iter_test.next()
# if saved the parameters, can load them using `load_checkpoint` method at e.g. 100th epoch
# sym, arg_params, aux_params = mx.model.load_checkpoint(model_prefix, 100)
# assert sym.tojson() == output.tojson()
e = y.bind(mx.cpu(), {'data': test_batch.data[0],
'encoder_h_weight': arg_params['encoder_h_weight'],
'encoder_h_bias': arg_params['encoder_h_bias'],
'mu_weight': arg_params['mu_weight'],
'mu_bias': arg_params['mu_bias'],
'logvar_weight':arg_params['logvar_weight'],
'logvar_bias':arg_params['logvar_bias'],
'decoder_z_weight':arg_params['decoder_z_weight'],
'decoder_z_bias':arg_params['decoder_z_bias'],
'decoder_x_weight':arg_params['decoder_x_weight'],
'decoder_x_bias':arg_params['decoder_x_bias'],
'loss_label':label})
x_fit = e.forward()
x_construction = x_fit[0].asnumpy()
# learning images on the test set
f, ((ax1, ax2, ax3, ax4)) = plt.subplots(1,4, sharex='col', sharey='row',figsize=(12,3))
ax1.imshow(np.reshape(image_test[0,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax1.set_title('True image')
ax2.imshow(np.reshape(x_construction[0,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax2.set_title('Learned image')
ax3.imshow(np.reshape(image_test[99,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax3.set_title('True image')
ax4.imshow(np.reshape(x_construction[99,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax4.set_title('Learned image')
plt.show()
# calculate the ELBO which is minus the loss for test set
metric = mx.metric.Loss()
model.score(nd_iter_test, metric)
"""
Explanation: As expected, the ELBO is monotonically increasing over epochs, and we have reproduced the results given in the paper Auto-Encoding Variational Bayes. Now we can extract/load the parameters and feed the network forward to calculate $y$, the reconstructed image; we can also calculate the ELBO for the test set.
End of explanation
"""
from VAE import VAE
"""
Explanation: 4. All together: MXNet-based class VAE
End of explanation
"""
# can initialize weights and biases with the learned parameters as follows:
# init = mx.initializer.Load(params)
# call the VAE, output model contains the learned model and training loss
out = VAE(n_latent=2, x_train=image, x_valid=None, num_epoch=200)
# encode test images to obtain mu and logvar which are used for sampling
[mu,logvar] = VAE.encoder(out,image_test)
# sample in the latent space
z = VAE.sampler(mu,logvar)
# decode from the latent space to obtain reconstructed images
x_construction = VAE.decoder(out,z)
f, ((ax1, ax2, ax3, ax4)) = plt.subplots(1,4, sharex='col', sharey='row',figsize=(12,3))
ax1.imshow(np.reshape(image_test[0,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax1.set_title('True image')
ax2.imshow(np.reshape(x_construction[0,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax2.set_title('Learned image')
ax3.imshow(np.reshape(image_test[146,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax3.set_title('True image')
ax4.imshow(np.reshape(x_construction[146,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax4.set_title('Learned image')
plt.show()
z1 = z[:,0]
z2 = z[:,1]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(z1,z2,'ko')
plt.title("latent space")
#np.where((z1>3) & (z2<2) & (z2>0))
#select the points from the latent space
a_vec = [2,5,7,789,25,9993]
for i in range(len(a_vec)):
ax.plot(z1[a_vec[i]],z2[a_vec[i]],'ro')
ax.annotate('z%d' %i, xy=(z1[a_vec[i]],z2[a_vec[i]]),
xytext=(z1[a_vec[i]],z2[a_vec[i]]),color = 'r',fontsize=15)
f, ((ax0, ax1, ax2, ax3, ax4,ax5)) = plt.subplots(1,6, sharex='col', sharey='row',figsize=(12,2.5))
for i in range(len(a_vec)):
eval('ax%d' %(i)).imshow(np.reshape(x_construction[a_vec[i],:],(28,28)), interpolation='nearest', cmap=cm.Greys)
eval('ax%d' %(i)).set_title('z%d'%i)
plt.show()
"""
Explanation: One can directly call the class VAE to do the training:
VAE(n_latent=5,num_hidden_ecoder=400,num_hidden_decoder=400,x_train=None,x_valid=None,
batch_size=100,learning_rate=0.001,weight_decay=0.01,num_epoch=100,optimizer='sgd',model_prefix=None,
initializer = mx.init.Normal(0.01),likelihood=Bernoulli)
The outputs are the learned model and training loss.
End of explanation
"""
|
catalyst-cooperative/pudl | notebooks/work-in-progress/ferc714-output.ipynb | mit | sns.set()
%matplotlib inline
mpl.rcParams['figure.figsize'] = (10,4)
mpl.rcParams['figure.dpi'] = 150
pd.options.display.max_columns = 100
pd.options.display.max_rows = 100
"""
Explanation: Configure Display Parameters
End of explanation
"""
logger=logging.getLogger()
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(stream=sys.stdout)
formatter = logging.Formatter('%(message)s')
handler.setFormatter(formatter)
logger.handlers = [handler]
"""
Explanation: Use Python Logging facilities
Using a logger from the beginning will make the transition into the PUDL package easier.
Creating a logging handler here will also allow you to see the logging output coming from PUDL and other underlying packages.
End of explanation
"""
pudl_settings = pudl.workspace.setup.get_defaults()
display(pudl_settings)
ferc1_engine = sa.create_engine(pudl_settings['ferc1_db'])
display(ferc1_engine)
pudl_engine = sa.create_engine(pudl_settings['pudl_db'])
display(pudl_engine)
"""
Explanation: Define Functions
Define Notebook Parameters
End of explanation
"""
pudl_out = pudl.output.pudltabl.PudlTabl(pudl_engine=pudl_engine)
%%time
ferc714_out = pudl.output.ferc714.Respondents(pudl_out)
annualized = ferc714_out.annualize()
categorized = ferc714_out.categorize()
summarized = ferc714_out.summarize_demand()
fipsified = ferc714_out.fipsify()
counties_gdf = ferc714_out.georef_counties()
categorized.info()
summarized.info()
fipsified.info()
counties_gdf.info()
# This takes 45 minutes so...
#respondents_gdf = ferc714_out.georef_respondents()
#display(respondents_gdf.info())
#respondents_gdf.sample(10)
"""
Explanation: Load Data
End of explanation
"""
|
helgako/cms-dqm | notebooks/soft_pretraining.ipynb | mit | %env THEANO_FLAGS="device=gpu0", "gpuarray.preallocate=0.9", "floatX=float32"
import theano
import theano.tensor as T
from lasagne import *
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import numpy as np
import pandas as pd
import cPickle as pickle
import os
import re
DATA_PATH = 'merged.pickle'
LABELS_PATH = './quality_2010/labels_v2.pickled'
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc, roc_auc_score
from collections import defaultdict
from IPython import display
import time
"""
Explanation: CMS Anomaly Detection
In this experiment we build a network that consists of subnetworks linked by a Fuzzy AND.
Each subnetwork is built on features that correspond to one of the following channels:
- muons
- photons
- particle flows (PF)
- particles from the calorimeter (calo)
The ultimate goal is to estimate the probability of anomalies occurring in each individual channel by learning to predict global probabilities. This is done by training the network on labels of global anomalies (i.e. whether an anomaly is present somewhere) and then interpreting the output of each subnetwork as an anomaly score for its channel.
The justification of this approach is the following.
Consider the set of channels $\mathcal{C}$ listed above and a set of anomalies $\mathcal{A}$.
Each anomaly $A \in \mathcal{A}$ corresponds to a subset of channels $C \subseteq \mathcal{C}$ where this anomaly occurs. The main assumptions are:
1. each possible anomaly can be detected by at least one subnetwork;
2. any classifier built on features from channels $\bar{C} = \mathcal{C} \setminus C$ cannot detect anomaly $A$; e.g. an anomaly in the photon channel does not change the behaviour of muons.
Thus, from the perspective of detecting anomaly $A$, the score of any classifier built on $\bar{C}$ is no more than an independent random variable.
The network and its loss function are defined in such a way that, for a given anomaly $A$, the loss is lower the more subnetworks report the anomaly. Since subnetworks from $\bar{C}$ in principle have no predictive power for $A$, the average loss reaches its minimum close to the situation where, for each kind of anomaly $A$ affecting channels $C$, the subnetworks from $C$ report the anomaly and the rest report no abnormalities in their channels, i.e. each subnetwork reports the presence of anomalies in its own channel, which is exactly the goal of this experiment.
However, this ideal result occurs only when anomalies have the same weight. Due to the nature of the loss function, if at least one subnetwork reports an anomaly, which is always true by assumption 1, another subnetwork can shift the score towards anomaly only slightly. Thus any considerable bias of a subnetwork towards anomalies implies much higher losses in the 'everything is good' situation than the gain from unrelated anomalies:
$$\mathbb{E}[ \mathcal{L}(\mathrm{subnetwork} + \mathrm{bias}) - \mathcal{L}(\mathrm{subnetwork}) \mid \mathrm{no\;anomalies}] \gg \mathbb{E}[ \mathcal{L}(\mathrm{subnetwork}) - \mathcal{L}(\mathrm{subnetwork} + \mathrm{bias}) \mid \mathrm{unrelated\;anomaly}]$$
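A common choice of fuzzy AND is the product t-norm. Purely as an illustrative assumption (the actual combination layer used by this network may differ), combining per-channel 'everything is good' probabilities could look like this:

```python
import numpy as np

def fuzzy_and(channel_scores):
    """Product t-norm: the global 'good' probability is the product of per-channel ones.

    channel_scores: array of shape (n_samples, n_channels), values in [0, 1].
    Illustrative assumption only -- not necessarily this network's combination rule.
    """
    return np.prod(channel_scores, axis=1)

scores = np.array([[0.9, 0.95, 0.99, 0.9],   # all four channels look good
                   [0.9, 0.05, 0.99, 0.9]])  # one subnetwork flags an anomaly
print(fuzzy_and(scores))  # a single low channel score drags the global score down
```

This illustrates the asymmetry discussed above: once one subnetwork reports an anomaly (a score near 0), the other subnetworks can move the global score only slightly.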
Preparing the data
End of explanation
"""
with open(DATA_PATH, 'r') as f:
data = pickle.load(f)
with open(LABELS_PATH, 'r') as f:
sub_labels = pickle.load(f)
labels = sub_labels['json_0.txt']
### technical columns
not_features = [
'_luminosityBlock',
'_run'
]
### columns that correspond to actual features
features = sorted(set(data.columns) - set(not_features))
for f in features:
xs = data[f].values
if np.std(xs) > 0.0:
data[f] = (xs - np.mean(xs)) / np.std(xs)
lumi = np.maximum(
np.maximum(data['_instantLumi_minibias'].get_values(), data['_instantLumi_muons'].get_values()),
data['_instantLumi_photons'].get_values()
)
nonempty = np.where(lumi > 0.0)[0]
data = data.iloc[nonempty]
lumi = lumi[nonempty]
labels = labels[nonempty]
for k in sub_labels:
sub_labels[k] = sub_labels[k][nonempty]
lumi_bad = np.sum(lumi[labels == 0.0])
lumi_good = np.sum(lumi[labels == 1.0])
### By normalizing weights we implicitly define equal probabilities for each class
weights = lumi / np.where(labels == 1.0, lumi_good, lumi_bad)
weights *= lumi.shape[0]
w_bad = np.sum(weights[labels == 0.0])
w_good = np.sum(weights[labels == 1.0])
"""
Explanation: The input file contains preselected features from the CMS 2010B open data.
The features were generated from the original ROOT files in the following way:
Three streams were selected:
MiniBias,
muons,
photons.
In each stream 4 "channels" were selected:
muons
photons
PF (particle flows)
calo (calorimeter)
For each channel, from each event 5 quantile particles were selected with regard to their momentum: quantile $q_i$ corresponds to the particle with index closest to $\frac{i}{5}N$, i.e. $q_5$ corresponds to the particle with maximal momentum.
Each particle is described by its physical properties: $\eta, \phi, p_T, f_x, f_y, f_z, m$
Physical features were aggregated over lumisections, producing:
1, 25, 50, 75, 99 percentiles
mean and std
As a result, each lumisection is described by percentiles, means and stds of the distributions of physical features of particles of particular quantiles, within a particular channel, within a particular stream.
Some additional features were added, such as the total momentum of all particles of a particular channel within an event.
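A sketch of the quantile-particle selection described above (illustrative only; the actual preprocessing happened upstream of this notebook, and "index closest to $\frac{i}{5}N$" is read here as simple rounding on the momentum-sorted list):

```python
import numpy as np

def quantile_particles(pt, n_quantiles=5):
    """Pick the particles at momentum quantiles q_1..q_5 of one event.

    pt: transverse momenta of the particles in one event. q_i is the particle
    whose sorted index is closest to (i / n_quantiles) * N, so q_5 is the
    particle with maximal momentum. One plausible reading of the rule, for
    illustration only.
    """
    pt_sorted = np.sort(np.asarray(pt, dtype=float))
    n = len(pt_sorted)
    idx = [min(int(round(i * n / float(n_quantiles))), n) - 1
           for i in range(1, n_quantiles + 1)]
    return pt_sorted[idx]

pt = [3.0, 1.0, 7.0, 5.0, 2.0, 9.0, 4.0, 8.0, 6.0, 10.0]
print(quantile_particles(pt))  # the q_1..q_5 momenta: 2, 4, 6, 8, 10
```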
End of explanation
"""
### utility functions
def insert(keys, d, f):
key = keys[0]
if len(keys) == 1:
d[key] = f
else:
if not d.has_key(key):
d[key] = dict()
return insert(keys[1:], d[key], f)
def levels(features, n_levels = 5):
dicts = [features]
levels = list()
for level in range(n_levels):
levels.append(
set([ k for d in dicts for k in d ])
)
dicts = [ d[k] for d in dicts for k in d ]
return levels
def get_feature_groups(feature_list, re_exp):
"""
Returns:
1. hierarchical dictionary: feature groups -> feature full name
2. feature levels
3. unprocessed features
"""
features = dict()
rest = list()
n_levels = set()
for f in feature_list:
matches = re.findall(re_exp, f)
if len(matches) == 1:
insert(matches[0], features, f)
n_levels.add(len(matches[0]))
elif len(matches) == 0:
rest.append(f)
else:
raise Exception('Very suspicious feature: %s -> %s' % (f, matches))
assert len(n_levels) == 1
return features, levels(features, n_levels=list(n_levels)[0]), rest
def insert_fake_path(d, level, path = 'general'):
if level == 0:
return { path : d }
else:
r = dict()
for k in d:
r[k] = insert_fake_path(d[k], level - 1, path)
return r
"""
Explanation: Grouping features
The feature set has its own intrinsic hierarchy, which can easily be seen from the feature names:
<stream>_<particle type>_<physical feature>_<particle quantile>_<feature quantile>
End of explanation
"""
particle_f_re = re.compile(r'([a-zA-Z]+)[_]([a-zA-Z]+)[_]([a-zA-Z]+)[_]+(q[12345])[_](\w+)')
particle_features, particle_levels, rest = get_feature_groups(features, particle_f_re)
for level in particle_levels:
print ' '.join(list(level))
"""
Explanation: Selecting particles' features:
<stream>_<particle type>_<physical feature>_<particle quantile>_<feature quantile>
End of explanation
"""
particle_type_f_re = re.compile(r'([a-zA-Z]+)[_]([a-zA-Z]+)[_]([a-zA-Z]+)[_]+([a-zA-Z0-9]+)')
particle_type_features, particle_type_levels, rest = get_feature_groups(rest, particle_type_f_re)
for level in particle_type_levels:
print ' '.join(list(level))
particle_type_features = insert_fake_path(particle_type_features, level = 2, path='allParticles')
for level in levels(particle_type_features, n_levels=5):
print ' '.join(list(level))
"""
Explanation: Selecting features that belong to a particle type:
<stream>_<particle type>_<physical feature>_<feature quantile>
End of explanation
"""
event_f_re = re.compile(r'([a-zA-Z]+)[_]([a-zA-Z]+)[_]+(\w+)')
event_features, event_levels, rest = get_feature_groups(rest, event_f_re)
for level in event_levels:
print ' '.join(list(level))
f = insert_fake_path(event_features, level = 1, path='allChannels')
f = insert_fake_path(f, level = 2, path='allParticles')
event_features = f
for level in levels(event_features, n_levels=5):
print ' '.join(list(level))
"""
Explanation: The features above are the components of the momentum of particles of a particular type (channel) within an event.
Selecting features specific to events:
<stream>_<physical feature>_<feature quantile>
End of explanation
"""
rest
"""
Explanation: These are the instantaneous luminosity features of each event.
End of explanation
"""
stream_f_re = re.compile(r'([a-zA-Z]+)[_]([a-zA-Z]+)')
stream_features, stream_levels, rest = get_feature_groups(rest, stream_f_re)
for level in stream_levels:
print ' '.join(list(level))
"""
Explanation: And finally, features specific to the lumisection itself:
<stream>_<physical feature>
End of explanation
"""
rest
def flatten(a_dict):
for k in a_dict:
if hasattr(a_dict[k], 'keys'):
for path, value in flatten(a_dict[k]):
yield (k, ) + path, value
else:
yield (k, ), a_dict[k]
def merge(dicts):
result = dict()
for d in dicts:
for path, value in flatten(d):
insert(path, result, value)
return result
def flatten_dict(d):
r = dict()
for paths, v in flatten(d):
k = '_'.join(paths)
r[k] = v
return r
def squezze(d, depth = 5, last=2):
dc = d.copy()
if depth - 1 == last:
for k in d:
dc[k] = flatten_dict(d[k])
return d
else:
for k in d:
dc[k] = squezze(d[k], depth-1, last)
return dc
def group(d, level=2):
gd = defaultdict(lambda: list())
for path, k in flatten(d):
gk = path[:level]
gd[gk].append(k)
return gd
feature_hierarchy = merge([
particle_features, particle_type_features, event_features
])
"""
Explanation: Number of events and fraction of non-zero features per lumisection (all NAs are replaced with zeros).
End of explanation
"""
grouped = group(feature_hierarchy, level=2)
[ (g, len(fs)) for g, fs in grouped.items() ]
"""
Explanation: All features are grouped by stream-channel.
End of explanation
"""
channels_features = dict()
for k in [('muons', 'muons'), ('photons', 'photons'), ('minibias', 'PF'), ('minibias', 'calo')]:
channels_features[k[1]] = grouped[k]
channels_features['muons'].append('_instantLumi_muons')
channels_features['photons'].append('_instantLumi_photons')
[ (g, len(fs)) for g, fs in channels_features.items() ]
subsytem_descriptions = dict(
[
('json_0.txt', 'inclusive global label')
] + [
('json_%d.txt' % i, desc)
for i, desc in zip(range(15, 23), [
'15:DQM: Rpc GOOD,\nDCS: Rpc',
'16:DQM: Csc GOOD,\nDCS: Csc',
'17:DQM: Dt GOOD,\nDCS: Dt',
'18:DQM: Hcal GOOD,\nDCS: Hcal',
'19:DQM: Ecal GOOD,\nDCS: Ecal',
'20:DQM: Es GOOD,\nDCS: Es',
'21:DQM: Strip GOOD,\nDCS: Strip',
'22:DQM: Pix GOOD,\nDCS: Pix'
])
] + [
('json_%d.txt' % i, desc)
for i, desc in zip(range(11, 15) + range(23, 25), [
'11: DQM: Muon GOOD,\nDCS: Strip, Pix, Dt, Rpc, Csc on',
'12: DQM: Jmet GOOD,\nDCS: Ecal, Hcal on',
'13: DQM: Egam GOOD,\nDCS: Strip, Pix, Ecal on',
'14: DQM: Track GOOD,\nDCS: Strip, Pix on',
'23: DQM: Hlt GOOD,\nDCS: Strip, Pix, Ecal on',
'24: DQM: L1t GOOD,\nDCS: none'
])
]
)
"""
Explanation: For this experiment only the following groups are used:
- muons from muon stream
- photons from photon stream
- Particle Flows from minibias stream
- calo particles from minibias stream
End of explanation
"""
### For simplicity each feature group is put into its own shared variable.
shareds = {}
for k in channels_features:
features = channels_features[k]
shareds[k] = theano.shared(
data[features].get_values().astype('float32'),
name = 'X %s' % k
)
labels_shared = theano.shared(labels.astype('float32'), 'labels')
weights_shared = theano.shared(weights.astype('float32'), 'weights')
batch_indx = T.ivector('batch indx')
def batch_stream(X, batch_size=32):
indx = np.random.permutation(X.shape[0])
    n_batches = X.shape[0] // batch_size
for i in xrange(n_batches):
batch_indx = indx[(i * batch_size):(i * batch_size + batch_size)]
yield X[batch_indx]
"""
Explanation: Building the network
End of explanation
"""
def build_network(shared, batch_indx, num_units = (50, 10), n_dropout=2, p_dropout=0.25):
n_features = shared.get_value().shape[1]
X_batch = shared[batch_indx]
input_layer = layers.InputLayer(shape=(None, n_features), input_var=X_batch)
net = input_layer
net = layers.DropoutLayer(net, p=0.1, rescale=False)
for i, n in enumerate(num_units):
net = layers.DenseLayer(net, num_units=n, nonlinearity=nonlinearities.sigmoid)
if i < n_dropout:
net = layers.DropoutLayer(net, p=p_dropout, rescale=True)
net = layers.DenseLayer(net, num_units=1, nonlinearity=nonlinearities.sigmoid)
det_prediction = T.flatten(layers.get_output(net, deterministic=True))
train_prediction = T.flatten(layers.get_output(net, deterministic=False))
return net, det_prediction, train_prediction
networks = {}
det_predictions = {}
train_predictions = {}
for k in shareds:
shared = shareds[k]
net, det_prediction, train_prediction = build_network(shared, batch_indx, num_units=(100, 50, 20), p_dropout=0.25)
det_predictions[k] = det_prediction
train_predictions[k] = train_prediction
networks[k] = net
get_get_predictions = {}
get_stochastic_predictions = {}
for k in det_predictions:
get_get_predictions[k] = theano.function([batch_indx], det_predictions[k])
get_stochastic_predictions[k] = theano.function([batch_indx], train_predictions[k])
labels_batch = labels_shared[batch_indx]
weights_batch = weights_shared[batch_indx]
reg = reduce(lambda a, b: T.maximum(a, b), [
regularization.regularize_network_params(networks[k], penalty=regularization.l2)
for k in networks
])
def fuzzy_and(args):
s = reduce(lambda a, b: a + b, args)
return T.exp(s - 4.0)
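A plain-Python analogue of this fuzzy AND (toy scores, not Theano tensors) shows that the output reaches 1 only when all four channel scores are 1, and decays exponentially otherwise:

```python
import math

def fuzzy_and(scores):
    # Sum of the per-channel sigmoid outputs, shifted by the number of
    # channels (4), then exponentiated: equals 1.0 only when every score is 1.0.
    return math.exp(sum(scores) - 4.0)

all_good = fuzzy_and([1.0, 1.0, 1.0, 1.0])  # -> 1.0
one_bad = fuzzy_and([1.0, 1.0, 1.0, 0.0])   # -> exp(-1), about 0.368
```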
train_global_prediction = fuzzy_and(train_predictions.values())
det_global_prediction = fuzzy_and(det_predictions.values())
c_reg = T.fscalar('c reg')
learning_rate = T.fscalar('learning rate')
coef_loss = theano.shared(np.array(0.7, dtype=theano.config.floatX)) #constant to regulate amount of “pretraining”
decay = np.array(0.8, dtype=theano.config.floatX) #to decrease coef_loss
log_losses = -((1 - labels_batch) * T.log(1 - train_global_prediction) + labels_batch * T.log(train_global_prediction))
pure_loss = T.mean(weights_batch * log_losses)
loss = pure_loss + c_reg * reg
pure_losses = {}
for k in networks:
log_loss = -((1 - labels_batch) * T.log(1 - train_predictions[k]) + labels_batch * T.log(train_predictions[k]))
pure_losses[k] = T.mean(weights_batch * log_loss)
modified_loss = (1 - coef_loss) * loss + coef_loss * sum(pure_losses[k] for k in networks) / 4.0
"""
Explanation: For each feature group we build a dense neural network.
On the one hand, a network should be capable of capturing non-trivial anomalies;
on the other hand, the number of training samples is small. This is why heavy dropout is applied to each layer.
Nevertheless, we should expect low bias from the dropout, since it is believed that not all features should interact
directly within one unit. For example, it is reasonable that momentum features do not mix with angular ones within a single unit. Thus the structure of the weights should be sparse, which is one of the effects of dropout regularization.
End of explanation
"""
params = reduce(lambda a, b: a + b, [
layers.get_all_params(net)
for net in networks.values()
])
upd = updates.adam(modified_loss, params, learning_rate = learning_rate)
train = theano.function([batch_indx, c_reg, learning_rate], [modified_loss, pure_loss], updates=upd)
get_loss = theano.function([batch_indx], pure_loss)
get_prediction = theano.function([batch_indx], det_global_prediction)
get_train_prediction = theano.function([batch_indx], train_global_prediction)
indx_train, indx_test = train_test_split(np.arange(data.shape[0], dtype='int32'), stratify=labels, test_size=0.1, random_state = 1)
n_epoches = 801
batch_size = 63
n_batches = indx_train.shape[0] // batch_size
lr = 2e-3
c_reg = 6.0e-7
pure_losses = np.zeros(shape=(2, n_epoches), dtype='float32')
validation_losses = np.zeros(shape=(len(networks)+1, n_epoches), dtype='float32')
for epoch in xrange(0,n_epoches):
if epoch%100 == 0:
#save the network's weights
netInfo = {}
for net in networks:
netInfo['network '+str(net)] = networks[net]
netInfo['params '+str(net)] = layers.get_all_param_values(networks[net])
Net_FileName = 'pretraining_loss_'+str(epoch)+'.pkl'
pickle.dump(netInfo, open(os.path.join('models/', Net_FileName), 'wb'),protocol=pickle.HIGHEST_PROTOCOL)
#decrease learning rate and amount of 'pretraining' loss
if epoch%100 == 20:
coef_loss.set_value(coef_loss.get_value() * decay)
lr = lr*0.8
batch_loss_m = 0.
batch_loss_p = 0.
for i, idx in enumerate(batch_stream(indx_train, batch_size=batch_size)):
mod, pure = train(idx, c_reg, lr)
batch_loss_m += mod
batch_loss_p += pure
pure_losses[0,epoch] = batch_loss_p/n_batches
pure_losses[1,epoch] = batch_loss_m/n_batches
sum_pred_test = np.zeros((len(indx_test)))
for k, net in enumerate(networks):
prediction_net = get_get_predictions[net](indx_test)
sum_pred_test += prediction_net
validation_losses[k,epoch] = 1 - roc_auc_score(
labels[indx_test],
prediction_net,
sample_weight=weights[indx_test])
f_and = np.exp(sum_pred_test - 4.)
validation_losses[k+1,epoch] = 1 - roc_auc_score(labels[indx_test],f_and,sample_weight=weights[indx_test])
#plots
display.clear_output(wait=True)
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(16, 6))
axes[0].set_title("Training loss")
axes[0].set_xlabel("#epoch")
axes[0].set_ylabel("loss")
for n in range(2):
axes[0].plot(pure_losses[n][:(epoch + 1)])
axes[0].legend(['pure_loss', 'modified_loss'], loc = 'best')
axes[1].set_title("Test 1-auc")
axes[1].set_xlabel("#epoch")
axes[1].set_ylabel("1-auc")
for n in range(5):
axes[1].plot(validation_losses[n][:(epoch + 1)])
axes[1].legend(networks.keys()+['f_and'], loc = 'best')
plt.show()
epoch
# Net_FileName = 'modified_loss_'+str(900)+'.pkl'
# netInfoload = pickle.load(open(os.path.join('models/',Net_FileName),'rb'))
# for net in networks:
# layers.set_all_param_values(networks[net], netInfoload['params '+str(net)])
"""
Explanation: A modified loss function is used to accelerate convergence:
$ L' = (1 - C) \cdot L + C \cdot (L_1 + L_2 + L_3 + L_4) / 4 $,
where $L$ is the old loss (cross-entropy of the “fuzzy AND” output),
the $L_i$ are 'companion' losses, and $C$ is a constant regulating the amount of “pretraining” ($C$ close to, but less than, 1).
Companion losses can be the cross-entropies of the corresponding subnetwork scores against the global labels, so this is similar to pretraining the separate networks on global labels. These losses force the separate subnetworks to participate more in the final prediction at the beginning of the training process.
Every $k$ epochs the constant $C$ is decreased, and the ensemble performance becomes dominant. This allows some anomalies to remain invisible to individual channels; in this way the ensemble loss helps avoid overfitting at the end and yields a simpler separating hyperplane.
End of explanation
"""
plt.figure(figsize=(8, 8))
sum_pred = np.zeros((len(indx_test)))
log_and = np.ones((len(indx_test)))
for k in networks.keys():
common_proba = get_get_predictions[k](indx_test)
sum_pred += common_proba
log_and*= common_proba
plt.plot([0, 1], [0, 1], '--', color='black')
fpr, tpr, _ = roc_curve(labels[indx_test], common_proba, sample_weight=weights[indx_test])
auc_score = auc(fpr, tpr, reorder=True)
plt.plot(fpr, tpr, label='Deterministic output, AUC for %s : %.3lf' % (k, auc_score))
f_and = np.exp(sum_pred - 4.)
fpr, tpr, _ = roc_curve(labels[indx_test], f_and, sample_weight=weights[indx_test])
auc_score = auc(fpr, tpr, reorder=True)
plt.plot(fpr, tpr, label='Deterministic output, AUC fuzzy_and : %.3lf' % auc_score)
plt.legend(loc='lower right')
plt.title('ROC curve for the network', fontsize=24)
plt.xlabel('FPR', fontsize=20)
plt.ylabel('TPR', fontsize=20)
plt.show()
probas = {}
for k in networks.keys():
    features = channels_features[k]
probas[k] = get_get_predictions[k](indx_test)
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(16, 6))
for j, allgood in enumerate(np.array([0,1])):
where_allgood = np.where(sub_labels['json_0.txt'][indx_test] == allgood)[0]
axes[j].hist(f_and[where_allgood], bins=10, range=(0, 1),
histtype='step', lw=2)
axes[j].legend(loc='upper center')
axes[j].set_title(' AllGood:'+ str(allgood), fontsize=24)
axes[j].set_ylabel('luminosity fraction', fontsize=10)
axes[j].set_xlabel(r'global prediction', fontsize=10)
plt.show()
for k in networks.keys():
proba = probas[k]
plt.figure(figsize=(9, 8))
plt.hist([
proba[labels[indx_test] == 0.0],
proba[labels[indx_test] == 1.0]
],bins=20, range=(0, 1), weights=[
weights[indx_test][labels[indx_test] == 0.0] / np.sum(weights[indx_test][labels[indx_test] == 0.0]),
weights[indx_test][labels[indx_test] == 1.0] / np.sum(weights[indx_test][labels[indx_test] == 1.0])
], histtype='step', label=['Anomalous lumisections', 'Good lumisections'], lw=2)
plt.legend(loc='upper center')
plt.title('%s channel' % k, fontsize=24)
plt.ylabel('luminosity fraction', fontsize=20)
plt.xlabel(r'subnetwork output', fontsize=20)
plt.show()
metric = roc_auc_score
met_name = 'roc_auc_score'
channels = networks.keys()
sub_systems = sorted(sub_labels.keys())
aucs = np.ones(shape=(len(channels), len(sub_systems))) / 2.0
for i, channel in enumerate(channels):
for j, sub_system in enumerate(sub_systems):
try:
aucs[i, j] = metric(sub_labels[sub_system][indx_test], probas[channel], sample_weight=weights[indx_test])
except Exception as e:
print e
fig = plt.figure(figsize=(16, 7))
im = plt.imshow(aucs, interpolation='None', aspect=1)
plt.colorbar(im, shrink=0.75)
plt.xticks(np.arange(len(sub_systems)), [subsytem_descriptions[k] for k in sub_systems], rotation=90)
plt.yticks(np.arange(4), [ "%s" % g for g in channels ])
plt.title(str(met_name)+' of subnetwork scores against ground truth labels by subsystem')
plt.tight_layout()
plt.show()
"""
Explanation: Performance plots
End of explanation
"""
fig, axes = plt.subplots(nrows=4, ncols=2, figsize=(16, 26))
for j, strip in enumerate(np.array([0,1])):
for i, k in enumerate(channels):
proba = probas[k]
where_strip = np.where(sub_labels['json_21.txt'][indx_test] == strip)[0]
axes[i,j].hist([
proba[where_strip][labels[indx_test][where_strip] == 0.0],
proba[where_strip][labels[indx_test][where_strip] == 1.0]
],bins=40, range=(0, 1),
# weights=[
# weights[indx_test][where_strip][labels[indx_test][where_strip] == 0.0] / np.sum(weights[indx_test][where_strip][labels[indx_test][where_strip] == 0.0]),
# weights[indx_test][where_strip][labels[indx_test][where_strip] == 1.0] / np.sum(weights[indx_test][where_strip][labels[indx_test][where_strip] == 1.0])],
histtype='step', label=['Anomalous lumisections', 'Good lumisections'], lw=2)
axes[i,j].legend(loc='upper center')
axes[i,j].set_title('Channel:' + str(k)+' Strip:'+ str(strip), fontsize=24)
axes[i,j].set_ylabel('luminosity fraction', fontsize=10)
axes[i,j].set_xlabel(r'subnetwork output', fontsize=10)
plt.show()
plt.figure(figsize=(8, 8))
plt.plot([0, 1], [0, 1], '--', color='black')
fpr, tpr, _ = roc_curve(sub_labels['json_16.txt'][indx_test], probas['muons'], sample_weight=weights[indx_test])
auc_score = auc(fpr, tpr, reorder=True)
plt.plot(fpr, tpr, label='muons, AUC: %.3lf' % auc_score)
fpr, tpr, th = roc_curve(sub_labels['json_16.txt'][indx_test], probas['photons'], sample_weight=weights[indx_test])
auc_score = auc(fpr, tpr, reorder=True)
plt.plot(fpr, tpr, label='photons, AUC: %.3lf' % auc_score)
fpr, tpr, th = roc_curve(sub_labels['json_16.txt'][indx_test], probas['PF'], sample_weight=weights[indx_test], drop_intermediate=True)
auc_score = auc(fpr, tpr, reorder=True)
plt.plot(fpr, tpr, label='PF, AUC: %.3lf' % auc_score)
fpr, tpr, th = roc_curve(sub_labels['json_16.txt'][indx_test], probas['calo'], sample_weight=weights[indx_test], drop_intermediate=True)
auc_score = auc(fpr, tpr, reorder=True)
plt.plot(fpr, tpr, label='calo, AUC: %.3lf' % auc_score)
plt.legend(loc='lower right')
plt.title('ROC curve with respect to the Csc', fontsize=24)
plt.xlabel('FPR', fontsize=20)
plt.ylabel('TPR', fontsize=20)
plt.show()
fig, axes = plt.subplots(nrows=4, ncols=2, figsize=(16, 26))
for j, strip in enumerate(np.array([0,1])):
for i, k in enumerate(channels):
proba = probas[k]
where_strip = np.where(sub_labels['json_16.txt'][indx_test] == strip)[0]
axes[i,j].hist([
proba[where_strip][labels[indx_test][where_strip] == 0.0],
proba[where_strip][labels[indx_test][where_strip] == 1.0]
],bins=40, range=(0, 1))
# weights=[
# weights[indx_test][where_strip][labels[indx_test][where_strip] == 0.0] / np.sum(weights[indx_test][where_strip][labels[indx_test][where_strip] == 0.0]),
# weights[indx_test][where_strip][labels[indx_test][where_strip] == 1.0] / np.sum(weights[indx_test][where_strip][labels[indx_test][where_strip] == 1.0])],
# histtype='step', label=['Anomalous lumisections', 'Good lumisections'], lw=2)
axes[i,j].legend(loc='upper center')
axes[i,j].set_title('Channel:' + str(k)+' Csc:'+ str(strip), fontsize=24)
axes[i,j].set_ylabel('luminosity fraction', fontsize=10)
axes[i,j].set_xlabel(r'subnetwork output', fontsize=10)
plt.show()
"""
Explanation: With the “soft pretraining” learning scheme all ROC AUC scores came out above 0.5, so there are no anti-correlations, as expected.
End of explanation
"""
|
janusnic/21v-python | unit_20/parallel_ml/notebooks/05 - Model Selection and Assessment.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Some nice default configuration for plots
plt.rcParams['figure.figsize'] = 10, 7.5
plt.rcParams['axes.grid'] = True
plt.gray()
"""
Explanation: Model Selection and Assessment
Outline of the session:
Model performance evaluation and detection of overfitting with Cross-Validation
Hyper parameter tuning and model selection with Grid Search
Error analysis with learning curves and the Bias-Variance trade-off
Overfitting via Model Selection and the Development / Evaluation set split
End of explanation
"""
from sklearn.datasets import load_digits
digits = load_digits()
list(digits.keys())
print(digits.DESCR)
X, y = digits.data, digits.target
print("data shape: %r, target shape: %r" % (X.shape, y.shape))
print("classes: %r" % list(np.unique(y)))
n_samples, n_features = X.shape
print("n_samples=%d" % n_samples)
print("n_features=%d" % n_features)
def plot_gallery(data, labels, shape, interpolation='nearest'):
for i in range(data.shape[0]):
plt.subplot(1, data.shape[0], (i + 1))
plt.imshow(data[i].reshape(shape), interpolation=interpolation)
plt.title(labels[i])
plt.xticks(()), plt.yticks(())
subsample = np.random.permutation(X.shape[0])[:5]
images = X[subsample]
labels = ['True class: %d' % l for l in y[subsample]]
plot_gallery(images, labels, shape=(8, 8))
"""
Explanation: The Hand Written Digits Dataset
Let's load a simple dataset of 8x8 gray level images of handwritten digits (bundled in the sklearn source code):
End of explanation
"""
from sklearn.decomposition import RandomizedPCA
pca = RandomizedPCA(n_components=2)
X_pca = pca.fit_transform(X)
X_pca.shape
from itertools import cycle
colors = ['b', 'g', 'r', 'c', 'm', 'y', 'k']
markers = ['+', 'o', '^', 'v', '<', '>', 'D', 'h', 's']
for i, c, m in zip(np.unique(y), cycle(colors), cycle(markers)):
plt.scatter(X_pca[y == i, 0], X_pca[y == i, 1],
c=c, marker=m, label=i, alpha=0.5)
_ = plt.legend(loc='best')
"""
Explanation: Let's visualize the dataset on a 2D plane using a projection on the first 2 axis extracted by Principal Component Analysis:
End of explanation
"""
labels = ['Component #%d' % i for i in range(len(pca.components_))]
plot_gallery(pca.components_, labels, shape=(8, 8))
"""
Explanation: We can observe that even in 2D, the groups of digits are quite well separated, especially the digit "0" which is very different from any other (the closest being "6", as it often shares most of the left-hand-side pixels). We can also observe that, at least in 2D, there is quite a bit of overlap between the "1", "2" and "7" digits.
To better understand the meaning of the "x" and "y" axes of this plot, it is also useful to visualize the values of the first two principal components that are used to compute this projection:
End of explanation
"""
from sklearn.decomposition import PCA
pca_big = PCA().fit(X, y)
plt.title("Explained Variance")
plt.ylabel("Percentage of explained variance")
plt.xlabel("PCA Components")
plt.plot(pca_big.explained_variance_ratio_);
"""
Explanation: As this dataset is small, both in terms of number of samples (1797) and features (64), we can compute the full (untruncated), exact PCA and have a look at the percentage of variance explained by each component of the PCA model:
End of explanation
"""
plt.title("Cumulated Explained Variance")
plt.ylabel("Percentage of explained variance")
plt.xlabel("PCA Components")
plt.plot(np.cumsum(pca_big.explained_variance_ratio_));
"""
Explanation: It might be easier to interpret by plotting the cumulative variance of the preceding components using the numpy.cumsum function:
End of explanation
"""
from sklearn.svm import SVC
SVC().fit(X, y).score(X, y)
"""
Explanation: Overfitting
Overfitting is the problem of learning the training data by heart and being unable to generalize by making correct predictions on data samples unseen while training.
To illustrate this, let's train a Support Vector Machine naively on the digits dataset:
End of explanation
"""
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=0)
print("train data shape: %r, train target shape: %r"
% (X_train.shape, y_train.shape))
print("test data shape: %r, test target shape: %r"
% (X_test.shape, y_test.shape))
"""
Explanation: Did we really learn a perfect model that can recognize the correct digit class 100% of the time? Without new data it's impossible to tell.
Let's start again and split the dataset into two random, non overlapping subsets:
End of explanation
"""
svc = SVC(kernel='rbf').fit(X_train, y_train)
train_score = svc.score(X_train, y_train)
train_score
"""
Explanation: Let's retrain a new model on the first subset, called the training set:
End of explanation
"""
test_score = svc.score(X_test, y_test)
test_score
"""
Explanation: We can now compute the performance of the model on new, held out data from the test set:
End of explanation
"""
svc
"""
Explanation: This score is clearly not as good as expected! The model cannot generalize so well to new, unseen data.
Whenever the test data score is not as good as the train score the model is overfitting
Whenever the train score is not close to 100% accuracy the model is underfitting
Ideally we want to neither overfit nor underfit: test_score ~= train_score ~= 1.0.
The previous example failed to generalized well to test data because we naively used the default parameters of the SVC class:
End of explanation
"""
svc_2 = SVC(kernel='rbf', C=100, gamma=0.001).fit(X_train, y_train)
svc_2
svc_2.score(X_train, y_train)
svc_2.score(X_test, y_test)
"""
Explanation: Let's try again with another parameterization:
End of explanation
"""
from sklearn.cross_validation import ShuffleSplit
cv = ShuffleSplit(n_samples, n_iter=3, test_size=0.1,
random_state=0)
for cv_index, (train, test) in enumerate(cv):
print("# Cross Validation Iteration #%d" % cv_index)
print("train indices: {0}...".format(train[:10]))
print("test indices: {0}...".format(test[:10]))
svc = SVC(kernel="rbf", C=1, gamma=0.001).fit(X[train], y[train])
print("train score: {0:.3f}, test score: {1:.3f}\n".format(
svc.score(X[train], y[train]), svc.score(X[test], y[test])))
"""
Explanation: In this case the model is almost perfectly able to generalize, at least according to our random train, test split.
Cross Validation
Cross Validation is a procedure that repeats the train / test split several times so as to get a more accurate estimate of the real test score by averaging the values found in the individual runs.
The sklearn.cross_validation package provides many strategies to compute such splits using classes that implement the python iterator API:
End of explanation
"""
from sklearn.cross_validation import cross_val_score
svc = SVC(kernel="rbf", C=1, gamma=0.001)
cv = ShuffleSplit(n_samples, n_iter=10, test_size=0.1,
random_state=0)
test_scores = cross_val_score(svc, X, y, cv=cv, n_jobs=2)
test_scores
from scipy.stats import sem
def mean_score(scores):
"""Print the empirical mean score and standard error of the mean."""
return ("Mean score: {0:.3f} (+/-{1:.3f})").format(
np.mean(scores), 2 * sem(scores))
print(mean_score(test_scores))
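scipy's sem is the sample standard deviation (ddof=1) divided by sqrt(n); a quick pure-Python cross-check with toy scores (hypothetical numbers, not the actual CV output):

```python
import math

def standard_error(scores):
    # Standard error of the mean: sample std (ddof=1) over sqrt(n).
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    return math.sqrt(var / n)

se = standard_error([0.90, 0.92, 0.88, 0.91])  # about 0.00854
```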
"""
Explanation: Instead of doing the above manually, sklearn.cross_validation provides a little utility function to compute the cross validated test scores automatically:
End of explanation
"""
%load solutions/05A_large_cross_validation.py
%load solutions/05B_cross_validation_score_histogram.py
"""
Explanation: Exercise:
Perform 50 iterations of cross validation with randomly sampled folds of 500 training samples and 500 test samples randomly sampled from X and y (use sklearn.cross_validation.ShuffleSplit).
Try with SVC(C=1, gamma=0.01)
Plot the distribution of the test error using a histogram with 50 bins.
Try to increase the training size
Retry with SVC(C=10, gamma=0.005), then SVC(C=10, gamma=0.001) with 500 samples.
Optional: use a smoothed kernel density estimation scipy.stats.kde.gaussian_kde instead of an histogram to visualize the test error distribution.
Hints, type:
from sklearn.cross_validation import ShuffleSplit
ShuffleSplit? # to read the docstring of the shuffle split
plt.hist? # to read the docstring of the histogram plot
End of explanation
"""
n_gammas = 10
n_iter = 5
cv = ShuffleSplit(n_samples, n_iter=n_iter, train_size=500, test_size=500,
random_state=0)
train_scores = np.zeros((n_gammas, n_iter))
test_scores = np.zeros((n_gammas, n_iter))
gammas = np.logspace(-7, -1, n_gammas)
for i, gamma in enumerate(gammas):
for j, (train, test) in enumerate(cv):
clf = SVC(C=10, gamma=gamma).fit(X[train], y[train])
train_scores[i, j] = clf.score(X[train], y[train])
test_scores[i, j] = clf.score(X[test], y[test])
def plot_validation_curves(param_values, train_scores, test_scores):
for i in range(train_scores.shape[1]):
plt.semilogx(param_values, train_scores[:, i], alpha=0.4, lw=2, c='b')
plt.semilogx(param_values, test_scores[:, i], alpha=0.4, lw=2, c='g')
plot_validation_curves(gammas, train_scores, test_scores)
plt.ylabel("score for SVC(C=10, gamma=gamma)")
plt.xlabel("gamma")
plt.text(1e-6, 0.5, "Underfitting", fontsize=16, ha='center', va='bottom')
plt.text(1e-4, 0.5, "Good", fontsize=16, ha='center', va='bottom')
plt.text(1e-2, 0.5, "Overfitting", fontsize=16, ha='center', va='bottom')
plt.title('Validation curves for the gamma parameter');
"""
Explanation: Model Selection with Grid Search
Cross Validation makes it possible to evaluate the performance of a model class and its hyper parameters on the task at hand.
A natural extension is thus to run CV several times for various values of the parameters so as to find the best. For instance, let's fix the SVC parameter to C=10 and compute the cross validated test score for various values of gamma:
End of explanation
"""
from sklearn.learning_curve import validation_curve
n_Cs = 10
Cs = np.logspace(-5, 5, n_Cs)
train_scores, test_scores = validation_curve(
SVC(gamma=1e-3), X, y, 'C', Cs, cv=cv)
plot_validation_curves(Cs, train_scores, test_scores)
plt.ylabel("score for SVC(C=C, gamma=1e-3)")
plt.xlabel("C")
plt.text(1e-3, 0.5, "Underfitting", fontsize=16, ha='center', va='bottom')
plt.text(1e3, 0.5, "Some Overfitting", fontsize=16, ha='center', va='bottom')
plt.title('Validation curves for the C parameter');
"""
Explanation: We can see that, for this model class, on this unscaled dataset: when C=10, there is a sweet spot region for gamma around $10^{-4}$ to $10^{-3}$. Both the train and test scores are high (low errors).
If gamma is too low, the train score is low (and thus the test score too, as it generally cannot be better than the train score): the model is not expressive enough to represent the data; the model is in an underfitting regime.
If gamma is too high, the train score is ok but there is a high discrepancy between the test and train scores. The model is learning the training data and its noise by heart and fails to generalize to new unseen data: the model is in an overfitting regime.
Note: scikit-learn provides tools to compute such curves easily; we can do the same kind of analysis to identify good values for C when gamma is fixed to $10^{-3}$:
End of explanation
"""
from sklearn.grid_search import GridSearchCV
#help(GridSearchCV)
from pprint import pprint
svc_params = {
'C': np.logspace(-1, 2, 4),
'gamma': np.logspace(-4, 0, 5),
}
pprint(svc_params)
"""
Explanation: Doing this procedure several times for each parameter combination is tedious, hence it's possible to automate it by computing the test score for all possible combinations of parameters using the GridSearchCV helper.
End of explanation
"""
n_subsamples = 500
X_small_train, y_small_train = X_train[:n_subsamples], y_train[:n_subsamples]
gs_svc = GridSearchCV(SVC(), svc_params, cv=3, n_jobs=-1)
%time _ = gs_svc.fit(X_small_train, y_small_train)
gs_svc.best_params_, gs_svc.best_score_
gs_svc.grid_scores_
first_score = gs_svc.grid_scores_[0]
first_score
dict(vars(first_score))
"""
Explanation: As Grid Search is a costly procedure, let's do some experiments with a smaller dataset:
End of explanation
"""
def display_scores(params, scores, append_star=False):
"""Format the mean score +/- std error for params"""
params = ", ".join("{0}={1}".format(k, v)
for k, v in params.items())
line = "{0}:\t{1:.3f} (+/-{2:.3f})".format(
params, np.mean(scores), sem(scores))
if append_star:
line += " *"
return line
def display_grid_scores(grid_scores, top=None):
"""Helper function to format a report on a grid of scores"""
grid_scores = sorted(grid_scores, key=lambda x: x[1], reverse=True)
if top is not None:
grid_scores = grid_scores[:top]
    # Compute a threshold for starring models whose stderr
    # overlaps that of the best model:
_, best_mean, best_scores = grid_scores[0]
threshold = best_mean - 2 * sem(best_scores)
for params, mean_score, scores in grid_scores:
append_star = mean_score + 2 * sem(scores) > threshold
print(display_scores(params, scores, append_star=append_star))
display_grid_scores(gs_svc.grid_scores_, top=20)
"""
Explanation: Let's define a couple of helper function to help us introspect the details of the grid search outcome:
End of explanation
"""
gs_svc.score(X_test, y_test)
"""
Explanation: One can see that Support Vector Machines with an RBF kernel are very sensitive w.r.t. the gamma parameter (the bandwidth of the kernel) and, to a lesser extent, to the C parameter as well. If those parameters are not grid searched, the predictive accuracy of the support vector machine is almost no better than random guessing!
By default, the GridSearchCV class refits a final model on the complete training set with the best parameters found during the grid search:
End of explanation
"""
from sklearn.tree import DecisionTreeClassifier
DecisionTreeClassifier()
tree = DecisionTreeClassifier()
tree_params = {
'criterion': ['gini', 'entropy'],
'min_samples_split': [2, 10, 20],
'max_depth': [5, 7, None],
}
cv = ShuffleSplit(n_subsamples, n_iter=50, test_size=0.1)
gs_tree = GridSearchCV(tree, tree_params, n_jobs=-1, cv=cv)
%time gs_tree.fit(X_train[:n_subsamples], y_train[:n_subsamples])
display_grid_scores(gs_tree.grid_scores_)
"""
Explanation: Evaluating this final model on the real test set will often yield a better score because of the larger training set, especially when the training set is small and the number of cross validation folds is small (cv=3 here).
Exercise:
Find a set of parameters for an sklearn.tree.DecisionTreeClassifier on the X_small_train / y_small_train digits dataset to reach at least 75% accuracy on the sample dataset (500 training samples)
In particular you can grid search good values for criterion, min_samples_split and max_depth
Which parameter(s) seems to be the most important to tune?
Retry with sklearn.ensemble.ExtraTreesClassifier(n_estimators=30) which is a randomized ensemble of decision trees. Do the parameters that make the single trees work best also make the ensemble model work best?
Hints:
If the outcome of the grid search is too unstable (overlapping std errors), increase the number of CV folds with the cv constructor parameter. The default value is cv=3. Increasing it to cv=5 or cv=10 often yields more stable results but at the price of longer evaluation times.
Start with a small grid, e.g. 2 values criterion and 3 for min_samples_split only to avoid having to wait for too long at first.
Type:
from sklearn.tree import DecisionTreeClassifier
DecisionTreeClassifier? # to read the docstring and know the list of important parameters
print(DecisionTreeClassifier()) # to show the list of default values
from sklearn.ensemble import ExtraTreesClassifier
ExtraTreesClassifier?
print(ExtraTreesClassifier())
Solution:
End of explanation
"""
unreg_tree = DecisionTreeClassifier(criterion='entropy', max_depth=None,
min_samples_split=2)
unreg_tree.fit(X_small_train, y_small_train)
print("Train score: %0.3f" % unreg_tree.score(X_small_train, y_small_train))
print("Test score: %0.3f" % unreg_tree.score(X_test, y_test))
"""
Explanation: As the dataset is quite small and decision trees are prone to overfitting, we need to cross-validate many times (e.g. n_iter=50) to get the standard error of the mean test score below 0.010.
At that level of precision one can observe that the entropy split criterion yields slightly better predictions than gini. One can also observe that traditional regularization strategies (limiting the depth of the tree or requiring a minimum number of samples for a node to split) do not work well on this problem.
Indeed, the unregularized decision tree (max_depth=None and min_samples_split=2) is among the top performers while it is clearly overfitting:
End of explanation
"""
reg_tree = DecisionTreeClassifier(criterion='entropy', max_depth=7,
min_samples_split=10)
reg_tree.fit(X_small_train, y_small_train)
print("Train score: %0.3f" % reg_tree.score(X_small_train, y_small_train))
print("Test score: %0.3f" % reg_tree.score(X_test, y_test))
"""
Explanation: Limiting the depth to 7 or setting the minimum number of samples per split to 10: this regularization adds as much bias (hence training error) as it removes variance (as measured by the gap between training and test scores), and hence does not make it possible to solve the overfitting issue efficiently, for instance:
End of explanation
"""
from sklearn.ensemble import ExtraTreesClassifier
print(ExtraTreesClassifier())
#ExtraTreesClassifier?
trees = ExtraTreesClassifier(n_estimators=30)
cv = ShuffleSplit(n_subsamples, n_iter=5, test_size=0.1)
gs_trees = GridSearchCV(trees, tree_params, n_jobs=-1, cv=cv)
%time gs_trees.fit(X_small_train, y_small_train)
display_grid_scores(gs_trees.grid_scores_)
"""
Explanation: From the grid search results one can also observe that regularizing too much is clearly detrimental: the models with a depth limited to 5 are clearly inferior to those limited to 7 or not depth-limited at all (on this dataset).
To combat the overfitting of decision trees, it is preferable to use an ensemble approach that randomizes the learning even further and then averages the predictions, as we will see with the ExtraTreesClassifier model class:
End of explanation
"""
unreg_trees = ExtraTreesClassifier(n_estimators=50, max_depth=None, min_samples_split=2)
unreg_trees.fit(X_small_train, y_small_train)
print("Train score: %0.3f" % unreg_trees.score(X_small_train, y_small_train))
print("Test score: %0.3f" % unreg_trees.score(X_test, y_test))
"""
Explanation: A couple of remarks:
ExtraTreesClassifier achieves much better generalization than individual decision trees (0.97 vs 0.80), even on such a small dataset, so it is indeed able to solve the overfitting issue of individual decision trees.
ExtraTreesClassifier takes much longer to train than an individual tree, but the fact that the predictions are averaged makes it unnecessary to cross-validate as many times to reach a standard error on the order of 0.010.
ExtraTreesClassifier is very robust to the choice of parameters: most grid search points achieve a good prediction (even when highly regularized), although too much regularization is harmful. We can also note that the split criterion is no longer relevant.
Finally, one can also observe that despite the high level of randomization of the individual trees, an ensemble model composed of unregularized trees is not underfitting:
End of explanation
"""
reg_trees = ExtraTreesClassifier(n_estimators=50, max_depth=7, min_samples_split=10)
reg_trees.fit(X_small_train, y_small_train)
print("Train score: %0.3f" % reg_trees.score(X_small_train, y_small_train))
print("Test score: %0.3f" % reg_trees.score(X_test, y_test))
"""
Explanation: More interestingly, an ensemble model composed of regularized trees underfits much less than the individual regularized trees:
End of explanation
"""
train_sizes = np.logspace(2, 3, 5).astype(int)  # note: np.int is deprecated in recent NumPy
train_sizes
"""
Explanation: Plotting Learning Curves for Bias-Variance analysis
In order to better understand the behavior of a model (model class + constructor parameters), it is possible to run several cross-validation steps for various random sub-samples of the training set and then plot the mean training and test errors.
These plots are called the learning curves.
sklearn does not yet provide turn-key utilities to plot such learning curves, but it is not very complicated to compute them by leveraging the ShuffleSplit class. First let's define a range of dataset sizes for subsampling the training set:
End of explanation
"""
n_iter = 20
train_scores = np.zeros((train_sizes.shape[0], n_iter), dtype=float)
test_scores = np.zeros((train_sizes.shape[0], n_iter), dtype=float)
"""
Explanation: For each training set size we will compute n_iter cross-validation iterations. Let's pre-allocate the arrays to store the results:
End of explanation
"""
svc = SVC(C=1, gamma=0.0005)
for i, train_size in enumerate(train_sizes):
cv = ShuffleSplit(n_samples, n_iter=n_iter, train_size=train_size)
for j, (train, test) in enumerate(cv):
svc.fit(X[train], y[train])
train_scores[i, j] = svc.score(X[train], y[train])
test_scores[i, j] = svc.score(X[test], y[test])
"""
Explanation: We can now loop over training set sizes and CV iterations:
End of explanation
"""
mean_train = np.mean(train_scores, axis=1)
confidence = sem(train_scores, axis=1) * 2
plt.fill_between(train_sizes,
mean_train - confidence,
mean_train + confidence,
color = 'b', alpha = .2)
plt.plot(train_sizes, mean_train, 'o-k', c='b', label='Train score')
mean_test = np.mean(test_scores, axis=1)
confidence = sem(test_scores, axis=1) * 2
plt.fill_between(train_sizes,
mean_test - confidence,
mean_test + confidence,
color = 'g', alpha = .2)
plt.plot(train_sizes, mean_test, 'o-k', c='g', label='Test score')
plt.xlabel('Training set size')
plt.ylabel('Score')
plt.xlim(0, X_train.shape[0])
plt.ylim((None, 1.01)) # The best possible score is 1.0
plt.legend(loc='best')
plt.text(250, 0.9, "Overfitting a lot", fontsize=16, ha='center', va='bottom')
plt.text(800, 0.9, "Overfitting a little", fontsize=16, ha='center', va='bottom')
plt.title('Mean train and test scores +/- 2 standard errors');
"""
Explanation: We can now plot the mean scores with error bars that reflect the standard errors of the means:
End of explanation
"""
from sklearn.learning_curve import learning_curve
"""
Explanation: Note: learning curves can be computed with their own utility function:
End of explanation
"""
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
clf = SVC().fit(X_train_scaled, y_train) # Look Ma'! Default params!
print("Train score: {0:.3f}".format(clf.score(X_train_scaled, y_train)))
print("Test score: {0:.3f}".format(clf.score(X_test_scaled, y_test)))
"""
Explanation: Interpreting Learning Curves
If the training set error is high (e.g. more than 5% misclassification) at the end of the learning curve, the model suffers from high bias and is said to underfit the training set.
If the testing set error is significantly larger than the training set error, the model suffers from high variance and is said to overfit the training set.
Another possible source of high training and testing error is label noise: the data is too noisy and there is too little signal to learn from it.
What to do against overfitting?
Try to get rid of noisy features using feature selection methods (or better let the model do it if the regularization is able to do so: for instance l1 penalized linear models)
Try to tune parameters to add more regularization:
Smaller values of C for SVM
Larger values of alpha for penalized linear models
Restrict to shallower trees (decision stumps) and lower numbers of samples per leafs for tree-based models
Try simpler model families such as penalized linear models (e.g. Linear SVM, Logistic Regression, Naive Bayes)
Try ensemble strategies that average the predictions of several independently trained models (e.g. bagging or blending ensembles)
Collect more labeled samples if the learning curve of the test score has a non-zero slope on the right-hand side.
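A small illustration of the second remedy (larger alpha adds regularization to a penalized linear model); the dataset, train size, and alpha values here are arbitrary choices for the sketch, not recommendations:

```python
# Sketch: increasing alpha regularizes a penalized linear model more strongly,
# typically shrinking the gap between train and test scores.
# (Values below are arbitrary choices for illustration.)
from sklearn.datasets import load_digits
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=100, random_state=0)

gaps = []
for alpha in (1e-4, 1.0, 100.0):
    clf = RidgeClassifier(alpha=alpha).fit(X_tr, y_tr)
    gap = clf.score(X_tr, y_tr) - clf.score(X_te, y_te)
    gaps.append(gap)
    print("alpha=%g  train/test gap: %0.3f" % (alpha, gap))
```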
What to do against underfitting?
Give more freedom to the model by relaxing some parameters that act as regularizers:
Larger values of C for SVM
Smaller values of alpha for penalized linear models
Allow deeper trees and lower numbers of samples per leafs for tree-based models
Try more complex / expressive model families:
Non linear kernel SVMs,
Ensemble of Decision Trees...
Construct new features:
bi-gram frequencies for text classifications
feature cross-products (possibly using the hashing trick)
unsupervised features extraction (e.g. triangle k-means, auto-encoders...)
non-linear kernel approximations + linear SVM instead of simple linear SVM
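As a sketch of the last idea (a kernel approximation feeding a linear SVM), using Nystroem from sklearn.kernel_approximation; the gamma and n_components values are illustrative guesses, not tuned settings:

```python
# Sketch: non-linear kernel approximation (Nystroem) + linear SVM.
# gamma / n_components below are illustrative guesses, not tuned values.
from sklearn.datasets import load_digits
from sklearn.kernel_approximation import Nystroem
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values into [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = make_pipeline(Nystroem(gamma=0.2, n_components=300, random_state=0),
                    LinearSVC(random_state=0))
clf.fit(X_tr, y_tr)
print("test score: %0.3f" % clf.score(X_te, y_te))
```

The pipeline keeps the fast training of a linear model while capturing some of the non-linear structure the RBF kernel would.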
Final Model Assessment
Grid search parameter tuning can itself be considered a (meta-)learning algorithm. Hence there is a risk of not taking into account the overfitting of the grid search procedure itself.
To quantify and mitigate this risk we can nest the train / test split concept one level up:
Make a top-level "Development / Evaluation" split:
Development set used for Grid Search and training of the model with optimal parameter set
Hold out evaluation set used only for estimating the predictive performance of the resulting model
For datasets sampled over time, it is highly recommended to use a temporal split for the Development / Evaluation split: for instance, if you have collected data over the 2008-2013 period, you can:
use 2008-2011 for development (grid search for the optimal parameters and model class),
use 2012-2013 for evaluation (compute the test score of the best model parameters).
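A minimal sketch of such a nested split (random rather than temporal here, and with an arbitrary small grid):

```python
# Sketch: development/evaluation split on top of grid-search cross-validation.
# The grid values are arbitrary; a temporal split is preferable when the data
# are sampled over time.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_dev, X_eval, y_dev, y_eval = train_test_split(X, y, test_size=0.25,
                                                random_state=0)

gs = GridSearchCV(SVC(), {'C': [1, 10], 'gamma': [1e-4, 5e-4, 1e-3]}, cv=3)
gs.fit(X_dev, y_dev)                 # grid search on the development set only
print(gs.best_params_)
print("evaluation score: %0.3f" % gs.score(X_eval, y_eval))
```

Only the final gs.score(X_eval, y_eval) number is an unbiased estimate of the tuned model's performance, since the evaluation set played no role in the search.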
One Final Note About kernel SVM Parameters Tuning
In this session we applied the SVC model with an RBF kernel on unnormalized features: this is bad! If we had used a normalizer, the default parameters for C and gamma of SVC would have led directly to close-to-optimal performance:
End of explanation
"""
import os
import numpy as np
import matplotlib.pyplot as plt
import mne
"""
Explanation: The Raw data structure: continuous data
This tutorial covers the basics of working with raw EEG/MEG data in Python. It
introduces the :class:~mne.io.Raw data structure in detail, including how to
load, query, subselect, export, and plot data from a :class:~mne.io.Raw
object. For more info on visualization of :class:~mne.io.Raw objects, see
tut-visualize-raw. For info on creating a :class:~mne.io.Raw object
from simulated data in a :class:NumPy array <numpy.ndarray>, see
tut_creating_data_structures.
As usual we'll start by importing the modules we need:
End of explanation
"""
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
"""
Explanation: Loading continuous data
^^^^^^^^^^^^^^^^^^^^^^^
.. sidebar:: Datasets in MNE-Python
There are ``data_path`` functions for several example datasets in
MNE-Python (e.g., :func:`mne.datasets.kiloword.data_path`,
:func:`mne.datasets.spm_face.data_path`, etc). All of them will check the
default download location first to see if the dataset is already on your
computer, and only download it if necessary. The default download
location is also configurable; see the documentation of any of the
``data_path`` functions for more information.
As mentioned in the introductory tutorial <tut-overview>,
MNE-Python data structures are based around
the :file:.fif file format from Neuromag. This tutorial uses an
example dataset <sample-dataset> in :file:.fif format, so here we'll
use the function :func:mne.io.read_raw_fif to load the raw data; there are
reader functions for a wide variety of other data formats
<data-formats> as well.
There are also several other example datasets
<datasets> that can be downloaded with just a few lines
of code. Functions for downloading example datasets are in the
:mod:mne.datasets submodule; here we'll use
:func:mne.datasets.sample.data_path to download the "sample-dataset"
dataset, which contains EEG, MEG, and structural MRI data from one subject
performing an audiovisual experiment. When it's done downloading,
:func:~mne.datasets.sample.data_path will return the folder location where
it put the files; you can navigate there with your file browser if you want
to examine the files yourself. Once we have the file path, we can load the
data with :func:~mne.io.read_raw_fif. This will return a
:class:~mne.io.Raw object, which we'll store in a variable called raw.
End of explanation
"""
print(raw)
"""
Explanation: As you can see above, :func:~mne.io.read_raw_fif automatically displays
some information about the file it's loading. For example, here it tells us
that there are three "projection items" in the file along with the recorded
data; those are :term:SSP projectors <projector> calculated to remove
environmental noise from the MEG signals, and are discussed in the tutorial
tut-projectors-background.
In addition to the information displayed during loading, you can
get a glimpse of the basic details of a :class:~mne.io.Raw object by
printing it:
End of explanation
"""
raw.crop(tmax=60).load_data()
"""
Explanation: By default, the :samp:mne.io.read_raw_{*} family of functions will not
load the data into memory (instead the data on disk are memory-mapped_,
meaning the data are only read from disk as-needed). Some operations (such as
filtering) require that the data be copied into RAM; to do that we could have
passed the preload=True parameter to :func:~mne.io.read_raw_fif, but we
can also copy the data into RAM at any time using the
:meth:~mne.io.Raw.load_data method. However, since this particular tutorial
doesn't do any serious analysis of the data, we'll first
:meth:~mne.io.Raw.crop the :class:~mne.io.Raw object to 60 seconds so it
uses less memory and runs more smoothly on our documentation server.
End of explanation
"""
n_time_samps = raw.n_times
time_secs = raw.times
ch_names = raw.ch_names
n_chan = len(ch_names) # note: there is no raw.n_channels attribute
print('the (cropped) sample data object has {} time samples and {} channels.'
''.format(n_time_samps, n_chan))
print('The last time sample is at {} seconds.'.format(time_secs[-1]))
print('The first few channel names are {}.'.format(', '.join(ch_names[:3])))
print() # insert a blank line in the output
# some examples of raw.info:
print('bad channels:', raw.info['bads']) # chs marked "bad" during acquisition
print(raw.info['sfreq'], 'Hz') # sampling frequency
print(raw.info['description'], '\n') # miscellaneous acquisition info
print(raw.info)
"""
Explanation: Querying the Raw object
^^^^^^^^^^^^^^^^^^^^^^^
.. sidebar:: Attributes vs. Methods
**Attributes** are usually static properties of Python objects — things
that are pre-computed and stored as part of the object's representation
in memory. Attributes are accessed with the ``.`` operator and do not
require parentheses after the attribute name (example: ``raw.ch_names``).
**Methods** are like specialized functions attached to an object.
Usually they require additional user input and/or need some computation
to yield a result. Methods always have parentheses at the end; additional
arguments (if any) go inside those parentheses (examples:
``raw.estimate_rank()``, ``raw.drop_channels(['EEG 030', 'MEG 2242'])``).
We saw above that printing the :class:~mne.io.Raw object displays some
basic information like the total number of channels, the number of time
points at which the data were sampled, total duration, and the approximate
size in memory. Much more information is available through the various
attributes and methods of the :class:~mne.io.Raw class. Some useful
attributes of :class:~mne.io.Raw objects include a list of the channel
names (:attr:~mne.io.Raw.ch_names), an array of the sample times in seconds
(:attr:~mne.io.Raw.times), and the total number of samples
(:attr:~mne.io.Raw.n_times); a list of all attributes and methods is given
in the documentation of the :class:~mne.io.Raw class.
The Raw.info attribute
~~~~~~~~~~~~~~~~~~~~~~~~~~
There is also quite a lot of information stored in the raw.info
attribute, which stores an :class:~mne.Info object that is similar to a
:class:Python dictionary <dict> (in that it has fields accessed via named
keys). Like Python dictionaries, raw.info has a .keys() method that
shows all the available field names; unlike Python dictionaries, printing
raw.info will print a nicely-formatted glimpse of each field's data. See
tut-info-class for more on what is stored in :class:~mne.Info
objects, and how to interact with them.
End of explanation
"""
print(raw.time_as_index(20))
print(raw.time_as_index([20, 30, 40]), '\n')
print(np.diff(raw.time_as_index([1, 2, 3])))
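The conversion above can be sketched in plain NumPy: it is essentially rounding time * sfreq to the nearest sample. (This is a simplified sketch; MNE's actual implementation may truncate rather than round depending on version and options, and the sfreq value below is only an approximation of the sample dataset's rate.)

```python
# Simplified NumPy sketch of what time_as_index computes: the index of the
# sample closest to each requested time. (MNE may truncate rather than round,
# depending on version and options; sfreq here is only approximate.)
import numpy as np

sfreq = 600.614990234375  # approximate sampling rate of the sample data (Hz)
times = np.array([1.0, 2.0, 3.0])
indices = np.round(times * sfreq).astype(int)
print(indices)           # sample indices nearest the requested times
print(np.diff(indices))  # note the counts differ by one sample
```

Because sfreq is not an integer, the number of samples between successive whole seconds alternates, which is exactly the behavior demonstrated above.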
"""
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Most of the fields of ``raw.info`` reflect metadata recorded at
acquisition time, and should not be changed by the user. There are a few
exceptions (such as ``raw.info['bads']`` and ``raw.info['projs']``), but
in most cases there are dedicated MNE-Python functions or methods to
update the :class:`~mne.Info` object safely (such as
:meth:`~mne.io.Raw.add_proj` to update ``raw.info['projs']``).</p></div>
Time, sample number, and sample index
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. sidebar:: Sample numbering in VectorView data
For data from VectorView systems, it is important to distinguish *sample
number* from *sample index*. See :term:`first_samp` for more information.
One method of :class:~mne.io.Raw objects that is frequently useful is
:meth:~mne.io.Raw.time_as_index, which converts a time (in seconds) into
the integer index of the sample occurring closest to that time. The method
can also take a list or array of times, and will return an array of indices.
It is important to remember that there may not be a data sample at exactly
the time requested, so the number of samples between time = 1 second and
time = 2 seconds may be different than the number of samples between
time = 2 and time = 3:
End of explanation
"""
eeg_and_eog = raw.copy().pick_types(meg=False, eeg=True, eog=True)
print(len(raw.ch_names), '→', len(eeg_and_eog.ch_names))
"""
Explanation: Modifying Raw objects
^^^^^^^^^^^^^^^^^^^^^^^^^
.. sidebar:: len(raw)
Although the :class:`~mne.io.Raw` object underlyingly stores data samples
in a :class:`NumPy array <numpy.ndarray>` of shape (n_channels,
n_timepoints), the :class:`~mne.io.Raw` object behaves differently from
:class:`NumPy arrays <numpy.ndarray>` with respect to the :func:`len`
function. ``len(raw)`` will return the number of timepoints (length along
data axis 1), not the number of channels (length along data axis 0).
Hence in this section you'll see ``len(raw.ch_names)`` to get the number
of channels.
:class:~mne.io.Raw objects have a number of methods that modify the
:class:~mne.io.Raw instance in-place and return a reference to the modified
instance. This can be useful for method chaining_
(e.g., raw.crop(...).filter(...).pick_channels(...).plot())
but it also poses a problem during interactive analysis: if you modify your
:class:~mne.io.Raw object for an exploratory plot or analysis (say, by
dropping some channels), you will then need to re-load the data (and repeat
any earlier processing steps) to undo the channel-dropping and try something
else. For that reason, the examples in this section frequently use the
:meth:~mne.io.Raw.copy method before the other methods being demonstrated,
so that the original :class:~mne.io.Raw object is still available in the
variable raw for use in later examples.
Selecting, dropping, and reordering channels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Altering the channels of a :class:~mne.io.Raw object can be done in several
ways. As a first example, we'll use the :meth:~mne.io.Raw.pick_types method
to restrict the :class:~mne.io.Raw object to just the EEG and EOG channels:
End of explanation
"""
raw_temp = raw.copy()
print('Number of channels in raw_temp:')
print(len(raw_temp.ch_names), end=' → drop two → ')
raw_temp.drop_channels(['EEG 037', 'EEG 059'])
print(len(raw_temp.ch_names), end=' → pick three → ')
raw_temp.pick_channels(['MEG 1811', 'EEG 017', 'EOG 061'])
print(len(raw_temp.ch_names))
"""
Explanation: Similar to the :meth:~mne.io.Raw.pick_types method, there is also the
:meth:~mne.io.Raw.pick_channels method to pick channels by name, and a
corresponding :meth:~mne.io.Raw.drop_channels method to remove channels by
name:
End of explanation
"""
channel_names = ['EOG 061', 'EEG 003', 'EEG 002', 'EEG 001']
eog_and_frontal_eeg = raw.copy().reorder_channels(channel_names)
print(eog_and_frontal_eeg.ch_names)
"""
Explanation: If you want the channels in a specific order (e.g., for plotting),
:meth:~mne.io.Raw.reorder_channels works just like
:meth:~mne.io.Raw.pick_channels but also reorders the channels; for
example, here we pick the EOG and frontal EEG channels, putting the EOG
first and the EEG in reverse order:
End of explanation
"""
raw.rename_channels({'EOG 061': 'blink detector'})
"""
Explanation: Changing channel name and type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. sidebar:: Long channel names
Due to limitations in the :file:`.fif` file format (which MNE-Python uses
to save :class:`~mne.io.Raw` objects), channel names are limited to a
maximum of 15 characters.
You may have noticed that the EEG channel names in the sample data are
numbered rather than labelled according to a standard nomenclature such as
the 10-20 <ten_twenty_> or 10-05 <ten_oh_five_> systems, or perhaps it
bothers you that the channel names contain spaces. It is possible to rename
channels using the :meth:~mne.io.Raw.rename_channels method, which takes a
Python dictionary to map old names to new names. You need not rename all
channels at once; provide only the dictionary entries for the channels you
want to rename. Here's a frivolous example:
End of explanation
"""
print(raw.ch_names[-3:])
channel_renaming_dict = {name: name.replace(' ', '_') for name in raw.ch_names}
raw.rename_channels(channel_renaming_dict)
print(raw.ch_names[-3:])
"""
Explanation: This next example replaces spaces in the channel names with underscores,
using a Python dict comprehension_:
End of explanation
"""
raw.set_channel_types({'EEG_001': 'eog'})
print(raw.copy().pick_types(meg=False, eog=True).ch_names)
"""
Explanation: If for some reason the channel types in your :class:~mne.io.Raw object are
inaccurate, you can change the type of any channel with the
:meth:~mne.io.Raw.set_channel_types method. The method takes a
:class:dictionary <dict> mapping channel names to types; allowed types are
ecg, eeg, emg, eog, exci, ias, misc, resp, seeg, stim, syst, ecog, hbo,
hbr. A common use case for changing channel type is when using frontal EEG
electrodes as makeshift EOG channels:
End of explanation
"""
raw_selection = raw.copy().crop(tmin=10, tmax=12.5)
print(raw_selection)
"""
Explanation: Selection in the time domain
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you want to limit the time domain of a :class:~mne.io.Raw object, you
can use the :meth:~mne.io.Raw.crop method, which modifies the
:class:~mne.io.Raw object in place (we've seen this already at the start of
this tutorial, when we cropped the :class:~mne.io.Raw object to 60 seconds
to reduce memory demands). :meth:~mne.io.Raw.crop takes parameters tmin
and tmax, both in seconds (here we'll again use :meth:~mne.io.Raw.copy
first to avoid changing the original :class:~mne.io.Raw object):
End of explanation
"""
print(raw_selection.times.min(), raw_selection.times.max())
raw_selection.crop(tmin=1)
print(raw_selection.times.min(), raw_selection.times.max())
"""
Explanation: :meth:~mne.io.Raw.crop also modifies the :attr:~mne.io.Raw.first_samp and
:attr:~mne.io.Raw.times attributes, so that the first sample of the cropped
object now corresponds to time = 0. Accordingly, if you wanted to re-crop
raw_selection from 11 to 12.5 seconds (instead of 10 to 12.5 as above)
then the subsequent call to :meth:~mne.io.Raw.crop should get tmin=1
(not tmin=11), and leave tmax unspecified to keep everything from
tmin up to the end of the object:
End of explanation
"""
raw_selection1 = raw.copy().crop(tmin=30, tmax=30.1) # 0.1 seconds
raw_selection2 = raw.copy().crop(tmin=40, tmax=41.1) # 1.1 seconds
raw_selection3 = raw.copy().crop(tmin=50, tmax=51.3) # 1.3 seconds
raw_selection1.append([raw_selection2, raw_selection3]) # 2.5 seconds total
print(raw_selection1.times.min(), raw_selection1.times.max())
"""
Explanation: Remember that sample times don't always align exactly with requested tmin
or tmax values (due to sampling), which is why the max values of the
cropped files don't exactly match the requested tmax (see
time-as-index for further details).
If you need to select discontinuous spans of a :class:~mne.io.Raw object —
or combine two or more separate :class:~mne.io.Raw objects — you can use
the :meth:~mne.io.Raw.append method:
End of explanation
"""
sampling_freq = raw.info['sfreq']
start_stop_seconds = np.array([11, 13])
start_sample, stop_sample = (start_stop_seconds * sampling_freq).astype(int)
channel_index = 0
raw_selection = raw[channel_index, start_sample:stop_sample]
print(raw_selection)
"""
Explanation: <div class="alert alert-danger"><h4>Warning</h4><p>Be careful when concatenating :class:`~mne.io.Raw` objects from different
recordings, especially when saving: :meth:`~mne.io.Raw.append` only
preserves the ``info`` attribute of the initial :class:`~mne.io.Raw`
object (the one outside the :meth:`~mne.io.Raw.append` method call).</p></div>
Extracting data from Raw objects
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
So far we've been looking at ways to modify a :class:~mne.io.Raw object.
This section shows how to extract the data from a :class:~mne.io.Raw object
into a :class:NumPy array <numpy.ndarray>, for analysis or plotting using
functions outside of MNE-Python. To select portions of the data,
:class:~mne.io.Raw objects can be indexed using square brackets. However,
indexing :class:~mne.io.Raw works differently than indexing a :class:NumPy
array <numpy.ndarray> in two ways:
Along with the requested sample value(s) MNE-Python also returns an array
of times (in seconds) corresponding to the requested samples. The data
array and the times array are returned together as elements of a tuple.
The data array will always be 2-dimensional even if you request only a
single time sample or a single channel.
Extracting data by index
~~~~~~~~~~~~~~~~~~~~~~~~
To illustrate the above two points, let's select a couple seconds of data
from the first channel:
End of explanation
"""
x = raw_selection[1]
y = raw_selection[0].T
plt.plot(x, y)
"""
Explanation: You can see that it contains 2 arrays. This combination of data and times
makes it easy to plot selections of raw data (although note that we're
transposing the data array so that each channel is a column instead of a row,
to match what matplotlib expects when plotting 2-dimensional y against
1-dimensional x):
End of explanation
"""
channel_names = ['MEG_0712', 'MEG_1022']
two_meg_chans = raw[channel_names, start_sample:stop_sample]
y_offset = np.array([5e-11, 0]) # just enough to separate the channel traces
x = two_meg_chans[1]
y = two_meg_chans[0].T + y_offset
lines = plt.plot(x, y)
plt.legend(lines, channel_names)
"""
Explanation: Extracting channels by name
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The :class:~mne.io.Raw object can also be indexed with the names of
channels instead of their index numbers. You can pass a single string to get
just one channel, or a list of strings to select multiple channels. As with
integer indexing, this will return a tuple of (data_array, times_array)
that can be easily plotted. Since we're plotting 2 channels this time, we'll
add a vertical offset to one channel so it's not plotted right on top
of the other one:
End of explanation
"""
eeg_channel_indices = mne.pick_types(raw.info, meg=False, eeg=True)
eeg_data, times = raw[eeg_channel_indices]
print(eeg_data.shape)
"""
Explanation: Extracting channels by type
~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are several ways to select all channels of a given type from a
:class:~mne.io.Raw object. The safest method is to use
:func:mne.pick_types to obtain the integer indices of the channels you
want, then use those indices with the square-bracket indexing method shown
above. The :func:~mne.pick_types function uses the :class:~mne.Info
attribute of the :class:~mne.io.Raw object to determine channel types, and
takes boolean or string parameters to indicate which type(s) to retain. The
meg parameter defaults to True, and all others default to False,
so to get just the EEG channels, we pass eeg=True and meg=False:
End of explanation
"""
data = raw.get_data()
print(data.shape)
"""
Explanation: Some of the parameters of :func:mne.pick_types accept string arguments as
well as booleans. For example, the meg parameter can take values
'mag', 'grad', 'planar1', or 'planar2' to select only
magnetometers, all gradiometers, or a specific type of gradiometer. See the
docstring of :meth:mne.pick_types for full details.
The Raw.get_data() method
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you only want the data (not the corresponding array of times),
:class:~mne.io.Raw objects have a :meth:~mne.io.Raw.get_data method. Used
with no parameters specified, it will extract all data from all channels, in
a (n_channels, n_timepoints) :class:NumPy array <numpy.ndarray>:
End of explanation
"""
data, times = raw.get_data(return_times=True)
print(data.shape)
print(times.shape)
"""
Explanation: If you want the array of times, :meth:~mne.io.Raw.get_data has an optional
return_times parameter:
End of explanation
"""
first_channel_data = raw.get_data(picks=0)
eeg_and_eog_data = raw.get_data(picks=['eeg', 'eog'])
two_meg_chans_data = raw.get_data(picks=['MEG_0712', 'MEG_1022'],
start=1000, stop=2000)
print(first_channel_data.shape)
print(eeg_and_eog_data.shape)
print(two_meg_chans_data.shape)
"""
Explanation: The :meth:~mne.io.Raw.get_data method can also be used to extract specific
channel(s) and sample ranges, via its picks, start, and stop
parameters. The picks parameter accepts integer channel indices, channel
names, or channel types, and preserves the requested channel order given as
its picks parameter.
End of explanation
"""
data = raw.get_data()
np.save(file='my_data.npy', arr=data)
"""
Explanation: Summary of ways to extract data from Raw objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following table summarizes the various ways of extracting data from a
:class:~mne.io.Raw object.
.. cssclass:: table-bordered
.. rst-class:: midvalign
+-------------------------------------+-------------------------+
| Python code | Result |
| | |
| | |
+=====================================+=========================+
| raw.get_data() | :class:NumPy array |
| | <numpy.ndarray> |
| | (n_chans × n_samps) |
+-------------------------------------+-------------------------+
| raw[:] | :class:tuple of (data |
+-------------------------------------+ (n_chans × n_samps), |
| raw.get_data(return_times=True) | times (1 × n_samps)) |
+-------------------------------------+-------------------------+
| raw[0, 1000:2000] | |
+-------------------------------------+ |
| raw['MEG 0113', 1000:2000] | |
+-------------------------------------+ |
| raw.get_data(picks=0, | :class:`tuple` of |
| start=1000, stop=2000, | (data (1 × 1000), |
| return_times=True) | times (1 × 1000)) |
+-------------------------------------+ |
| raw.get_data(picks='MEG 0113', | |
| start=1000, stop=2000, | |
| return_times=True) | |
+-------------------------------------+-------------------------+
| raw[7:9, 1000:2000] | |
+-------------------------------------+ |
| raw[[2, 5], 1000:2000] | :class:tuple of |
+-------------------------------------+ (data (2 × 1000), |
| raw[['EEG 030', 'EOG 061'], | times (1 × 1000)) |
| 1000:2000] | |
+-------------------------------------+-------------------------+
Exporting and saving Raw objects
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
:class:`~mne.io.Raw` objects have a built-in :meth:`~mne.io.Raw.save` method,
which can be used to write a partially processed :class:`~mne.io.Raw` object
to disk as a :file:`.fif` file, such that it can be re-loaded later with its
various attributes intact (but see the section on precision for an important
note about numerical precision when saving).
There are a few other ways to export just the sensor data from a
:class:`~mne.io.Raw` object. One is to use indexing or the
:meth:`~mne.io.Raw.get_data` method to extract the data, and use
:func:`numpy.save` to save the data array:
End of explanation
"""
sampling_freq = raw.info['sfreq']
start_end_secs = np.array([10, 13])
start_sample, stop_sample = (start_end_secs * sampling_freq).astype(int)
df = raw.to_data_frame(picks=['eeg'], start=start_sample, stop=stop_sample)
# then save using df.to_csv(...), df.to_hdf(...), etc
print(df.head())
"""
Explanation: It is also possible to export the data to a :class:`Pandas DataFrame
<pandas.DataFrame>` object, and use the saving methods that :mod:`Pandas
<pandas>` affords. The :class:`~mne.io.Raw` object's
:meth:`~mne.io.Raw.to_data_frame` method is similar to
:meth:`~mne.io.Raw.get_data` in that it has a ``picks`` parameter for
restricting which channels are exported, and ``start`` and ``stop``
parameters for restricting the time domain. Note that, by default, times will
be converted to milliseconds, rounded to the nearest millisecond, and used as
the DataFrame index; see the ``scaling_time`` parameter in the documentation
of :meth:`~mne.io.Raw.to_data_frame` for more details.
End of explanation
"""
|
mjbommar/cscs-530-w2016 | notebooks/basic-random/003-random-seeds.ipynb | bsd-2-clause | %matplotlib inline
# Imports
import numpy
import numpy.random
import matplotlib.pyplot as plt
"""
Explanation: CSCS530 Winter 2015
Complex Systems 530 - Computer Modeling of Complex Systems (Winter 2015)
Course ID: CMPLXSYS 530
Course Title: Computer Modeling of Complex Systems
Term: Winter 2015
Schedule: Wednesdays and Friday, 1:00-2:30PM ET
Location: 120 West Hall (http://www.lsa.umich.edu/cscs/research/computerlab)
Teachers: Mike Bommarito and Sarah Cherng
View this repository on NBViewer
End of explanation
"""
# Let's make a random draw without seeding/controlling our RNG
for n in range(3):
print("Draw {0}".format(n))
X = numpy.random.uniform(size=10)
print(X)
print(X.mean())
print("=" * 16 + "\n")
"""
Explanation: Random number generation and seeds
Basic reading on random number generation:
http://en.wikipedia.org/wiki/Random_number_generation
On Determinism
The second method uses computational algorithms that can produce long sequences of apparently random results, which are in fact completely determined by a shorter initial value, known as a seed or key. The latter type are often called pseudorandom number generators. These types of generators do not typically rely on sources of naturally occurring entropy, though they may be periodically seeded by natural sources, they are non-blocking i.e. not rate-limited by an external event.
A "random number generator" based solely on deterministic computation cannot be regarded as a "true" random number generator in the purest sense of the word, since their output is inherently predictable if all seed values are known. In practice however they are sufficient for most tasks. Carefully designed and implemented pseudo-random number generators can even be certified for security-critical cryptographic purposes, as is the case with the yarrow algorithm and fortuna (PRNG). (The former being the basis of the /dev/random source of entropy on FreeBSD, AIX, Mac OS X, NetBSD and others. OpenBSD also uses a pseudo-random number algorithm based on ChaCha20 known as arc4random.[5])
On distributions
Random numbers uniformly distributed between 0 and 1 can be used to generate random numbers of any desired distribution by passing them through the inverse cumulative distribution function (CDF) of the desired distribution. Inverse CDFs are also called quantile functions. To generate a pair of statistically independent standard normally distributed random numbers (x, y), one may first generate the polar coordinates (r, θ), where r² ~ χ²(2) and θ ~ Uniform(0, 2π) (see Box–Muller transform).
Without a seeded RNG
End of explanation
"""
# Now let's try again with a fixed seed
seed = 0
# Let's make a random draw without seeding/controlling our RNG
for n in range(3):
print("Draw {0}".format(n))
rs = numpy.random.RandomState(seed)
Y = rs.uniform(size=10)
print(Y)
print(Y.mean())
print("=" * 16 + "\n")
"""
Explanation: With a seeded RNG
End of explanation
"""
|
robertoalotufo/ia898 | 2S2018/11 Teorema da Convolucao.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
from numpy.fft import *
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
f = np.array([[1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,1],
[0,0,0,0,0,0,0,0,0]])
print("Image (f):")
print(f)
h = np.array([[1,2,3],
[4,5,6]])
print("\n Image Kernel (h):")
print(h)
g1 = ia.pconv(f,h)
print("Image Output (pconv):")
print(g1)
"""
Explanation: The Convolution Theorem
Periodic convolution
Before discussing the convolution theorem, we need to understand periodic convolution (pconv). So far we have seen linear convolution (conv or scipy.signal.convolve2d), where the kernel $h$ has its origin at its center and the image $f$ has its origin at the top-left corner. In periodic convolution, the origin of the kernel $h$ coincides with the origin of the image $f$. Both the kernel and the image are periodic, with the same period. Since the kernel $h$ is usually much smaller than the image $f$, it is zero-padded to the size of $f$.
End of explanation
"""
fr = np.linspace(-1,1,6)
f = np.array([fr,2*fr,fr,fr])
print(f)
hh = np.array([-1,0,+1])
h = np.array([hh,2*hh,hh])
print(h)
g = ia.pconv(f,h)
print(g)
"""
Explanation: The convolution theorem
The convolution theorem states that
$$ F(f * g) = F(f) \cdot F(g) $$
$$ F(f\cdot g) = F(f) * F(g) $$
where $F$ denotes the Fourier transform operator, i.e., $F(f)$ and $F(g)$ are the transforms of $f$ and $g$. It is important to note that the convolution used here is the periodic convolution.
Let us illustrate the convolution theorem with a numerical example. First, we compute the periodic convolution of an image $f$ with a kernel $h$
End of explanation
"""
# Zero-pad h to the size of f
aux = np.zeros(f.shape)
r,c = h.shape
aux[:r,:c] = h
# Compute the Fourier transforms of f and h
F = fft2(f)
H = fft2(aux)
# Multiply the transforms
G = F * H
# Compute the inverse transform
gg = ifft2(G)
print("Result gg: \n",np.around(gg))
"""
Explanation: Now let us compute the Fourier transform $F(f)$ of the image and $F(h)$ of the kernel. First of all, we need to make sure that the image $f$ and the kernel $h$ are periodic and have the same size.
End of explanation
"""
print('Did the convolution theorem work?', np.allclose(gg.real,g))
"""
Explanation: By the convolution theorem, gg and g should be equal:
End of explanation
"""
|
4DGenome/Chromosomal-Conformation-Course | Participants/JCarlos/02_Parsing.ipynb | gpl-3.0 | from pytadbit.parsers.genome_parser import parse_fasta
genome_seq = parse_fasta('/media/storage/db/reference_genome/Homo_sapiens/hg38/hg38.fa')
maps1 = [
'results/HindIII/01_mapping/mapHindIII_r1/K562_HindIII_1_full_1-end.map',
'results/HindIII/01_mapping/mapHindIII_r1/K562_HindIII_1_frag_1-end.map']
maps2 = [
'results/HindIII/01_mapping/mapHindIII_r2/K562_HindIII_2_full_1-end.map',
'results/HindIII/01_mapping/mapHindIII_r2/K562_HindIII_2_frag_1-end.map']
! mkdir -p results/HindIII/02_parsing
from pytadbit.parsers.map_parser import parse_map
parse_map(maps1,
maps2,
'results/HindIII/02_parsing/reads1.tsv',
'results/HindIII/02_parsing/reads2.tsv',
genome_seq=genome_seq, re_name='HindIII',
verbose=True)
"""
Explanation: Parsing mapped reads
End of explanation
"""
! head -n 50 results/HindIII/02_parsing/reads1.tsv
from pytadbit.mapping import get_intersection
"""
Explanation: The outputs are tables listing each read's ID, chromosome (chrm), position, length, and restriction fragment start-end
End of explanation
"""
! mkdir -p results/HindIII/03_filtering
get_intersection('results/HindIII/02_parsing/reads1.tsv',
'results/HindIII/02_parsing/reads2.tsv',
'results/HindIII/03_filtering/reads12.tsv',
verbose=True)
! head -n50 results/HindIII/03_filtering/reads12.tsv
from pytadbit.mapping.analyze import plot_distance_vs_interactions
"""
Explanation: This function will filter out all reads that are not mapped on both sides
End of explanation
"""
plot_distance_vs_interactions('results/HindIII/03_filtering/reads12.tsv',
resolution=100000, max_diff=1000, show=True)
from pytadbit.mapping.analyze import plot_genomic_distribution
"""
Explanation: This function plots the number of interactions as a function of distance
Each dot is a 100 kb bin; the Y axis shows the interaction counts and the X axis the genomic distance. Interactions between nearby fragments should therefore show the highest counts, while pairs of fragments far apart should show few or no read counts
End of explanation
"""
plot_genomic_distribution (
'results/HindIII/03_filtering/reads12.tsv',
resolution=500000, show=True)
from pytadbit.mapping.analyze import hic_map
hic_map('results/HindIII/03_filtering/reads12.tsv',
resolution=1000000, show=True)
"""
Explanation: This function will plot the genomic distribution of read counts per chromosome, in 500 kb bins (matching the resolution argument)
(files can be saved as CSV, TSV or images via the 'save' argument, with the file format determined by the extension)
End of explanation
"""
from pytadbit.mapping.analyze import insert_sizes
insert_sizes('results/HindIII/03_filtering/reads12.tsv',show=True,nreads=100000)
from pytadbit.mapping.filter import filter_reads
filter_reads('results/HindIII/03_filtering/reads12.tsv',max_molecule_length=750,min_dist_to_re=500)
"""
Explanation: Plot the interaction matrix and print a summary of the interactions:
Cis interactions
End of explanation
"""
masked = {1: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_self-circle.tsv',
'name': 'self-circle',
'reads': 37383},
2: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_dangling-end.tsv',
'name': 'dangling-end',
'reads': 660146},
3: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_error.tsv',
'name': 'error',
'reads': 37395},
4: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_extra_dangling-end.tsv',
'name': 'extra dangling-end',
'reads': 3773498},
5: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_too_close_from_RES.tsv',
'name': 'too close from RES',
'reads': 3277369},
6: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_too_short.tsv',
'name': 'too short',
'reads': 296853},
7: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_too_large.tsv',
'name': 'too large',
'reads': 1843},
8: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_over-represented.tsv',
'name': 'over-represented',
'reads': 411157},
9: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_duplicated.tsv',
'name': 'duplicated',
'reads': 324490},
10: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_random_breaks.tsv',
'name': 'random breaks',
'reads': 968492}}
from pytadbit.mapping.filter import apply_filter
"""
Explanation: This function will identify all types of reads that can be filtered
End of explanation
"""
apply_filter('results/HindIII/03_filtering/reads12.tsv',
'results/HindIII/03_filtering/reads12_valid.tsv',
masked,filters=[1,2,3,4,9,10])
"""
Explanation: This will filter the reads based on the selected filters: self-circles (1), dangling ends (2), errors (3), extra dangling ends (4), duplicates (9) and random breaks (10), writing only the valid read pairs to the output file
End of explanation
"""
|
amillner/pyiat | example/pyiat_example.ipynb | gpl-3.0 | d=pd.read_csv('iat_data.csv',index_col=0)
d.head()
#Number of trials per subject
#Note that Subject 1 has too few trials
d.groupby('subjnum').subjnum.count().head()
#Number of subjects in this data set
d.subjnum.unique()
#Conditions
d.condition.unique()
#Blocks
d.block.unique()
#Correct coded as 1, errors coded as 0 in correct column
d.correct.unique()
"""
Explanation: Example data from the Death Implicit Association Test
Nock, M.K., Park, J.M., Finn, C.T., Deliberto, T.L., Dour, H.J., & Banaji, M.R. (2010). Measuring the suicidal mind: Implicit cognition predicts suicidal behavior. Psychological Science, 21(4), 511–517. https://doi.org/10.1177/0956797610364762
pyiat will work with any IAT data.
import data
End of explanation
"""
d1,fs1=pyiat.analyze_iat(d,subject='subjnum',rt='latency',condition='condition',correct='correct'\
,cond1='Death/Not Me,Life/Me',cond2='Life/Not Me,Death/Me'\
,block='block',blocks=[2,3,5,6],fastslow_stats=True)
"""
Explanation: <blockquote>
Blocks 0,1 & 4 - which contain conditions 'Death,Life', 'Not Me,Me', 'Life,Death' are practice blocks, meaning they do not contain relevant data because they do not contrast the different categories.
</blockquote>
<blockquote>
Therefore, we will enter blocks 2,3,5,6 and conditions 'Life/Not Me,Death/Me', 'Death/Not Me,Life/Me' into analyze_iat.
</blockquote>
<blockquote>
We are entering the "correct" column, which contains 1 for correct and 0 for errors. We could enter the "errors" column and then just set the error_or_correct argument to 'error.'
</blockquote>
<blockquote>
Finally, we have the option to return the total number and percentage of trials that are removed because they are either too fast (default : 400ms) or too slow (default : 10000ms). This will return the number and percentage across all subjects and across just subjects that do not receive a flag indicating they had poor performance on some metric.
</blockquote>
pyiat
Return a weighted d-scores. It will also return all error and too fast/too slow trial information and flags indicating poor performance as well as the number of blocks
End of explanation
"""
d1.iloc[:,0:14].head()
"""
Explanation: output
First 14 columns contain the number of trials, - overall, for each condition and for each block - both before and after excluding fast\slow trials
End of explanation
"""
d1.iloc[:,14:21].head()
"""
Explanation: The next 7 columns contain the number of error trials (overall, within each condition, and within each block)
Error rates are calculated prior to excluding fast/slow trials, but there is an option - errors_after_fastslow_rmvd - that, if set to True, will remove fast/slow trials prior to calculating error rates
End of explanation
"""
d1.iloc[:,21:28].head()
"""
Explanation: Next 7 columns contain pct of too fast trials - overall, within each condition and within each block
End of explanation
"""
d1.iloc[:,28:35].head()
"""
Explanation: Next 7 columns contain pct of too slow trials - overall, within each condition and within each block
End of explanation
"""
d1.iloc[:,35].to_frame().head()
"""
Explanation: Column 35 contains the number of blocks
End of explanation
"""
d1.iloc[:,36:58].head()
"""
Explanation: The next 22 columns indicate whether each poor-performance criterion/cutoff was flagged, across error rates, too-fast rates, too-slow rates, and number of blocks
End of explanation
"""
d1.iloc[:,58].to_frame().head()
"""
Explanation: Column 58 contains the total number of poor-performance criterion/cutoff flags the participant received. If 0, the participant's performance was acceptable.
End of explanation
"""
d1.iloc[:,59:62].head()
"""
Explanation: The final three columns (59-61) contain D scores for early and late trials and a final overall weighted D score
End of explanation
"""
#Prepare data to enter into r package - need to have blocks be a string and need to divide data into 2 separate
#dataframes for people that received "Death,Me" first and for those that received "Life,Me" first.
d['block_str']=d.block.astype(str)
d1_r_subn=d[(d.condition=='Death/Not Me,Life/Me')&(d.block>4)].subjnum.unique()
d1_r=d[d.subjnum.isin(d1_r_subn)]
d2_r_subn=d[(d.condition=='Life/Not Me,Death/Me')&(d.block>4)].subjnum.unique()
d2_r=d[d.subjnum.isin(d2_r_subn)]
%R -i d1_r
%R -i d2_r
%%R
dscore_first <- cleanIAT(my_data = d1_r,
block_name = "block_str",
trial_blocks = c("2","3", "5", "6"),
session_id = "subjnum",
trial_latency = "latency",
trial_error = "errors",
v_error = 1, v_extreme = 2, v_std = 1)
dscore_second <- cleanIAT(my_data = d2_r,
block_name = "block_str",
trial_blocks = c("2","3", "5", "6"),
session_id = "subjnum",
trial_latency = "latency",
trial_error = "errors",
v_error = 1, v_extreme = 2, v_std = 1)
r_dsc <- rbind(dscore_first, dscore_second)
%R -o dscore_first
%R -o dscore_second
#Then we need to combine the separate dataframes
#One of these the scores are flipped so need to flip back
dscore_second.IAT=dscore_second.IAT*-1
iat_r_dsc=pd.concat([dscore_first,dscore_second])
iat_r_dsc.index=iat_r_dsc.subjnum
iat_r_dsc=iat_r_dsc.sort_index()
"""
Explanation: Compare D scores with R package "iat"
https://cran.r-project.org/web/packages/IAT/
End of explanation
"""
py_r_iat=pd.concat([d1.dscore,iat_r_dsc.IAT],axis=1)
py_r_iat.head()
#Correlation between pyiat (dscore) and R package (IAT) = 1
py_r_iat.corr()
"""
Explanation: pyiat produces the same d-scores as the R package iat
End of explanation
"""
fs1
"""
Explanation: In the pyiat command above, we entered an argument to return fast-slow stats
This returns total perecentage of too fast and too slow trials across all subjects and only subjects without poor performance flags
End of explanation
"""
d2,fs2=pyiat.analyze_iat(d,subject='subjnum',rt='latency',condition='condition',correct='correct'\
,cond1='Death/Not Me,Life/Me',cond2='Life/Not Me,Death/Me'\
,block='block',blocks=[2,3,5,6],fastslow_stats=True,each_stim=True,stimulus='trial_word')
"""
Explanation: Other options
D scores for each stimulus (i.e. each word)
Requires each_stim=True and name of the column containing the stimuli in the stimulus column
End of explanation
"""
d2.iloc[:,59:].head()
"""
Explanation: D scores for each word as well as the standard task-wide error and fast/slow trial output
End of explanation
"""
d3,fs3=pyiat.analyze_iat(d,subject='subjnum',rt='latency',condition='condition',correct='correct'\
,cond1='Death/Not Me,Life/Me',cond2='Life/Not Me,Death/Me'\
,fastslow_stats=True,weighted=False)
"""
Explanation: Unweighted D scores
<blockquote>
The unweighted algorithm does not require the 'block' or 'blocks' arguments.
</blockquote>
End of explanation
"""
d3.iloc[:,24:].head()
"""
Explanation: This produces less output as it does not report any information on a block basis
End of explanation
"""
d4,fs4=pyiat.analyze_iat(d,subject='subjnum',rt='latency',condition='condition',correct='correct'\
,cond1='Death/Not Me,Life/Me',cond2='Life/Not Me,Death/Me'\
,fastslow_stats=True,each_stim=True,stimulus='trial_word',weighted=False)
d4.iloc[:,26:].head()
"""
Explanation: Unweighted D scores for each stimulus
End of explanation
"""
bd=pd.read_csv('biat_data.csv',index_col=0)
bd.head()
biatd1,biatfsl=pyiat.analyze_iat(bd,subject='subn',rt='RT',condition='pair',\
correct='errors',error_or_correct='error'\
,cond2='(unnamed)/Death,Me/Life',cond1='(unnamed)/Life,Me/Death'\
,block='block_num',blocks=[0, 1, 2, 3,4,5],biat=True,biat_rmv_xtrls=4,biat_trl_num='trl_number',fastslow_stats=True)
biatd1.iloc[:,-5:].head()
"""
Explanation: There are a few more options, including (1) setting the too-fast/too-slow thresholds, (2) setting the cutoffs for flags, (3) reporting error and too-fast/too-slow trial counts instead of percentages, and (4) printing the output to an Excel spreadsheet.
Brief IAT (BIAT)
<blockquote>
pyiat can also produce D scores and poor-performance flags for the Brief IAT (BIAT), following the Nosek et al. (2014) scoring procedures (trials longer than 2 sec are truncated to 2 sec; trials shorter than 400 ms are raised to 400 ms), with some options. The first x trials of each block (default 4) can be removed or kept, and you can choose how many trials, if any, to drop from the beginning of each block. You can also set the pct flags. One limitation of the BIAT flags in pyiat is that flags for fast and slow trials currently use the same cutoff pct, whereas the recommended scoring procedures (Nosek et al., 2014) flag fast trials but not slow ones; this is not currently possible in pyiat. However, you can see the pct of slow and fast trials and create your own flags from that information.
</blockquote>
<blockquote>
The BIAT code can take either 2, 4, or 6 blocks (depending what is listed in the blocks argument) and will dynamically adjust to however many blocks it is given.
</blockquote>
<blockquote>
Results of pyiat BIAT D scores were checked against D scores sent to me from U of Virginia.
</blockquote>
End of explanation
"""
biatd1stim,biatfslstim=pyiat.analyze_iat(bd,subject='subn',rt='RT',condition='pair',\
correct='errors',error_or_correct='error'\
,cond2='(unnamed)/Death,Me/Life',cond1='(unnamed)/Life,Me/Death'\
,block='block_num',blocks=[0, 1, 2, 3,4,5],biat=True,biat_rmv_xtrls=4,\
biat_trl_num='trl_number',fastslow_stats=True,each_stim=True,stimulus='word',weighted=False)
#The first subject had only one block and you can see the repeated numbers
biatd1stim.iloc[:,-15:].head()
"""
Explanation: D scores for each stimulus (i.e. each word)
<blockquote>
D scores can be obtained for each word in the BIAT as well, but if you choose weighted scoring it will produce odd/repeated values, presumably because each word is presented only once per block.
</blockquote>
End of explanation
"""
|
Tjorriemorrie/trading | 21_gae_kelly/bulkloader/Analysis.ipynb | mit | import numpy as np
import pandas as pd
%matplotlib inline
from matplotlib import pyplot as plt
"""
Explanation: Download data
Run:
appcfg.py download_data --url=http://binary-trading.appspot.com/remoteapi --filename=runs.csv --kind="Run" --config_file=config.yaml
Import dependencies
End of explanation
"""
df = pd.read_csv('runs.csv')
print df.columns
"""
Explanation: Load Data
End of explanation
"""
df['is_win'].hist()
"""
Explanation: Profitability
Win/loss ratio
End of explanation
"""
print df[df['step'] > 8]
df['profit'].cumsum().plot()
"""
Explanation: Actual profit
Shows the cumulative profit
End of explanation
"""
df['roi'] = df['profit'] / df['stake']
#print df[df['profit'] > 20]
#print df[df['roi'] < -1]
print df[['payout', 'stake', 'profit', 'roi']].tail()
"""
Explanation: ROI
End of explanation
"""
tf_roi = df[['roi']].groupby(df['time_frame']) # .sort('roi', ascending=False)
#tf_roi
#tf_roi.plot()
#tf_roi.plot(subplots=True)
# tf_roi.plot(secondary_y=('roi', 'count'))
tf_roi.mean().plot(kind='bar', subplots=True, grid=False, yerr=tf_roi.std())
#tf_roi.plot(kind='kde', subplots=True, grid=False)
#tf_roi.boxplot()
tf_roi.count().plot(kind='bar', subplots=True, grid=False)
"""
Explanation: Time Frames
End of explanation
"""
grouped = df.groupby('trade_base')
grouped['roi'].mean().plot(kind='barh', yerr=grouped['roi'].std())
"""
Explanation: Trade Base
End of explanation
"""
grouped = df.groupby(['trade_base', 'trade_aim'])
#for name, group in grouped:
# print(name)
# print(group['roi'].tail())
#print grouped['roi'].mean()
#print grouped['roi'].std()
grouped['roi'].mean().plot(kind='bar', yerr=grouped['roi'].std())
grouped['step'].mean().plot(kind='bar', yerr=grouped['step'].std())
"""
Explanation: Trade Base & Aim
Goal:
Want to see the ROI per selection
Want to see the avg steps per selection (reduce risk)
End of explanation
"""
grouped = df.groupby(['step'])
grouped['roi'].mean().plot(kind='bar', grid=False, yerr=grouped['roi'].std())
"""
Explanation: Step
ROI by step
Goal:
- confirm visualisation should be useless
End of explanation
"""
vals = df['step'].value_counts()
print vals
#print vals.index.values
#print vals.values
coeffs = np.polyfit(vals.index.values, vals.values, 2)
poly = np.poly1d(coeffs)
print poly
vals.plot(kind='bar')
#plt.plot(vals.index.values, [0.5**(x+1) for x in vals.index.values])
df_last = df[df['is_win'] == True]
vals = df_last['step'].value_counts()
print vals
vals.plot(kind='bar')
"""
Explanation: Count by step
Goal:
- to establish probability of ruin
End of explanation
"""
grouped = df.groupby(['step'])
grouped['stake'].mean().plot(kind='bar', yerr=grouped['stake'].std())
"""
Explanation: Stake by step
Goal:
- show the volatility per step
- used in bankroll management
End of explanation
"""
grouped = df.groupby(['step'])
grouped['stake_net'].mean().plot(kind='bar', yerr=grouped['stake_net'].std())
"""
Explanation: Stake as a share of total stake, by step
End of explanation
"""
|
sahilm89/lhsmdu | lhsmdu/benchmark/Comparing LHSMDU and MC sampling.ipynb | mit | import numpy as np
import lhsmdu
import matplotlib.pyplot as plt
def simpleaxis(axes, every=False):
if not isinstance(axes, (list, np.ndarray)):
axes = [axes]
for ax in axes:
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
if every:
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
ax.set_title('')
"""
Explanation: Comparing MC and LHS methods for sampling from a uniform distribution
This note compares the moments of the empirical uniform distribution sampled using Latin Hypercube Sampling with Multi-Dimensional Uniformity (LHSMDU) and with the NumPy random number generator against the theoretical moments of a uniform distribution.
End of explanation
"""
seed = 1
np.random.seed(seed)
lhsmdu.setRandomSeed(seed)
numDimensions = 2
numSamples = 100
numIterations = 100
"""
Explanation: Params
End of explanation
"""
theoretical_mean = 0.5
theoretical_std = np.sqrt(1./12)
"""
Explanation: Theoretical values
End of explanation
"""
mc_Mean, lhs_Mean = [], []
mc_Std, lhs_Std = [], []
for iterate in range(numIterations):
a = np.random.random((numDimensions,numSamples))
b = lhsmdu.sample(numDimensions,numSamples)
mc_Mean.append(np.mean(a))
lhs_Mean.append(np.mean(b))
mc_Std.append(np.std(a))
lhs_Std.append(np.std(b))
"""
Explanation: Empirical mean ($\mu$) and standard deviation ($\sigma$) estimates for 100 samples
End of explanation
"""
fig, ax = plt.subplots()
ax.plot(range(numIterations), mc_Mean, 'ko', label='numpy')
ax.plot(range(numIterations), lhs_Mean, 'o', c='orange', label='lhsmdu')
ax.hlines(xmin=0, xmax=numIterations, y=theoretical_mean, linestyles='--', label='theoretical value', zorder=3)
ax.set_xlabel("Iteration #")
ax.set_ylabel("$\mu$")
ax.legend(frameon=False)
simpleaxis(ax)
plt.show()
"""
Explanation: Plotting mean estimates
End of explanation
"""
fig, ax = plt.subplots()
ax.plot(range(numIterations), mc_Std, 'ko', label='numpy')
ax.plot(range(numIterations), lhs_Std, 'o', c='orange', label='lhsmdu')
ax.hlines(xmin=0, xmax=numIterations, y=theoretical_std, linestyles='--', label='theoretical value', zorder=3)
ax.set_xlabel("Iteration #")
ax.set_ylabel("$\sigma$")
ax.legend(frameon=False)
simpleaxis(ax)
plt.show()
"""
Explanation: Plotting standard deviation estimates
End of explanation
"""
mc_Std, lhs_Std = [], []
mc_Mean, lhs_Mean = [], []
numSamples = range(1,numIterations)
for iterate in numSamples:
a = np.random.random((numDimensions,iterate))
b = lhsmdu.sample(numDimensions,iterate)
mc_Mean.append(np.mean(a))
lhs_Mean.append(np.mean(b))
mc_Std.append(np.std(a))
lhs_Std.append(np.std(b))
"""
Explanation: Across different numbers of samples
End of explanation
"""
fig, ax = plt.subplots()
ax.plot(numSamples, mc_Mean, 'ko', label='numpy')
ax.plot(numSamples, lhs_Mean, 'o', c='orange', label='lhsmdu')
ax.hlines(xmin=0, xmax=numIterations, y=theoretical_mean, linestyles='--', label='theoretical value', zorder=3)
ax.set_xlabel("Number of Samples")
ax.set_ylabel("$\mu$")
ax.legend(frameon=False)
simpleaxis(ax)
plt.show()
"""
Explanation: Plotting mean estimates
End of explanation
"""
fig, ax = plt.subplots()
ax.plot(numSamples, mc_Std, 'ko', label='numpy')
ax.plot(numSamples, lhs_Std, 'o', c='orange', label='lhsmdu')
ax.hlines(xmin=0, xmax=numIterations, y=theoretical_std, linestyles='--', label='theoretical value', zorder=3)
ax.set_xlabel("Number of Samples")
ax.set_ylabel("$\sigma$")
ax.legend(frameon=False)
simpleaxis(ax)
plt.show()
"""
Explanation: Plotting standard deviation estimates
End of explanation
"""
|
freedomtan/tensorflow | tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
!pip install tflite-model-maker
"""
Explanation: BERT Question Answer with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_question_answer"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.
This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task.
Introduction to BERT Question Answer Task
The task supported in this library is the extractive question answer task: given a passage and a question, the answer is a span in the passage. The image below shows an example of question answering.
<p align="center"><img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_squad_showcase.png" width="500"></p>
<p align="center">
<em>Answers are spans in the passage (image credit: <a href="https://rajpurkar.github.io/mlx/qa-and-squad/">SQuAD blog</a>) </em>
</p>
As for the model of question answer task, the inputs should be the passage and question pair that are already preprocessed, the outputs should be the start logits and end logits for each token in the passage.
The size of input could be set and adjusted according to the length of passage and question.
End-to-End Overview
The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format.
```python
Chooses a model specification that represents the model.
spec = model_spec.get('mobilebert_qa')
Gets the training data and validation data.
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)
Fine-tunes the model.
model = question_answer.create(train_data, model_spec=spec)
Gets the evaluation result.
metric = model.evaluate(validation_data)
Exports the model to the TensorFlow Lite format with metadata in the export directory.
model.export(export_dir)
```
The following sections explain the code in more detail.
Prerequisites
To run this example, install the required packages, including the Model Maker package from the GitHub repo.
End of explanation
"""
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import configs
from tflite_model_maker import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker import QuestionAnswerDataLoader
"""
Explanation: Import the required packages.
End of explanation
"""
spec = model_spec.get('mobilebert_qa_squad')
"""
Explanation: The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail.
Choose a model_spec that represents a model for question answer
Each model_spec object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.
Supported Model | Name of model_spec | Model Description
--- | --- | ---
MobileBERT | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results; suitable for on-device scenarios.
MobileBERT-SQuAD | 'mobilebert_qa_squad' | Same model architecture as MobileBERT, with the initial model already retrained on SQuAD1.1.
BERT-Base | 'bert_qa' | Standard BERT model that is widely used in NLP tasks.
In this tutorial, MobileBERT-SQuAD is used as an example. Since the model is already retrained on SQuAD1.1, it should converge faster for the question answer task.
End of explanation
"""
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
"""
Explanation: Load Input Data Specific to an On-device ML App and Preprocess the Data
TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.
To load the data, convert the TriviaQA dataset to the SQuAD1.1 format by running the converter Python script with --sample_size=8000 and a set of web data. Modify the conversion code a little bit by:
* Skipping the samples that couldn't find any answer in the context document;
* Getting the original answer from the context without converting it to uppercase or lowercase.
Download the archived version of the already converted dataset.
End of explanation
"""
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)
"""
Explanation: You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_question_answer.png" alt="Upload File" width="800" hspace="100">
If you prefer not to upload your data to the cloud, you can also run the library offline by following the guide.
Use the QuestionAnswerDataLoader.from_squad method to load and preprocess the SQuAD format data according to a specific model_spec. You can use either the SQuAD2.0 or SQuAD1.1 format. Setting the parameter version_2_with_negative to True means the format is SQuAD2.0; otherwise, the format is SQuAD1.1. By default, version_2_with_negative is False.
End of explanation
"""
model = question_answer.create(train_data, model_spec=spec)
"""
Explanation: Customize the TensorFlow Model
Create a custom question answer model based on the loaded data. The create function comprises the following steps:
Creates the model for question answer according to model_spec.
Trains the question answer model. The default number of epochs and the default batch size are set by the two variables default_training_epochs and default_batch_size in the model_spec object.
End of explanation
"""
model.summary()
"""
Explanation: Have a look at the detailed model structure.
End of explanation
"""
model.evaluate(validation_data)
"""
Explanation: Evaluate the Customized Model
Evaluate the model on the validation data and get a dict of metrics, including the f1 score and exact match. Note that the metrics are different for SQuAD1.1 and SQuAD2.0.
End of explanation
"""
config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY])
config._experimental_new_quantizer = True
"""
Explanation: Export to TensorFlow Lite Model
Convert the existing model to TensorFlow Lite model format that you can later use in an on-device ML application.
Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress MobileBERT by 4x with the minimal loss of performance. First, define the quantization configuration:
End of explanation
"""
model.export(export_dir='.', quantization_config=config)
"""
Explanation: Export the quantized TFLite model according to the quantization config with metadata. The default TFLite model filename is model.tflite.
End of explanation
"""
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
"""
Explanation: You can use the TensorFlow Lite model file in the bert_qa reference app using BertQuestionAnswerer API in TensorFlow Lite Task Library by downloading it from the left sidebar on Colab.
The allowed export formats can be one or a list of the following:
ExportFormat.TFLITE
ExportFormat.VOCAB
ExportFormat.SAVED_MODEL
By default, it just exports the TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the vocab file as follows:
End of explanation
"""
model.evaluate_tflite('model.tflite', validation_data)
"""
Explanation: You can also evaluate the tflite model with the evaluate_tflite method. This step is expected to take a long time.
End of explanation
"""
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
"""
Explanation: Advanced Usage
The create function is the critical part of this library, in which the model_spec parameter defines the model specification. The BertQAModelSpec class is currently supported. There are two models: the MobileBERT model and the BERT-Base model. The create function comprises the following steps:
Creates the model for question answer according to model_spec.
Trains the question answer model.
This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters etc.
Adjust the model
You can adjust the model infrastructure like parameters seq_len and query_len in the BertQAModelSpec class.
Adjustable parameters for model:
seq_len: Length of the passage to feed into the model.
query_len: Length of the question to feed into the model.
doc_stride: The stride when doing a sliding window approach to take chunks of the documents.
initializer_range: The stdev of the truncated_normal_initializer for initializing all weight matrices.
trainable: Boolean, whether pre-trained layer is trainable.
Adjustable parameters for training pipeline:
model_dir: The location of the model checkpoint files. If not set, a temporary directory will be used.
dropout_rate: The rate for dropout.
learning_rate: The initial learning rate for Adam.
predict_batch_size: Batch size for prediction.
tpu: TPU address to connect to. Only used when training on a TPU.
For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new model_spec.
End of explanation
"""
|
gcallah/Indra | notebooks/flocking.ipynb | gpl-3.0 | from models.flocking import set_up
"""
Explanation: How to run the /home/test/indras_net/models/flocking model.
First we import all necessary files.
End of explanation
"""
from indra.agent import Agent, X, Y
from indra.composite import Composite
from indra.display_methods import BLUE, TREE
from indra.env import Env
from indra.registry import get_registration
from indra.space import DEF_HEIGHT, DEF_WIDTH, distance
from indra.utils import get_props
MODEL_NAME = "flocking"
DEBUG = False # turns debugging code on or off
DEBUG2 = False # turns deeper debugging code on or off
BIRD_GROUP = "Birds"
DEF_NUM_BIRDS = 2
DEF_DESIRED_DISTANCE = 2
ACCEPTABLE_DEV = .05
BIRD_MAX_MOVE = 1
HALF_CIRCLE = 180
FULL_CIRCLE = 360
flock = None
the_sky = None
"""
Explanation: We then initialize global variables.
End of explanation
"""
(the_sky, flock) = set_up()
"""
Explanation: Next we call the set_up function to set up the environment, groups, and agents of the model.
End of explanation
"""
the_sky.runN()
"""
Explanation: You can run the model for N periods by typing the number of periods you want into the following function call and then running it.
End of explanation
"""
the_sky.scatter_graph()
"""
Explanation: You can view the position of all of the agents in space with the following command:
End of explanation
"""
the_sky.line_graph()
"""
Explanation: You can view the line graph through the following command:
End of explanation
"""
|
AcidLeroy/VideoSegment | python/notebooks/unsupervised_clustering.ipynb | gpl-2.0 | from read_video import *
import numpy as np
import matplotlib.pyplot as plt
import cv2
"""
Explanation: Unsupervised Clustering Experiment
Author: Cody W. Eilar cody.eilar@gmail.com
Course: ECE 633
Professor: Dr. Marios Pattichis
This is a simple experiment in which I read a certain number of megabytes from a video an attempt to do unsupervised clusterig. This is by no means a novel experiment, but it is used rather to become familiar with the OpenCV Python bindings, Scikit-learn, and numpy
End of explanation
"""
video_to_read = "/Users/cody/test.mov"
max_buf_size_mb = 500;
%time frame_buffer = ReadVideo(video_to_read, max_buf_size_mb)
frame_buffer.nbytes
print("Matrix shape: {}".format(frame_buffer.shape))
"""
Explanation: Time Critical Path
Reading in the video is going to be a very expensive operation. The following cell will take several seconds to execute because the program reads max_buf_size_mb megabytes into memory. A conservative number is chosen here because the scikit-learn toolbox calculations will need about the same amount of memory to calculate the clusters.
End of explanation
"""
%matplotlib inline
#If you try to imshow doubles, it will look messed up.
plt.imshow(frame_buffer[0, :, :, :]); # Plot first frame
plt.show()
plt.imshow(frame_buffer[-1, :, :, :]); # Plot last frame
plt.show()
from sklearn import metrics
from sklearn.cluster import KMeans
from sklearn import cluster
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
"""
Explanation: Plot First and Last Frames
End of explanation
"""
buf_s = frame_buffer.shape
K = buf_s[0] # Number of frames
M = buf_s[1]
N = buf_s[2]
chan = buf_s[3] # Color channel
%time scikit_buffer = frame_buffer.reshape([K, M*N*chan])
scikit_buffer.shape
"""
Explanation: Reshape the Data for Clustering
Like R, scikit-learn needs to have data grouped in the form of n_samples x n_features. Here I reshape the data without copying any of it. In this case, I want to segment based on the contents of the frames, so n_samples = num_frames and n_features = pixel_values. It is very important that data not be copied here, since copying video data is a very expensive operation. I've timed the function to prove to myself that I am just creating a new window into the data (i.e. pointers) and not copying anything.
NOTE: I have included the color channels here as well.
End of explanation
"""
k_means = cluster.KMeans(n_clusters=7, n_init=1, copy_x=False)
%time k_means.fit(scikit_buffer)
"""
Explanation: Begin Heavy Lifting
Up until this point, everything I have set up has been to get video data properly formatted and ready to ship to the clustering algorithm. Since I recorded the short video used in the example, I know exactly how many clusters I think there should be:
Blank screen
e
c
e
6
3
3
So exactly 7 clusters can be inferred from the video data. The code below is also a time-critical path, and takes quite a long time to compute. I've timed this function to prove to myself that it does indeed take quite some time.
End of explanation
"""
labels = k_means.labels_
values = k_means.cluster_centers_.squeeze()
labels
"""
Explanation: Analysis
As I had hoped, contiguous frames were clustered together, though not without a few anomalies. Just from looking at the data, we can see that some frames may have been misclassified (labels 1 and 5, for example, toward the end of the array).
End of explanation
"""
prev = labels[0]
plt_count = 0
for i in range(1, labels.size):
if (plt_count == 5):
break;
if (prev != labels[i]):
plt.subplot(1,2,1);
plt.title(i)
plt.imshow(frame_buffer[i, :, :, :])
plt.subplot(1,2,2);
plt.title(i-1)
plt.imshow(frame_buffer[i-1, :, :, :])
plt.show()
plt_count = plt_count + 1
prev = labels[i]
"""
Explanation: Visualization of the Data
The only good way to visualize this data would be to look at the transitions from one classification to the other, i.e. when current_classification != previous_classification. If the clustering works, we should see a distinct difference between the frame on the left and the frame on the right.
End of explanation
"""
|
haraldschilly/nltk-sentiment-analysis-demo | sentiment.ipynb | apache-2.0 | import yaml
from codecs import open
import nltk
"""
Explanation: Sentiment analysis of free-text comments using NLTK
2015-07-04 -- by Harald Schilly -- License: Apache 2.0
The following NLTK demo works for German free-text comments.
It tokenizes the text, cleans it up, does word stemming and then trains a naive bayesian model.
In the end, a few tests show that it did indeed learn something.
End of explanation
"""
# NLTK tokenizer
from nltk.tokenize import WordPunctTokenizer
tokenizer = WordPunctTokenizer()
# NLTK stemmer for german
from nltk.stem.snowball import GermanStemmer
stemmer_de = GermanStemmer()
"""
Explanation: Initialization of tokenizer and stemmer
... just the defaults
End of explanation
"""
data = yaml.safe_load(open("reviews1.yaml", "r", "utf-8"))  # safe_load avoids arbitrary object construction
for cat, texts in data.items():
print("%s: %d entries" % (cat, len(texts)))
"""
Explanation: Data Acquisition
For the demo, a map of categories to a list of texts is read from a data file.
End of explanation
"""
# for each text, this tokenizing and stemming process is applied
def process_text(text):
words = tokenizer.tokenize(text)
words = [stemmer_de.stem(w) for w in words if len(w) >= 3]
words = [("<QM>" if '?' in w else w) for w in words]
return words
data2 = {}
for cat, texts in data.items():
data2[cat] = []
for text in texts:
data2[cat].append(process_text(text))
"""
Explanation: Processing texts in data
data2 is then a map of categories to a list of tokenized texts.
End of explanation
"""
all_words = []
for texts in data2.values():
[all_words.extend(text) for text in texts]
wordlist = nltk.FreqDist(all_words)
word_features = wordlist.keys()
# 10 most common words
wordlist.most_common(10)
def extract_features(doc):
doc_words = set(doc)
features = {}
for word in word_features:
features["contains %s" % word] = (word in doc_words)
return features
"""
Explanation: Feature extraction
End of explanation
"""
# just a little helper
def get_all_docs():
for cat, texts in data2.items():
for words in texts:
yield (words, cat)
training_set = nltk.classify.apply_features(extract_features, list(get_all_docs()))
"""
Explanation: Training set & Bayes Classifier
NTLK's NaiveBayesClassifier is trained using the training set.
End of explanation
"""
classifier = nltk.NaiveBayesClassifier.train(training_set)
"""
Explanation: This is where the magic happens:
End of explanation
"""
classifier.show_most_informative_features(20)
"""
Explanation: This list of the most informative features is an indicator of whether the training worked well.
End of explanation
"""
t1 = "diese art von bedienung brauchen wir gar nicht."
classifier.classify(extract_features(process_text(t1)))
t2 = "Hervorragende Bedienung, jederzeit gerne wieder!"
classifier.classify(extract_features(process_text(t2)))
t3 = "Ganz schlechtes Service, kann ich nicht empfehlen ..."
classifier.classify(extract_features(process_text(t3)))
t4 = "Wir kommen gerne jederzeit wieder."
classifier.classify(extract_features(process_text(t4)))
t5 = "uns hat es sehr gut gefallen"
classifier.classify(extract_features(process_text(t5)))
t6 = "Wann sperrt ihr morgen auf?"
classifier.classify(extract_features(process_text(t6)))
"""
Explanation: Testing the classifier
End of explanation
"""
def all_probabilities(text):
    # Use the public ProbDist API (samples/prob) instead of the private _prob_dict.
    print(text)
    probs = classifier.prob_classify(extract_features(process_text(text)))
    for label in probs.samples():
        print("%5.1f%% %s" % (100. * probs.prob(label), label))
all_probabilities(t1)
all_probabilities(t2)
all_probabilities(t3)
all_probabilities(t4)
all_probabilities(t5)
all_probabilities(t6)
"""
Explanation: All probabilities
List all probabilities for each testing text. Gives an impression how well the classification did work.
End of explanation
"""
|
synthicity/activitysim | activitysim/examples/example_estimation/notebooks/21_stop_frequency.ipynb | agpl-3.0 | import os
import larch # !conda install larch -c conda-forge # for estimation
import pandas as pd
"""
Explanation: Estimating Stop Frequency
This notebook illustrates how to re-estimate a single model component for ActivitySim. This process
includes running ActivitySim in estimation mode to read household travel survey files and write out
the estimation data bundles used in this notebook. To review how to do so, please visit the other
notebooks in this directory.
Load libraries
End of explanation
"""
os.chdir('test')
"""
Explanation: We'll work in our test directory, where ActivitySim has saved the estimation data bundles.
End of explanation
"""
modelname = "stop_frequency"
from activitysim.estimation.larch import component_model
model, data = component_model(modelname, return_data=True)
"""
Explanation: Load data and prep model for estimation
End of explanation
"""
spec_segments = [i.primary_purpose for i in data.settings.SPEC_SEGMENTS]
spec_segments
"""
Explanation: Review data loaded from the EDB
The next step is to read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.
End of explanation
"""
data.coefficients
"""
Explanation: Coefficients
There is one meta-coefficients dataframe for this component, which contains
parameters for all the matching coefficients in the various segmented
files. When different segments have the same named coefficient with the same
value, it is assumed they should be estimated jointly. If they have the same name
but different values in the coefficient files, then they are re-estimated
independently.
End of explanation
"""
data.spec[0]
"""
Explanation: Utility specification
The utility spec files are unique to each segment model. The estimation mode larch pre-processor
for the stop frequency model modifies the spec files to account for jointly re-estimated
parameters.
End of explanation
"""
data.chooser_data[0]
"""
Explanation: Chooser data
The chooser data is unique to each segment model.
End of explanation
"""
model.estimate(method='SLSQP', options={"maxiter": 1000})
"""
Explanation: Estimate
With the model set up for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood-maximizing solution. Larch has built-in estimation methods, including BHHH, and also offers access to more advanced general-purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.
End of explanation
"""
model.parameter_summary()
"""
Explanation: Estimated coefficients
End of explanation
"""
from activitysim.estimation.larch.stop_frequency import update_segment_coefficients
result_dir = data.edb_directory/"estimated"
update_segment_coefficients(
model, data, result_dir,
output_file="stop_frequency_coefficients_{segment_name}_revised.csv",
);
"""
Explanation: Output Estimation Results
The stop frequency model includes a separate coefficient file for each segment,
and has a special writer method to separate the coefficients by segment
after estimation.
End of explanation
"""
for m, segment in zip(model, data.segments):
m.to_xlsx(
result_dir/f"{modelname}_{segment}_model_estimation.xlsx",
data_statistics=False,
)
"""
Explanation: Write the model estimation report, including coefficient t-statistic and log likelihood
End of explanation
"""
pd.read_csv(result_dir/"stop_frequency_coefficients_work_revised.csv")
"""
Explanation: Next Steps
The final step is to either manually or automatically copy the stop_frequency_coefficients_*_revised.csv files to the configs folder, rename them to stop_frequency_coefficients_*.csv, and run ActivitySim in simulation mode.
End of explanation
"""
|
AllenDowney/ProbablyOverthinkingIt | bear.ipynb | mit | from __future__ import print_function, division
import thinkbayes2
import thinkplot
import numpy as np
from scipy import stats
%matplotlib inline
"""
Explanation: When will I win the Great Bear Run?
This notebook presents an application of Bayesian inference to predicting the outcome of a road race.
Copyright 2015 Allen Downey
MIT License: http://opensource.org/licenses/MIT
End of explanation
"""
data = {
2008: ['Gardiner', 'McNatt', 'Terry'],
2009: ['McNatt', 'Ryan', 'Partridge', 'Turner', 'Demers'],
2010: ['Gardiner', 'Barrett', 'Partridge'],
2011: ['Barrett', 'Partridge'],
2012: ['Sagar'],
2013: ['Hammer', 'Wang', 'Hahn'],
2014: ['Partridge', 'Hughes', 'Smith'],
2015: ['Barrett', 'Sagar', 'Fernandez'],
}
"""
Explanation: Almost every year since 2008 I have participated in the Great Bear Run, a 5K road race in Needham MA. I usually finish in the top 20 or so, and in my age group I have come in 4th, 6th, 4th, 3rd, 2nd, 4th and 4th. In 2015 I didn't run because of a scheduling conflict, but based on the results I estimate that I would have come 4th again.
Here are the people who beat me:
End of explanation
"""
def MakeBinomialPmf(n, p):
ks = range(n+1)
ps = stats.binom.pmf(ks, n, p)
pmf = thinkbayes2.Pmf(dict(zip(ks, ps)))
pmf.Normalize()
return pmf
"""
Explanation: Having come close in 2012, I have to wonder what my chances of winning are.
I'll try out two different models and see how it goes.
First, a quick function to compute binomial distributions:
End of explanation
"""
class Bear1(thinkbayes2.Suite, thinkbayes2.Joint):
def Likelihood(self, data, hypo):
n, p = hypo
like = 1
for year, sobs in data.items():
k = len(sobs)
if k > n:
return 0
like *= stats.binom.pmf(k, n, p)
return like
def Predict(self):
metapmf = thinkbayes2.Pmf()
for (n, p), prob in self.Items():
pmf = MakeBinomialPmf(n, p)
metapmf[pmf] = prob
mix = thinkbayes2.MakeMixture(metapmf)
return mix
"""
Explanation: The binomial model
The first model is based on the assumption that there is some population of runners who are faster than me, and who might show up for the Great Bear Run in any given year. The parameters of the model are the number of runners, $n$, and their probability of showing up, $p$.
The following class uses this model to estimate the parameters from the data. It extends thinkbayes.Suite, which provides a simple framework for Bayesian inference.
The Likelihood method computes the likelihood of the data for hypothetical values of $n$ and $p$. For each year, it computes the number of runners who beat me, $k$, and returns the probability of $k$ given $n$ and $p$.
I explain Predict below.
End of explanation
"""
hypos = [(n, p) for n in range(15, 70)
for p in np.linspace(0, 1, 101)]
bear = Bear1(hypos)
"""
Explanation: The prior distribution for $n$ is uniform from 15 to 70 (15 is the number of unique runners who have beat me; 70 is an arbitrary upper bound).
The prior distribution for $p$ is uniform from 0 to 1.
End of explanation
"""
bear.Update(data)
"""
Explanation: Next we update bear with the data.
The Update function is provided by thinkbayes.Suite; it computes the likelihood of the data for each hypothesis, multiplies by the prior probabilities, and renormalizes.
The return value is the normalizing constant, which is total probability of the data under the prior (but otherwise not particularly meaningful).
End of explanation
"""
pmf_n = bear.Marginal(0)
thinkplot.PrePlot(5)
thinkplot.Pdf(pmf_n, label='n')
thinkplot.Config(xlabel='Number of runners (n)',
ylabel='PMF', loc='upper right')
pmf_n.Mean()
"""
Explanation: From the joint posterior distribution we can extract the marginal distributions of $n$ and $p$.
The following figure shows the posterior distribution of $n$. The most likely value is 15; that is, we have already seen the entire population of runners. But the mean is almost 35.
At the upper bound, the posterior probability is non-negligible, which suggests that higher values are possible. If I were attached to this model, I might work on refining the prior for $n$.
End of explanation
"""
pmf_p = bear.Marginal(1)
thinkplot.Pdf(pmf_p, label='p')
thinkplot.Config(xlabel='Probability of showing up (p)',
ylabel='PMF', loc='upper right')
pmf_p.CredibleInterval(95)
"""
Explanation: The posterior distribution for $p$ is better behaved. The credible interval is between 4% and 21%.
End of explanation
"""
thinkplot.Contour(bear, pcolor=True, contour=False)
thinkplot.Config(xlabel='Number of runners (n)',
ylabel='Probability of showing up (p)',
ylim=[0, 0.4])
"""
Explanation: The following figure shows the joint distribution of $n$ and $p$. They are inversely related: the more people there are, the less often they each show up.
End of explanation
"""
predict = bear.Predict()
thinkplot.Hist(predict, label='k')
thinkplot.Config(xlabel='# Runners who beat me (k)',
ylabel='PMF', xlim=[-0.5, 12])
predict[0]
"""
Explanation: Finally, we can generate a predictive distribution for the number of people who will finish ahead of me, $k$. For each pair of $n$ and $p$, the distribution of $k$ is binomial. So the predictive distribution is a weighted mixture of binomials (see Bear1.Predict above).
The most likely outcomes are 2 or 3 people ahead of me. The probability that I win my age group is about 5%.
End of explanation
"""
ss = thinkbayes2.Beta(2, 1)
thinkplot.Pdf(ss.MakePmf(), label='S')
thinkplot.Config(xlabel='Probability of showing up (S)',
ylabel='PMF', loc='upper left')
"""
Explanation: A better model
The binomial model is simple, but it ignores potentially useful information: from previous results, we can see that the same people appear more than once. We can use this data to improve our estimate of the total population.
For example, if the same people appear over and over, that's evidence for smaller values of $n$. If the same person seldom appears twice, that's evidence for larger values.
To quantify that effect, we need a model of the sampling process.
In order to displace me, a runner has to
Show up
Outrun me
Be in my age group
For each runner, the probability of displacing me is a product of these factors:
$p_i = SOB$
Some runners have a higher SOB factor than others; we can use previous results to estimate it.
But first we have to think about an appropriate prior. Based on my experience, I conjecture that the prior distribution of $S$ is an increasing function, with many people who run nearly every year, and fewer who run only occasionally:
End of explanation
"""
os = thinkbayes2.Beta(3, 1)
thinkplot.Pdf(os.MakePmf(), label='O')
thinkplot.Config(xlabel='Probability of outrunning me (O)',
ylabel='PMF', loc='upper left')
"""
Explanation: The prior distribution of $O$ is biased toward high values. Of the people who have the potential to beat me, many of them will beat me every time. I am only competitive with a few of them.
(For example, of the 15 people who have beaten me, I have only ever beaten 2.)
End of explanation
"""
bs = thinkbayes2.Beta(1, 1)
thinkplot.Pdf(bs.MakePmf(), label='B')
thinkplot.Config(xlabel='Probability of being in my age group (B)',
ylabel='PMF', loc='upper left')
"""
Explanation: The probability that a runner is in my age group depends on the difference between his age and mine. Someone exactly my age will always be in my age group. Someone 4 years older will be in my age group only once every 5 years (the Great Bear run uses 5-year age groups).
So the distribution of $B$ is uniform.
End of explanation
"""
n = 1000
sample = ss.Sample(n) * os.Sample(n) * bs.Sample(n)
cdf = thinkbayes2.Cdf(sample)
thinkplot.PrePlot(1)
prior = thinkbayes2.Beta(1, 3)
thinkplot.Cdf(prior.MakeCdf(), color='grey', label='Model')
thinkplot.Cdf(cdf, label='SOB sample')
thinkplot.Config(xlabel='Probability of displacing me',
ylabel='CDF', loc='lower right')
"""
Explanation: I used Beta distributions for each of the three factors, so each $p_i$ is the product of three Beta-distributed variates. In general, the result is not a Beta distribution, but maybe we can find a Beta distribution that is a good approximation of the actual distribution.
I'll draw a sample from the distributions of $S$, $O$, and $B$, and multiply them out. It turns out that the result is a good match for a Beta distribution with parameters 1 and 3.
End of explanation
"""
from itertools import chain
from collections import Counter
counter = Counter(chain(*data.values()))
len(counter), counter
"""
Explanation: Now let's look more carefully at the data. There are 15 people who have displaced me during at least one year, several of them more than once.
The runner with the biggest SOB factor is Rich Partridge, who has displaced me in 4 of 8 years. In fact, he outruns me almost every year, but is not always in my age group.
End of explanation
"""
def MakeBeta(count, num_races, precount=3):
beta = thinkbayes2.Beta(1, precount)
beta.Update((count, num_races-count))
return beta
"""
Explanation: The following function makes a Beta distribution to represent the posterior distribution of $p_i$ for each runner. It starts with the prior, Beta(1, 3), and updates it with the number of times the runner displaces me, and the number of times he doesn't.
End of explanation
"""
num_races = len(data)
betas = [MakeBeta(count, num_races)
for count in counter.values()]
"""
Explanation: Now we can make a posterior distribution for each runner:
End of explanation
"""
[beta.Mean() for beta in betas]
"""
Explanation: Let's check the posterior means to see if they make sense. For Rich Partridge, who has displaced me 4 times out of 8, the posterior mean is 42%; for someone who has displaced me only once, it is 17%.
So those don't seem crazy.
End of explanation
"""
class Bear2(thinkbayes2.Suite, thinkbayes2.Joint):
def ComputePmfs(self, data):
num_races = len(data)
counter = Counter(chain(*data.values()))
betas = [MakeBeta(count, num_races)
for count in counter.values()]
self.pmfs = dict()
low = len(betas)
high = max(self.Values())
for n in range(low, high+1):
self.pmfs[n] = self.ComputePmf(betas, n, num_races)
def ComputePmf(self, betas, n, num_races, label=''):
no_show = MakeBeta(0, num_races)
all_betas = betas + [no_show] * (n - len(betas))
ks = []
for i in range(2000):
ps = [beta.Random() for beta in all_betas]
xs = np.random.random(len(ps))
k = sum(xs < ps)
ks.append(k)
return thinkbayes2.Pmf(ks, label=label)
def Likelihood(self, data, hypo):
n = hypo
k = data
return self.pmfs[n][k]
def Predict(self):
metapmf = thinkbayes2.Pmf()
for n, prob in self.Items():
pmf = self.pmfs[n]
metapmf[pmf] = prob
mix = thinkbayes2.MakeMixture(metapmf)
return mix
"""
Explanation: Now we're ready to do some inference. The model only has one parameter, the total number of runners who could displace me, $n$. For the 15 SOBs we have actually observed, we use previous results to estimate $p_i$. For additional hypothetical runners, we update the distribution with 0 displacements out of num_races.
To improve performance, my implementation precomputes the distribution of $k$ for each value of $n$, using ComputePmfs and ComputePmf.
After that, the Likelihood function is simple: it just looks up the probability of $k$ given $n$.
End of explanation
"""
bear2 = Bear2()
thinkplot.PrePlot(3)
pmf = bear2.ComputePmf(betas, 18, num_races, label='n=18')
pmf2 = bear2.ComputePmf(betas, 22, num_races, label='n=22')
pmf3 = bear2.ComputePmf(betas, 26, num_races, label='n=26')
thinkplot.Pdfs([pmf, pmf2, pmf3])
thinkplot.Config(xlabel='# Runners who beat me (k)',
ylabel='PMF', loc='upper right')
"""
Explanation: Here's what some of the precomputed distributions look like, for several values of $n$.
If there are fewer runners, my chance of winning is slightly better, but the difference is small, because fewer runners implies a higher mean for $p_i$.
End of explanation
"""
low = 15
high = 35
bear2 = Bear2(range(low, high))
bear2.ComputePmfs(data)
"""
Explanation: For the prior distribution of $n$, I'll use a uniform distribution from 16 to 35 (this upper bound turns out to be sufficient).
End of explanation
"""
for year, sobs in data.items():
k = len(sobs)
bear2.Update(k)
"""
Explanation: And here's the update, using the number of runners who displaced me each year:
End of explanation
"""
thinkplot.PrePlot(1)
thinkplot.Pdf(bear2, label='n')
thinkplot.Config(xlabel='Number of SOBs (n)',
ylabel='PMF', loc='upper right')
"""
Explanation: Here's the posterior distribution of $n$. It's noisy because I used random sampling to estimate the conditional distributions of $k$. But that's ok because we don't really care about $n$; we care about the predictive distribution of $k$. And noise in the distribution of $n$ has very little effect on $k$.
End of explanation
"""
predict = bear2.Predict()
"""
Explanation: The predictive distribution for $k$ is a weighted mixture of the conditional distributions we already computed:
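A minimal sketch of what that mixture computes, with made-up numbers rather than the real posterior: the predictive probability is $P(k) = \sum_n P(n)\,P(k \mid n)$.

```python
# Toy mixture: P(k) = sum over n of P(n) * P(k | n).
# The numbers below are illustrative, not the actual posterior.
posterior_n = {18: 0.5, 22: 0.5}                 # hypothetical posterior over n
cond_k = {18: {3: 0.6, 4: 0.4},                  # hypothetical P(k | n)
          22: {3: 0.2, 4: 0.8}}

mix = {}
for n, p_n in posterior_n.items():
    for k, p_k in cond_k[n].items():
        mix[k] = mix.get(k, 0.0) + p_n * p_k

print({k: round(p, 3) for k, p in mix.items()})  # {3: 0.4, 4: 0.6}
```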
End of explanation
"""
thinkplot.Hist(predict, label='k')
thinkplot.Config(xlabel='# Runners who beat me (k)', ylabel='PMF', xlim=[-0.5, 12])
predict[0]
"""
Explanation: And here's what it looks like:
End of explanation
"""
|
LorenzoBi/courses | UQ/assignment_2/Untitled.ipynb | mit |
import numpy as np
import matplotlib.pyplot as plt
from sympy import *
%matplotlib inline
init_printing()
"""
Explanation: Assignment 2
Lorenzo Biasi and Michael Aichmüller
End of explanation
"""
def f(x):
return np.exp(np.sin(x))
def df(x):
return f(x) * np.cos(x)
def absolute_err(f, df, h):
g = (f(h) - f(0)) / h
return np.abs(df(0) - g)
hs = 10. ** -np.arange(15)
epsilons = np.empty(15)
for i, h in enumerate(hs):
epsilons[i] = absolute_err(f, df, h)
"""
Explanation: Exercise 1
We proceed by building the algorithm for testing the accuracy of the numerical derivative:
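A back-of-the-envelope check, under the usual float64 assumption: for a forward difference the truncation error grows like $h$ while the round-off error grows like $\epsilon / h$, so the total error is smallest near $h^* \approx \sqrt{\epsilon}$.

```python
import math

eps = 2.0 ** -52           # machine epsilon for float64
h_opt = math.sqrt(eps)     # rough optimum for a forward difference
print(h_opt)               # ~1.49e-08, consistent with the minimum of the error plot
```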
End of explanation
"""
plt.plot(hs, epsilons, 'o')
plt.yscale('log')
plt.xscale('log')
plt.xlabel(r'h')
plt.ylabel(r'$\epsilon(h)$')
plt.grid(linestyle='dotted')
"""
Explanation: a)
End of explanation
"""
x_1 = symbols('x_1')
fun1 = 1 / (1 + 2*x_1) - (1 - x_1) / (1 + x_1)
fun1
"""
Explanation: We can see that until about $h = 10^{-7}$ the absolute error diminishes, but after that it goes back up. This is due to the fact that computing $f(h) - f(0)$ is an ill-conditioned operation: the two values are really close to each other.
Exercise 2
a.
We can easily see that when $\|x\| \ll 1$ we have that both $\frac{1 - x }{x + 1}$ and $\frac{1}{2 x + 1}$ are almost equal to 1, so the subtraction is ill-conditioned.
End of explanation
"""
fun2 = simplify(fun1)
fun2
"""
Explanation: We can rewrite the previous expression so that it is well conditioned around 0. The simplified form is:
End of explanation
"""
def f1(x):
return 1 / (1 + 2*x) - (1 - x) / (1 + x)
def f2(x):
return 2*x**2/((1 + 2*x)*(1 + x))
hs = 2. ** - np.arange(64)
plt.plot(hs, np.abs(f1(hs) - f2(hs)))
plt.yscale('log')
plt.xscale('log')
plt.xlabel(r'h')
plt.ylabel('differences')
plt.grid(linestyle='dotted')
"""
Explanation: A comparison between the two ways of computing this value. We can clearly see that far from 0 the two methods are nearly identical, but the closer you get to 0 the more they diverge.
End of explanation
"""
def f3(x):
return np.sqrt(x + 1/x) - np.sqrt(x - 1 / x)
def f4(x):
return 2 / (np.sqrt(x + 1/x) + np.sqrt(x - 1 / x)) / x
hs = 2 ** np.arange(64)
plt.plot(hs, np.abs(f3(hs) - f4(hs)), 'o')
plt.yscale('log')
plt.xscale('log')
plt.xlabel(r'h')
plt.ylabel('differences')
plt.grid(linestyle='dotted')
"""
Explanation: b.
As before we have the subtraction of two really close values, so the expression is ill conditioned for large $x$. Rationalizing removes the cancellation:
$\sqrt{x + \frac{1}{x}} - \sqrt{x - \frac{1}{x}} = \left(\sqrt{x + \frac{1}{x}} - \sqrt{x - \frac{1}{x}}\right)\frac{\sqrt{x + \frac{1}{x}} + \sqrt{x - \frac{1}{x}}}{\sqrt{x + \frac{1}{x}} + \sqrt{x - \frac{1}{x}}} = \frac{2/x}{\sqrt{x + \frac{1}{x}} + \sqrt{x - \frac{1}{x}}} = \frac{2}{x\left(\sqrt{x + \frac{1}{x}} + \sqrt{x - \frac{1}{x}}\right)}$
End of explanation
"""
import itertools
x = [1, 2, 3, 4, 5, 6]
omega = set([p for p in itertools.product(x, repeat=3)])
print(r'Omega has', len(omega), 'elements and they are:')
print(omega)
"""
Explanation: Exercise 3.
a.
If we assume we possess a 6-faced die, each throw has six possible outcomes. So we have to take all combinations of 6 numbers repeated 3 times. It is intuitive that our $\Omega$ will be composed of $6^3 = 216$ samples, and will be of the form:
$(1, 1, 1), (1, 1, 2), (1, 1, 3), ... (6, 6, 5), (6, 6, 6)$
End of explanation
"""
1/(6**3)
"""
Explanation: Concerning the $\sigma$-algebra, note that there is not a unique $\sigma$-algebra for a given $\Omega$, but in this case a reasonable choice is the power set of $\Omega$.
b.
If the die is fair we have the discrete uniform distribution, and $\rho(\omega)$ is simply the inverse of the size of the sample space: $\rho(\omega) = \frac{1}{6^3}$
End of explanation
"""
print('Size of A^c:', 5**3)
print('Size of A: ', 6 ** 3 - 5 ** 3)
36 + 5 * 6 + 5 * 5  # sanity check, counting by the position of the first 6: 36 + 30 + 25 = 91
x = [1, 2, 3, 4, 5]
A_c = set([p for p in itertools.product(x, repeat=3)])
print('A^c has ', len(A_c), 'elements.\nA^c =', A_c)
print('A has ', len(omega - A_c), 'elements.\nA =', omega - A_c)
"""
Explanation: c.
If we want to determine the set $A$ we can consider its complement $A^c = \{\text{not even one throw is a 6}\}$. This event is analogous to the sample space of a 5-faced die, so its size will be $5^3$. To compute the size of $A$ we can simply compute $6^3 - 5^3$, and to obtain the event itself we just take $\Omega \setminus A^c = A$.
End of explanation
"""
91 / 216
"""
Explanation: P(A) will be $\frac{91}{216}$
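We can double-check this by brute force, enumerating all $216$ outcomes:

```python
import itertools

# Count the triples that contain at least one 6.
count = sum(1 for t in itertools.product(range(1, 7), repeat=3) if 6 in t)
print(count, count / 216)  # 91 0.4212962962962963
```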
End of explanation
"""
|
choderalab/assaytools | examples/direct-fluorescence-assay/Emcee example with two compenent binding.ipynb | lgpl-2.1 |
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from time import time
from assaytools.bindingmodels import TwoComponentBindingModel
from assaytools import pymcmodels
"""
Explanation: Bayesian fit for two component binding - simulated data
Comparing sampling with emcee and PyMC
In this notebook we'll be comparing the sampling performance of emcee and PyMC on a toy assaytools example, where we know the true binding free energy. Of primary concern is the consistency of the sampling methods as well as the compute time.
End of explanation
"""
# The complex affinity in thermal units (Kd = exp(DeltaG))
DeltaG = -15.0
print('The target binding free energy is {0:.1f} (thermal units)'.format(DeltaG))
# The protein concentration in M:
Ptot = 1e-9 * np.ones([12],np.float64)
# The set of ligand concentrations in M:
Ltot = 20.0e-6 / np.array([10**(float(i)/2.0) for i in range(12)])
# The concentrations of the complex, free protein and free ligand given the above:
[P, L, PL] = TwoComponentBindingModel.equilibrium_concentrations(DeltaG, Ptot, Ltot)
# Detector noise:
sigma = 10.0
# The background fluorescence:
F_background = 100.0
# Ligand fluorescence in the absence of the protein
F_L_i = F_background + (.4/1e-8)*Ltot + sigma * np.random.randn(len(Ltot))
# The total fluorescence of the complex and free ligand
F_PL_i = F_background + ((1400/1e-9)*PL + sigma * np.random.randn(len(Ltot))) + ((.4/1e-8)*L + sigma * np.random.randn(len(Ltot)))
# Setting the errors from our pipetting instruments:
P_error = 0.35
L_error = 0.08
dPstated = P_error * Ptot
dLstated = L_error * Ltot
# Volume of each well in L:
assay_volume = 100e-6
"""
Explanation: Creating a mock experiment to feed into assaytools
Defining all the parameters of the assay and complex. The sampling methods will be trying to predict the binding free energy, for which we already know the answer.
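For reference, the equilibrium computed above has a standard closed form; the sketch below is a hand derivation and may differ in detail from assaytools' TwoComponentBindingModel. With $K_d = e^{\Delta G}$ (in M, for $\Delta G$ in thermal units), mass balance gives a quadratic whose physical root is:

```python
import math

def complex_concentration(DeltaG, Ptot, Ltot):
    """Physical root of Kd * PL = (Ptot - PL) * (Ltot - PL)."""
    Kd = math.exp(DeltaG)              # dissociation constant in M (assumed convention)
    b = Ptot + Ltot + Kd
    return (b - math.sqrt(b * b - 4.0 * Ptot * Ltot)) / 2.0

PL = complex_concentration(-15.0, 1e-9, 20e-6)
print(0.0 < PL <= 1e-9)  # True: the complex cannot exceed the protein total
```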
End of explanation
"""
plt.semilogx(Ltot,F_PL_i, 'ro', label='Complex and free ligand')
plt.semilogx(Ltot,F_L_i, 'ko', label='Ligand')
plt.title('Fluorescence as a function of total ligand concentration', fontsize=14)
plt.xlabel('$[L]_{tot}$ / M', fontsize=16)
plt.ylabel('Fluorescence', fontsize=16)
plt.legend(fontsize=16)
plt.show()
"""
Explanation: Plotting the fluorescence of the experiment that's parametrized above:
End of explanation
"""
def get_var_trace(mcmc_model, var_name):
"""
Extract parameter trace from PyMC MCMC object.
Parameters
----------
mcmc_model: pymc.MCMC.MCMC
PyMC MCMC object
var_name: str
The name of the parameter you wish to extract
Returns
-------
If the variable has been found:
trace: numpy.ndarray
the trace of the parameter of interest.
"""
found = False
for stoch in mcmc_model.stochastics:
if stoch.__name__ == var_name:
found = True
trace = stoch.trace._trace[0]
if found:
return trace
else:
print('Variable {0} not present in MCMC object'.format(var_name))
"""
Explanation: The sampling examples will attempt to infer the value of the binding free energy, which is set above, using the data that is plotted above.
Before moving on to the sampling part, we define a function that helps with viewing the traces of the MCMC simulations.
End of explanation
"""
pymc_model = pymcmodels.make_model(Ptot, dPstated, Ltot, dLstated,
top_complex_fluorescence=F_PL_i,
top_ligand_fluorescence=F_L_i,
use_primary_inner_filter_correction=True,
use_secondary_inner_filter_correction=True,
assay_volume=assay_volume, DG_prior='uniform')
mcmc_model, pymc_model = pymcmodels.run_mcmc_emcee(pymc_model, nwalkers=200, nburn=10, niter=500)
"""
Explanation: Sampling with emcee
End of explanation
"""
var_name = 'DeltaG'
trace_emcee = get_var_trace(mcmc_model, var_name)
print('Mean {0} = {1:.2f} +/- {2:.2f}'.format(var_name, trace_emcee.mean(), trace_emcee.std()))
plt.plot(trace_emcee)
plt.title('Emcee trace of binding free energy', fontsize=14)
plt.xlabel('Iteration', fontsize=16)
plt.ylabel(var_name, fontsize=16)
plt.show()
"""
Explanation: Viewing the trace of the Delta G
Defining a quick function to make it easy to view the traces:
End of explanation
"""
nrepeats = 9
var_name = 'DeltaG'
traces_emcee = []
for r in range(nrepeats):
pymc_model = pymcmodels.make_model(Ptot, dPstated, Ltot, dLstated,
top_complex_fluorescence=F_PL_i,
top_ligand_fluorescence=F_L_i,
use_primary_inner_filter_correction=True,
use_secondary_inner_filter_correction=True,
assay_volume=assay_volume, DG_prior='uniform')
t0 = time()
mcmc_model, pymc_model = pymcmodels.run_mcmc_emcee(pymc_model, nwalkers=200, nburn=100, niter=1000)
print('\n Time for MCMC run {0} = {1:.2f} seconds'.format(r, time() - t0))
traces_emcee.append(get_var_trace(mcmc_model, var_name))
coords = [(0,0), (0,1), (0,2), (1,0), (1,1), (1,2), (2,0), (2,1), (2,2)]
bins = np.arange(-20, 0)
f, axarr = plt.subplots(3, 3, figsize=(10, 10))
for t, c in zip(traces_emcee, coords):
hist, edges = np.histogram(t, bins=bins, normed=True)
centers = edges[0:-1] + np.diff(edges) / 2.0
axarr[c].bar(centers, hist)
axarr[c].set_title('Histogram of {0}'.format(var_name))
axarr[c].set_xlabel('Free energy (thermal units)')
axarr[c].set_ylabel('Frequency')
axarr[c].axvline(DeltaG, color='red', ls='--')
plt.tight_layout()
plt.show()
"""
Explanation: Viewing the sampling consistency
End of explanation
"""
pymc_model = pymcmodels.make_model(Ptot, dPstated, Ltot, dLstated,
top_complex_fluorescence=F_PL_i,
top_ligand_fluorescence=F_L_i,
use_primary_inner_filter_correction=True,
use_secondary_inner_filter_correction=True,
assay_volume=assay_volume, DG_prior='uniform')
mcmc_model = pymcmodels.run_mcmc(pymc_model, nthin=20, nburn=100, niter=100000, map=True)
var_name = 'DeltaG'
trace_pymc = get_var_trace(mcmc_model, var_name)
print('Mean {0} = {1:.2f} +/- {2:.2f}'.format(var_name, trace_pymc.mean(), trace_pymc.std()))
plt.plot(trace_pymc)
plt.title('PyMC trace of binding free energy', fontsize=14)
plt.xlabel('Iteration', fontsize=16)
plt.ylabel(var_name, fontsize=16)
plt.show()
"""
Explanation: Sampling with PyMC
Viewing a single trace
End of explanation
"""
nrepeats = 9
var_name = 'DeltaG'
traces_pymc = []
for r in range(nrepeats):
pymc_model = pymcmodels.make_model(Ptot, dPstated, Ltot, dLstated,
top_complex_fluorescence=F_PL_i,
top_ligand_fluorescence=F_L_i,
use_primary_inner_filter_correction=True,
use_secondary_inner_filter_correction=True,
assay_volume=assay_volume, DG_prior='uniform')
t0 = time()
mcmc_model = pymcmodels.run_mcmc(pymc_model, nthin=20, nburn=100, niter=100000, map=True)
print('Time for MCMC run {0} = {1:.2f} seconds'.format(r, time() - t0))
traces_pymc.append(get_var_trace(mcmc_model, var_name))
coords = [(0,0), (0,1), (0,2), (1,0), (1,1), (1,2), (2,0), (2,1), (2,2)]
bins = np.arange(-20, 0)
f, axarr = plt.subplots(3, 3, figsize=(10, 10))
for t, c in zip(traces_pymc, coords):
hist, edges = np.histogram(t, bins=bins, normed=True)
centers = edges[0:-1] + np.diff(edges) / 2.0
axarr[c].bar(centers, hist)
axarr[c].set_title('Histogram of {0}'.format(var_name))
axarr[c].set_xlabel('Free energy (thermal units)')
axarr[c].set_ylabel('Frequency')
axarr[c].axvline(DeltaG, color='red', ls='--')
plt.tight_layout()
plt.show()
"""
Explanation: Viewing the sampling consistency
End of explanation
"""
|
mzwiessele/mzparam | tutorial/ParamzSimpleRosen.ipynb | bsd-3-clause |
import paramz, numpy as np
from scipy.optimize import rosen_der, rosen
"""
Explanation: Paramz Tutorial
A simple introduction into Paramz based gradient based optimization of parameterized models.
Paramz is a python based parameterized modelling framework, that handles parameterization, printing, randomizing and many other parameter based operations to be done to a parameterized model.
In this example we will make use of the Rosenbrock function from scipy. We will write a paramz model that calls scipy's rosen function as the objective and rosen_der for its gradients, and use it to show the features of Paramz.
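For orientation, here is the 2-D Rosenbrock function and its gradient written out by hand; scipy's rosen and rosen_der generalize this to $n$ dimensions.

```python
def rosen2(x0, x1):
    # f(x) = 100 * (x1 - x0^2)^2 + (1 - x0)^2, minimized at (1, 1)
    return 100.0 * (x1 - x0 ** 2) ** 2 + (1.0 - x0) ** 2

def rosen2_grad(x0, x1):
    d0 = -400.0 * x0 * (x1 - x0 ** 2) - 2.0 * (1.0 - x0)
    d1 = 200.0 * (x1 - x0 ** 2)
    return d0, d1

print(rosen2(1.0, 1.0))       # 0.0 at the global minimum
print(rosen2_grad(1.0, 1.0))  # the gradient vanishes there
```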
End of explanation
"""
x = np.array([-1,1])
"""
Explanation: The starting position of the rosen function is set to be
$$ x_0 = [-1,1] $$
End of explanation
"""
class Rosen(paramz.Model): # Inherit from paramz.Model to ensure all model functionality.
def __init__(self, x, name='rosen'): # Initialize the Rosen model with a numpy array `x` and name `name`.
super(Rosen, self).__init__(name=name) # Call to super to make sure the structure is set up.
self.x = paramz.Param('position', x) # setup a Param object for the position parameter.
self.link_parameter(self.x) # Tell the model that the parameter `x` exists.
"""
Explanation: For paramz to understand your model there is three steps involved:
Step One: Initialization of the Model
Initialize your model using the __init__() function. The init function contains a call to the super class to make sure paramz can set up the model structure. Then we set up the parameters contained in this model, and lastly we tell the model that we have those parameters by linking them to self.
End of explanation
"""
r = Rosen(x)
try:
print(r)
except NotImplementedError as e:
print(e)
"""
Explanation: The class created above only holds the information about the parameters, we still have to implement the objective function to optimize over. For now the class can be instantiated but is not functional yet.
End of explanation
"""
class Rosen(paramz.Model):
def __init__(self, x, name='rosen'):
super(Rosen, self).__init__(name=name)
self.x = paramz.Param('position', x)
self.link_parameter(self.x)
def objective_function(self): # The function to overwrite for the framework to know about the objective to optimize
return rosen(self.x) # Call the rosenbrock function of scipy as objective function.
"""
Explanation: Step Two: Adding the Objective Function
The optimization of a gradient-based mathematical model is based on an objective function to optimize over. The paramz framework expects objective_function to be overridden, returning the current objective of the model. It can make use of all parameters inside the model, and you can rely on the parameters being updated when the objective function is called. This function does not take any arguments.
End of explanation
"""
class Rosen(paramz.Model):
def __init__(self, x, name='rosen'):
super(Rosen, self).__init__(name=name)
self.x = paramz.Param('position', x)
self.link_parameter(self.x)
def objective_function(self):
return self._obj
def parameters_changed(self): # Overwrite the parameters_changed function for model updates
self._obj = rosen(self.x) # Lazy evaluation of the rosen function only when there is an update
self.x.gradient[:] = rosen_der(self.x) # Compuataion and storing of the gradients for the position parameter
"""
Explanation: Step Three: Adding Update Routine for Parameter Changes
This model is now functional except for optimization: the gradients are not set, so an optimizer would stagnate. Optimizing the parameters requires their gradients to be kept up to date. For this, paramz provides an inversion-of-control approach in which you update the parameters and set their gradients. The gradient of each parameter is stored on the parameter itself, and the model handles distributing and collecting the correct gradients for the optimizer.
To implement the parameters_changed(self) function we override it on the class. This function holds the expensive bits of computation, as it is only called when an update is absolutely necessary. We also compute the objective for the current parameter set and store it in a variable, so that a call to objective_function() can be done lazily and computational overhead is avoided:
End of explanation
"""
r = Rosen(x)
"""
Explanation: Model Usage
Having implemented a paramz model with its necessary functions, the whole set of functionality of paramz is available for us. We will instantiate a rosen model class to play around with.
End of explanation
"""
print(r)
"""
Explanation: This rosen model is a fully working parameterized model for gradient based optimization of the rosen function of scipy.
Printing and Naming
All Parameterized and Param objects are named and can be accessed by name. This ensures a cleaner model creation and printing, when big models are created. In our simple example we only have a position and the model name itself: rosen.
End of explanation
"""
r
"""
Explanation: Or use the notebook representation:
End of explanation
"""
r.x
"""
Explanation: Note the model just prints the shape (in the value column) of the parameters, as parameters can be arrays or matrices of any size (with arbitrary numbers of dimensions).
We can print the actual values of the parameters directly, either via the programmatically assigned variable
End of explanation
"""
r.position
"""
Explanation: Or by name:
End of explanation
"""
r.x.name = 'pos'
r
"""
Explanation: We can redefine the name freely, as long as it does not exist already:
End of explanation
"""
try:
r.position
except AttributeError as v:
print("Attribute Error: " + str(v))
"""
Explanation: Now r.position will not be accessible anymore!
End of explanation
"""
print("Objective before change: {}".format(r._obj))
r.x[0] = 1
print("Objective after change: {}".format(r._obj))
"""
Explanation: Setting Parameters and Automated Updates
Param objects represent the parameters for the model. We told the model in the initialization that the position parameter (re-)named pos is a parameter of the model. Thus the model will listen to changes of the parameter values and update on any changes. We will set one element of the parameter and see what happens to the model:
End of explanation
"""
2 * r.x
"""
Explanation: Note that we never actually told the model to update. It listened to changes to any of its parameters and updated accordingly. This update chain is based on the hierarchy of the model structure. Specific values of parameters can be accessed through indexing, just like indexing numpy arrays. In fact Param is a derivative of ndarray and inherits all its traits. Thus, Param can be used in any calculation involved with numpy. Importantly, when using a Param parameter inside a computation, it will be returning a normal numpy array. This prevents unwanted side effects and pointer errors.
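The mechanism behind this can be pictured with a toy observer pattern; this is a simplification for illustration, not paramz's actual internals:

```python
# Toy sketch: a parameter notifies its model on every write,
# so the cached objective is always in sync (not paramz's real code).
class ToyParam:
    def __init__(self, values):
        self.values = list(values)
        self.observers = []

    def __setitem__(self, index, value):
        self.values[index] = value
        for observer in self.observers:   # notify listeners on change
            observer.parameters_changed()

class ToyModel:
    def __init__(self, param):
        self.x = param
        param.observers.append(self)      # register as a listener
        self.parameters_changed()

    def parameters_changed(self):
        self.obj = sum(v * v for v in self.x.values)  # toy objective

m = ToyModel(ToyParam([1.0, 2.0]))
m.x[0] = 3.0       # triggers an automatic update
print(m.obj)       # 13.0
```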
End of explanation
"""
r.x[:] = [100,5] # Set to a difficult starting position to show the messages of the optimization.
r.optimize(messages=1) # Call the optimization and show the progress.
"""
Explanation: Optimization
The optimization routine for the model can be accessed through the optimize() function. A call to optimize() sets up the optimizer and runs the iterations, getting and setting the parameters efficiently in memory. By supplying messages=1 as an optional parameter we can print the progress of the optimization itself.
End of explanation
"""
r.x
"""
Explanation: To show the values of the positions itself, we directly print the Param object:
End of explanation
"""
np.random.seed(100)
r.randomize()
r.x
r.x.randomize()
r.x
"""
Explanation: We could also randomize the model by using the convenience function randomize(), on the part we want to randomize. It can be any part of the model, also the whole model can be randomized:
End of explanation
"""
r.x.checkgrad(verbose=1)
"""
Explanation: Gradient Checking
When implementing gradient-based optimization it is important to make sure that the implemented gradients match the numerical gradients of the objective function. This can be achieved using the checkgrad() function in paramz. It performs a triangle (central) numerical gradient estimate around the current position of the parameter. The verbosity of the gradient checker can be adjusted using the verbose option. If verbose is False, a single bool is returned, specifying whether the gradients match the numerical gradients or not. With verbose=True a full list is returned, checking each parameter individually. This can be called on each subpart of the model as well.
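A stripped-down sketch of the idea, assuming a central ("triangle") difference; paramz's implementation is more careful but follows the same principle:

```python
# Compare an analytic gradient against a central difference at one point.
def rosen2(x0, x1):
    return 100.0 * (x1 - x0 ** 2) ** 2 + (1.0 - x0) ** 2

def d_rosen2_dx0(x0, x1):
    return -400.0 * x0 * (x1 - x0 ** 2) - 2.0 * (1.0 - x0)

x0, x1, h = -1.2, 1.0, 1e-6
numerical = (rosen2(x0 + h, x1) - rosen2(x0 - h, x1)) / (2.0 * h)
analytic = d_rosen2_dx0(x0, x1)
relative_error = abs(numerical - analytic) / abs(analytic)
print(relative_error < 1e-6)  # True: the gradients agree
```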
Here we can either directly call it on the parameter:
End of explanation
"""
r.checkgrad()
r.checkgrad(verbose=1)
"""
Explanation: Or on the whole model (verbose or not):
End of explanation
"""
r.x[[0]].checkgrad(verbose=1)
"""
Explanation: Or on individual parameters, note that numpy indexing is used:
End of explanation
"""
r.x[[0]].constrain_bounded(-10,-1)
r.x[[1]].constrain_positive()
"""
Explanation: Constraining Parameter Spaces
In many optimization scenarios it is necessary to constrain parameters to only take on certain ranges of values, may it be bounded in a region (between two numbers), fixed or constrained to only be positive or negative numbers. This can be achieved in paramz by applying a transformation to a parameter. For convenience the most common constraints are placed in specific functions, found by r.constrain_<tab>:
Each parameter can be constrain individually, by subindexing the Param object or Parameterized objects as a whole. Note that indexing functions like numpy indexing, so we need to make sure to keep the array structure when indexing singular elements. Next we bound $x_0$ to be constrained between $-10$ and $-1$ and $x_1$ to be constrained to only positive values:
End of explanation
"""
r
"""
Explanation: The printed output contains the constraints, either directly on the object, or listed for the constraints contained within a parameter. If a constraint applies to only part of a Param object, all constraints contained in the whole Param object are indicated with curly brackets: {<partial constraint>}:
End of explanation
"""
r.x
"""
Explanation: To show the individual constraints, we look at the Param object of interest directly:
End of explanation
"""
list(r.constraints.items())
"""
Explanation: The constraints (and other indexed properties) are held by each parameter as a dictionary. For example, the constraints are held in a constraints dictionary, where the keys are the constraints and the values are the indices the constraint refers to. You can either ask for the constraints of the whole model:
End of explanation
"""
list(r.x.constraints.items())
"""
Explanation: Or the constraints of individual Parameterized objects:
End of explanation
"""
class DoubleRosen(paramz.Model):
def __init__(self, x1, x2, name='silly_double'):
super(DoubleRosen, self).__init__(name=name) # Call super to initiate the structure of the model
self.r1 = Rosen(x1) # Instantiate the underlying Rosen classes
self.r2 = Rosen(x2)
# Tell this model, which parameters it has. Models are just the same as parameters:
self.link_parameters(self.r1, self.r2)
def objective_function(self):
return self._obj # Lazy evaluation of the objective
def parameters_changed(self):
self._obj = self.r1._obj + self.r2._obj # Just add both objectives together to optimize both models.
"""
Explanation: The constraints of subparts of the model are only views into the actual constraints held by the root of the model hierarchy.
Models Inside Models
The hierarchy of a Paramz model is a tree, where the nodes of the tree are Parameterized objects and the leaves are Param objects. The Model class is itself Parameterized and can thus serve as a child. This opens the possibility of combining models into a bigger model. As a simple example, we will just add two Rosen models together into a single model:
End of explanation
"""
dr = DoubleRosen(np.random.normal(size=2), np.random.normal(size=2))
"""
Explanation: The keen eyed will have noticed, that we did not set any gradients in the above definition. That is because the underlying rosen models handle their gradients directly!
End of explanation
"""
dr.checkgrad(verbose=1)
"""
Explanation: All options listed above are availible for this model now. No additional steps need to be taken!
End of explanation
"""
dr.r1.constrain_negative()
dr.r1.x[[0]].fix()
dr.r2.x[[1]].constrain_bounded(-30, 5)
dr.r2.x[[0]].constrain_positive()
dr
"""
Explanation: To show the different ways constraints are displayed, we constrain different parts of the model and fix parts of it too:
End of explanation
"""
dr.r2.checkgrad(verbose=1)
"""
Explanation: First, we can see that because two models with the same name were added to dr, the framework renamed the second model to give it a unique name. This only happens when two children of the same parent share a name. If two children under different parents share a name, that is fine, as you can see from the name of x in both models: position.
Second, the constraints are displayed in curly brackets {} if they do not span all underlying parameters. If a constraint, however, spans all parameters, it is shown without curly brackets, such as -ve for the first rosen model.
We can now, just as before, perform all actions paramz supports on this model, as well as on sub-models. For example we can check the gradients of only one part of the model:
End of explanation
"""
dr.r1
"""
Explanation: Or print only one model:
End of explanation
"""
print(dr.constraints)
"""
Explanation: We can showcase that constraints are mapped to each parameter directly. We can either access the constraints of the whole model directly:
End of explanation
"""
print(dr.r2.constraints)
"""
Explanation: Or for parameters directly:
End of explanation
"""
dr.param_array
"""
Explanation: Note that the constraints are remapped to directly index the parameters locally. This leads to the in-memory handling of parameters. The root node of the hierarchy holds one parameter array param_array comprising all parameters. The same goes for the gradient array gradient:
End of explanation
"""
dr.r2.param_array
"""
Explanation: Each child parameter (and its sub-parameters) has its own view into the memory of the root node:
End of explanation
"""
print(dr.param_array)
print(dr.optimizer_array)
"""
Explanation: Changing the param_array of a parameter directly edits the memory of the root node. This is a big part of paramz's performance, as getting and setting parameters works directly in memory and does not need any Python routines (such as loops or traversal).
The constraints as described above, directly index the param_array of their Parameterized or Param object. That is why the remapping exists.
This param_array has its counterpart for the optimizer, which holds the remapped parameters by the constraints. The constraints are transformation mappings, which transform model parameters param_array into optimizer parameters optimizer_array. This optimizer array is presented to the optimizer and the constraints framework handles the mapping directly.
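The idea of a transformation can be pictured with the simplest possible positive constraint; paramz's actual Logexp transform is a softplus variant, so the exp/log pair below is illustrative only:

```python
import math

# Toy positive constraint: the optimizer works on an unconstrained f,
# while the model always sees exp(f) > 0.
def to_model_space(f):
    return math.exp(f)

def to_optimizer_space(p):
    return math.log(p)

p = 2.5
f = to_optimizer_space(p)
restored = to_model_space(f)
print(abs(restored - p) < 1e-12)  # True: the mapping round-trips
```

Any step the optimizer takes in $f$ keeps the model-space value strictly positive.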
End of explanation
"""
dr._transform_gradients(dr.gradient)
"""
Explanation: Note that the optimizer array only contains three values. This is because the first element of the first Rosen model is fixed and is not presented to the optimizer. The transformed gradients can be computed by the root node directly:
End of explanation
"""
|
deepmind/xmanager | codelab.ipynb | apache-2.0 |
!git clone https://github.com/deepmind/xmanager.git ~/xmanager
!pip install ~/xmanager
"""
Explanation: XManager codelab notebook
This notebook will take you through running an XManager experiment on Google Cloud Platform (GCP).
A stand-alone Jupyter Notebook can be created via GCP's Vertex AI Notebooks
JupyterLab can be installed on your machine by following Jupyter's installation instructions.
Install any prerequisites
Create a GCP project if one does not already exist.
Install Docker if it is not already installed.
Download and install XManager
End of explanation
"""
from google import auth
credentials = auth.default()[0]
project = auth.default()[1]
print('GCP Project:', project)
"""
Explanation: Set default GCP values
The following gets the GCP project.
End of explanation
"""
from IPython.display import display
import ipywidgets
import os
def bucket_changed(change):
os.environ['GOOGLE_CLOUD_BUCKET_NAME'] = change.new
GOOGLE_CLOUD_BUCKET_NAME = ipywidgets.Text(
description='GOOGLE_CLOUD_BUCKET_NAME:',
style={'description_width': 'initial'},
layout=ipywidgets.Layout(width='50%'),
)
GOOGLE_CLOUD_BUCKET_NAME.observe(bucket_changed, names='value')
display(GOOGLE_CLOUD_BUCKET_NAME)
from xmanager import xm
from xmanager import xm_local
# This code block sets FLAGS to use default values to avoid an absl.flags.UnparsedFlagAccessError.
# Normally XManager flags are set via the command-line with `xmanager train.py -- --key=value`
from absl import flags
flags.FLAGS([''])
flags.FLAGS.xm_wrap_late_bindings = True
"""
Explanation: Use gcloud auth application-default login if the above command results in an error or the project is incorrect.
XManager requires a Google Cloud Storage Bucket. Create one if one does not already exist and enter it in the box below.
End of explanation
"""
import itertools
import os
from xmanager import xm
from xmanager import xm_local
"""
Explanation: Launching an experiment
This code block imports dependencies used in later steps.
End of explanation
"""
async with xm_local.create_experiment(experiment_title='my-first-experiment') as experiment:
print(f'Local Experiment created with experiment_id={experiment.experiment_id}')
"""
Explanation: An experiment can be broken down into 5 steps:
Creating the experiment.
Defining the executable specification.
Defining the execution environment.
Creating the jobs.
Defining the hyperparameters.
Creating the experiment
Give the experiment a name. The create_experiment method will also create a unique integer id for the experiment and save this experiment to a database.
End of explanation
"""
[executable] = experiment.package([
xm.python_container(
executor_spec=xm_local.Vertex.Spec(),
path=os.path.expanduser('~/xmanager/examples/cifar10_torch'),
entrypoint=xm.ModuleName('cifar10'),
)
])
"""
Explanation: Defining the executable specification
Define the job that will run in the experiment. A PythonContainer is an example of an executable specification. This executable specification tells XManager to package everything inside PythonContainer.path as a container and use PythonContainer.entrypoint as the main module. Because we cloned XManager to ~/xmanager in an earlier step, we can use one of the examples, ~/xmanager/examples/cifar10_torch, as the path.
We also need to declare where the executable should be staged. This step uploads the executable to the storage option best suited for the execution environment. For example, if the execution environment is Vertex AI, the executable must be stored in Google Container Registry. The Vertex.Spec() specification uploads it to Google Container Registry, where it will be accessible to Vertex AI.
End of explanation
"""
executor = xm_local.Vertex(xm.JobRequirements(T4=1))
"""
Explanation: Defining the execution environment
Declare where the job will run and what compute requirements are necessary to run one job. To run on Vertex AI, we must use the xm_local.Vertex executor. Each job should use one NVIDIA T4 GPU, so we pass an xm.JobRequirements to the executor.
End of explanation
"""
async with xm_local.create_experiment(experiment_title='cifar10') as experiment:
experiment.add(xm.Job(
executable=executable,
executor=executor,
args={'batch_size': 64, 'learning_rate': 0.01},
))
"""
Explanation: Launching the jobs
Finally, we can create an experiment and add experiment units to it. To add a single job to the experiment, create an xm.Job object that combines the executable, executor, and custom hyperparameter arguments, then add the job to the experiment.
End of explanation
"""
inputs = {
'batch_size': [64, 128],
'learning_rate': [0.01, 0.001],
}
hyperparameters = list(dict(zip(inputs, x)) for x in itertools.product(*inputs.values()))
from pprint import pprint
pprint(hyperparameters)
"""
Explanation: Defining the hyperparameters
In research, it is often required to run the experimental setup multiple times with different hyperparameter values. This is called hyperparameter optimization. The simplest form of hyperparameter optimization is called grid search or parameter sweep, which is an exhaustive search through all possible Cartesian products of hyperparameter values. Grid search trials can be constructed using itertools.
End of explanation
"""
async with xm_local.create_experiment(experiment_title='cifar10') as experiment:
for hparams in hyperparameters:
experiment.add(xm.Job(
executable=executable,
executor=executor,
args=hparams,
))
"""
Explanation: To perform the grid search, loop over all the hyperparameters, passing a different hyperparameter configuration to the args parameter of each job. Add each job to the experiment.
End of explanation
"""
[e.experiment_id for e in xm_local.list_experiments()]
"""
Explanation: Tracking job status
You can list all of your previous experiments.
End of explanation
"""
# TODO: Use experiment.work_units instead of private member.
for i, unit in enumerate(experiment._experiment_units):
print(f'[{i}] Completed: {unit.get_status().is_completed}, Failed: {unit.get_status().is_failed}')
"""
Explanation: Some execution environments allow you to track the status of jobs in an experiment. Vertex AI is one of the execution environments that supports job-tracking.
End of explanation
"""
async with xm_local.create_experiment(experiment_title='cifar10') as experiment:
[executable] = experiment.package([
xm.python_container(
executor_spec=xm_local.Vertex.Spec(),
path=os.path.expanduser('~/xmanager/examples/cifar10_torch'),
entrypoint=xm.ModuleName('cifar10'),
)
])
batch_sizes = [64, 128]
learning_rates = [0.01, 0.001]
trials = list(
dict([('batch_size', bs), ('learning_rate', lr)])
for (bs, lr) in itertools.product(batch_sizes, learning_rates)
)
for hyperparameters in trials:
experiment.add(xm.Job(
executable=executable,
executor=xm_local.Vertex(requirements=xm.JobRequirements(T4=1)),
args=hyperparameters,
))
"""
Explanation: End to end
Combining everything above into a single code-block, the launch script looks like this:
End of explanation
"""
|
martinjrobins/hobo | examples/stats/log-priors.ipynb | bsd-3-clause | import pints
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Inference: Log priors
This example notebook illustrates some of the functionality that is available for LogPrior objects that are currently available within PINTS.
End of explanation
"""
uniform_log_prior = pints.UniformLogPrior(-10, 15)
print('U(0|a=-10, b=15) = ' + str(uniform_log_prior([0])))
"""
Explanation: The uniform prior, $\theta \sim U(a, b)$, here with $a=-10$ and $b=15$. When this object is called, its log density is returned.
End of explanation
"""
values = np.linspace(-20, 20, 1000)
log_prob = [uniform_log_prior([x]) for x in values]
prob = np.exp(log_prob)
plt.figure(figsize=(10,4))
plt.xlabel('theta')
plt.ylabel('Density')
plt.plot(values, prob)
plt.show()
"""
Explanation: To plot the density, we take the exponential of its log density.
End of explanation
"""
uniform_log_prior = pints.UniformLogPrior([2, -7], [4, -5])
"""
Explanation: To specify a multidimensional uniform prior, use the same function. Here we specify $\theta_1\sim U(2, 4)$ and $\theta_2\sim U(-7,-5)$.
End of explanation
"""
values = np.linspace(-10, -4, 1000)
log_prob = [uniform_log_prior([3, x]) for x in values]
prob = np.exp(log_prob)
plt.figure(figsize=(10,4))
plt.xlabel('theta[2]')
plt.ylabel('Density')
plt.plot(values, prob)
plt.show()
"""
Explanation: Plot $p(\theta_2|\theta_1 = 3)$.
End of explanation
"""
beta_log_prior1 = pints.BetaLogPrior(1, 1)
beta_log_prior2 = pints.BetaLogPrior(5, 3)
beta_log_prior3 = pints.BetaLogPrior(3, 5)
beta_log_prior4 = pints.BetaLogPrior(10, 10)
values = np.linspace(0, 1, 1000)
prob1 = np.exp([beta_log_prior1([x]) for x in values])
prob2 = np.exp([beta_log_prior2([x]) for x in values])
prob3 = np.exp([beta_log_prior3([x]) for x in values])
prob4 = np.exp([beta_log_prior4([x]) for x in values])
plt.figure(figsize=(10,4))
plt.xlabel('theta')
plt.ylabel('Density')
plt.plot(values, prob1)
plt.plot(values, prob2)
plt.plot(values, prob3)
plt.plot(values, prob4)
plt.legend(['beta(1, 1)', 'beta(5, 3)', 'beta(3, 5)', 'beta(10, 10)'])
plt.show()
"""
Explanation: If you have a prior constrained to lie $\in[0,1]$, you can use a beta prior.
End of explanation
"""
print('beta(-0.5|a=1, b=1) = ' + str(beta_log_prior1([-0.5])))
"""
Explanation: Specifying a value outside the support of the distribution returns $-\infty$ for the log density.
End of explanation
"""
print('mean = ' + str(beta_log_prior3.mean()))
"""
Explanation: Each prior has a mean function that allows you to quickly check what parameterisation is being used.
End of explanation
"""
truncnorm_log_prior = pints.TruncatedGaussianLogPrior(2.0, 1.0, 0.0, 4.25)
values = np.linspace(-1, 6, 1000)
prob = np.exp([truncnorm_log_prior([x]) for x in values])
plt.figure(figsize=(10,4))
plt.xlabel('theta')
plt.ylabel('Density')
plt.plot(values, prob)
plt.show()
"""
Explanation: Alternatively, if you need a prior constrained to lie $\in[a,b]$, but for which a Gaussian distribution might otherwise be appropriate, you can use the truncated Gaussian prior (also known as a truncated normal).
End of explanation
"""
n = 10000
student_t_log_prior = pints.StudentTLogPrior(10, 8, 5)
samples = student_t_log_prior.sample(n)
plt.hist(samples, 20)
plt.xlabel('theta')
plt.ylabel('Frequency')
plt.show()
"""
Explanation: Each prior also has a sample function which allows generation of independent samples from each distribution. Using this we can sample from a Student-t density, with parameters (location, degrees of freedom, scale).
End of explanation
"""
log_prior1 = pints.GaussianLogPrior(6, 3)
log_prior2 = pints.InverseGammaLogPrior(5, 5)
log_prior3 = pints.LogNormalLogPrior(-1, 1)
composed_log_prior = pints.ComposedLogPrior(log_prior1, log_prior2, log_prior3)
# calling
composed_log_prior([-3, 1, 6])
"""
Explanation: For models with multiple parameters, we can specify different distributions for each dimension using ComposedLogPrior.
End of explanation
"""
print('mean = ' + str(composed_log_prior.mean()))
n = 10
samples = composed_log_prior.sample(1000)
plt.hist(samples[:, 0], alpha=0.5)
plt.hist(samples[:, 1], alpha=0.5)
plt.hist(samples[:, 2], alpha=0.5)
plt.legend(['Gaussian(6, 3)', 'InverseGamma(5, 5)', 'LogNormal(-1, 1)'])
plt.xlabel('theta')
plt.ylabel('Frequency')
plt.show()
"""
Explanation: Functions like sample and mean also work for ComposedLogPrior objects.
End of explanation
"""
two_d_gaussian_log_prior = pints.MultivariateGaussianLogPrior([0, 10], [[1, 0.5], [0.5, 3]])
# Contour plot of pdf
x = np.linspace(-3, 3, 100)
y = np.linspace(4, 15, 100)
X, Y = np.meshgrid(x, y)
Z = np.exp([[two_d_gaussian_log_prior([i, j]) for i in x] for j in y])
plt.contour(X, Y, Z)
plt.xlabel('theta[2]')
plt.ylabel('theta[1]')
plt.show()
"""
Explanation: We also have multivariate priors in PINTS. For example, the multivariate Gaussian.
End of explanation
"""
mean = [-5.5, 6.7, 3.2]
covariance = [[3.4, -0.5, -0.7], [-0.5, 2.7, 1.4], [-0.7, 1.4, 5]]
log_prior = pints.MultivariateGaussianLogPrior(mean, covariance)
n = 1000
samples = log_prior.sample(n)
plt.scatter(samples[:, 1], samples[:, 2])
plt.show()
"""
Explanation: Converting prior samples to be uniform within unit cube
Some inference methods only work when samples are uniformly distributed in the unit cube. PINTS contains methods to convert prior samples to samples from the unit cube (often, but not only, using the cumulative distribution function (CDF)).
Here we show how this function works for the multivariate Gaussian (a case of when a different transformation to the CDF is applied).
First, we show samples from the prior.
End of explanation
"""
u = []
for i in range(n):
u.append(log_prior.convert_to_unit_cube(samples[i]))
u = np.vstack(u)
plt.scatter(u[:, 1], u[:, 2])
plt.show()
"""
Explanation: Next, we show those samples after they have been converted to be uniform on the unit cube.
End of explanation
"""
theta = []
for i in range(n):
theta.append(log_prior.convert_from_unit_cube(u[i]))
theta = np.vstack(theta)
plt.scatter(theta[:, 1], theta[:, 2])
plt.show()
"""
Explanation: And we can convert them back again.
End of explanation
"""
|
mnnit-workspace/Logical-Rhythm-17 | Class-4/Introduction to Pandas and Exploring Iris Dataset.ipynb | mit | # importing pandas package with alias pd
import pandas as pd
#create a data frame - dictionary is used here where keys get converted to column names and values to row values.
data = pd.DataFrame({'Country': ['Russia','Colombia','Chile','Equador','Nigeria'],
'Rank':[121,40,100,130,11]})
data
# describe() method computes summary statistics of integer / double variables
data.describe()
# info() gives a more detailed statistics about data in dataframe
data.info()
data = pd.DataFrame({'group':['a', 'a', 'a', 'b','b', 'b', 'c', 'c','c'],'ounces':[4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
# head(n) gives first n rows of dataframe
data.head(3)
#Let's sort the data frame by ounces - inplace = True will make changes to the data
data.sort_values(by=['ounces'],ascending=True,inplace=False)
# Sorting on multiple columns
data.sort_values(by=['group','ounces'],ascending=[True,False],inplace=False)
"""
Explanation: Introduction to Pandas
End of explanation
"""
data = pd.DataFrame({'k1':['one']*3 + ['two']*4, 'k2':[3,2,1,3,3,4,4]})
data.sort_values(by='k2')
#remove duplicates
data.drop_duplicates()
"""
Explanation: Often, we get data sets with duplicate rows, which are nothing but noise. Therefore, before training the model, we need to make sure we get rid of such inconsistencies in the data set. Let's see how we can remove duplicate rows.
End of explanation
"""
data.drop_duplicates(subset='k1')
"""
Explanation: Here, we removed duplicates based on matching row values across all columns. Alternatively, we can also remove duplicates based on a particular column. Let's remove duplicate values from the k1 column.
End of explanation
"""
import numpy as np
data = pd.DataFrame({'food': ['bacon', 'pulled pork', 'bacon', 'Pastrami','corned beef', 'Bacon', 'pastrami', 'honey ham','nova lox'],
'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
# Creates a new column "protein" and assigns 9 random values to it
data = data.assign(protein = np.random.random(9))
data
# Let's remove the added column
data.drop('protein',axis='columns',inplace=True)
data
"""
Explanation: Let's see how we can add a new column to our DataFrame.
End of explanation
"""
dates = pd.date_range('20130101',periods=6)
df = pd.DataFrame(np.random.randn(6,4),index=dates,columns=list('ABCD'))
df
#get first n rows from the data frame
df[:3]
#slice based on date range
df['20130101':'20130104']
#slicing based on column names
df.loc[:,['A','B']]
df.loc['20130102':'20130103',['A','B']]
#slicing based on index of columns
df.iloc[3] #returns 4th row (index is 3rd)
#returns a specific range of rows
df.iloc[2:4, 0:2] # Selects rows from 2:4 and columns from 0:2
# Comparing
df['B'] > 1
# Boolean indexing based on column values as well. This helps in filtering a data set based on a pre-defined condition
df[df['B'] > 1]
"""
Explanation: Let's see how we can slice DataFrames.
End of explanation
"""
#list all columns where A is greater than C
df.query('A > C')
#using OR condition
df.query('A < B | C > A')
"""
Explanation: We can also use the query method to select rows based on a criterion.
End of explanation
"""
# Reading the csv file using pandas into a DataFrame
iris = pd.read_csv('Iris.csv')
iris.head(10)
iris.describe()
iris.info()
iris['Species'].value_counts()
"""
Explanation: Exploring the Iris dataset
End of explanation
"""
# We can plot things is using the .plot extension from Pandas dataframes
# We'll use this to make a scatterplot of the Iris features.
%matplotlib inline
iris.plot(kind="scatter", x="SepalLengthCm", y="SepalWidthCm")
# Importing some visualization libraries
import warnings
warnings.filterwarnings("ignore")
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
sns.set(style="white", color_codes=True)
# One piece of information missing in the plots above is what species each plant is
# We'll use seaborn's FacetGrid to color the scatterplot by species
sns.FacetGrid(iris, hue="Species", size=5) \
.map(plt.scatter, "SepalLengthCm", "SepalWidthCm") \
.add_legend()
# We can look at an individual feature in Seaborn through a boxplot
sns.boxplot(x="Species", y="PetalLengthCm", data=iris)
# Another useful seaborn plot is the pairplot, which shows the bivariate relation
# between each pair of features
#
# From the pairplot, we'll see that the Iris-setosa species is separataed from the other
# two across all feature combinations
sns.pairplot(iris.drop("Id", axis=1), hue="Species", size=3)
# Box plots can also be made using DataFrame
iris.drop("Id", axis=1).boxplot(by="Species",figsize=(12, 12))
"""
Explanation: Data Visualization
End of explanation
"""
target_map = {'Iris-setosa':0, 'Iris-versicolor':1,'Iris-virginica':2 }
# Use the pandas apply method to numerically encode our attrition target variable
iris['Species'] = iris['Species'].apply(lambda x: target_map[x])
iris
# importing alll the necessary packages to use the various classification algorithms
from sklearn.linear_model import LogisticRegression # for Logistic Regression algorithm
from sklearn.model_selection import train_test_split # to split the dataset for training and testing (cross_validation is removed in modern scikit-learn)
from sklearn import metrics #for checking the model accuracy
from sklearn.tree import DecisionTreeClassifier #for using Decision Tree Algoithm
from sklearn.ensemble import RandomForestClassifier # A combine model of many decision trees
"""
Explanation: Machine Learning Algorithms and Decision Boundaries
Since our Species column contains these three class labels as categorical data, the first thing to do is to encode them numerically as follows:
End of explanation
"""
X = iris[['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm']]
Y = iris['Species']
plt.figure(figsize=(10,8))
sns.heatmap(X.corr(),annot=True,cmap='cubehelix_r') #draws heatmap with input as the correlation matrix calculted by(iris.corr())
plt.show()
"""
Explanation: Correlation among features
When we train any algorithm, the number of features and their correlation plays an important role. If many of the features are highly correlated, then training an algorithm with all of them will reduce the accuracy. Thus feature selection should be done carefully.
End of explanation
"""
train, test = train_test_split(iris, test_size = 0.3, random_state=1212)# in this our main data is split into train and test
# the attribute test_size=0.3 splits the data into 70% and 30% ratio. train=70% and test=30%
print(train.shape)
print(test.shape)
train_X = train[['SepalLengthCm','SepalWidthCm','PetalLengthCm','PetalWidthCm']]# taking the training data features
train_y = train.Species# output of our training data
test_X = test[['SepalLengthCm','SepalWidthCm','PetalLengthCm','PetalWidthCm']] # taking test data features
test_y = test.Species #output value of test data
model = LogisticRegression()
model.fit(train_X,train_y)
prediction = model.predict(test_X)
print('The accuracy of the Logistic Regression is',metrics.accuracy_score(prediction,test_y))
model = DecisionTreeClassifier()
model.fit(train_X,train_y)
prediction = model.predict(test_X)
print('The accuracy of the Decision Tree is',metrics.accuracy_score(prediction,test_y))
"""
Explanation: Observation
The Sepal Width and Length are not correlated. The Petal Width and Length are highly correlated.
We will use all the features for training the algorithm and check the accuracy.
Then we will use one Petal feature and one Sepal feature to check the accuracy of the algorithm, since these two features are not correlated. This gives us more variance in the dataset, which may help improve accuracy. We will check this later.
Steps To Be Followed When Applying an Algorithm
Split the dataset into training and testing sets. The testing set is generally smaller than the training one, since more training data helps the model learn better.
Select an algorithm suited to the problem (classification or regression).
Then pass the training dataset to the algorithm to train it.
Then pass the testing data to the trained algorithm to predict the outcome.
We then check the accuracy by comparing the predicted outcome with the actual output.
Splitting The Data into Training And Testing Dataset
End of explanation
"""
petal=iris[['PetalLengthCm','PetalWidthCm','Species']]
sepal=iris[['SepalLengthCm','SepalWidthCm','Species']]
train_p,test_p=train_test_split(petal,test_size=0.3,random_state=0) #petals
train_x_p=train_p[['PetalWidthCm','PetalLengthCm']]
train_y_p=train_p.Species
test_x_p=test_p[['PetalWidthCm','PetalLengthCm']]
test_y_p=test_p.Species
train_s,test_s=train_test_split(sepal,test_size=0.3,random_state=0) #Sepal
train_x_s=train_s[['SepalWidthCm','SepalLengthCm']]
train_y_s=train_s.Species
test_x_s=test_s[['SepalWidthCm','SepalLengthCm']]
test_y_s=test_s.Species
model = LogisticRegression()
model.fit(train_x_p,train_y_p)
prediction=model.predict(test_x_p)
print('The accuracy of the Logistic Regression using Petals is:',metrics.accuracy_score(prediction,test_y_p))
model.fit(train_x_s,train_y_s)
prediction=model.predict(test_x_s)
print('The accuracy of the Logistic Regression using Sepals is:',metrics.accuracy_score(prediction,test_y_s))
model=DecisionTreeClassifier()
model.fit(train_x_p,train_y_p)
prediction=model.predict(test_x_p)
print('The accuracy of the Decision Tree using Petals is:',metrics.accuracy_score(prediction,test_y_p))
model.fit(train_x_s,train_y_s)
prediction=model.predict(test_x_s)
print('The accuracy of the Decision Tree using Sepals is:',metrics.accuracy_score(prediction,test_y_s))
"""
Explanation: We used all the features of Iris in the models above. Now we will use Petals and Sepals separately.
End of explanation
"""
from sklearn.preprocessing import StandardScaler
import plotly.graph_objs as go
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
from plotly import tools
X = iris.iloc[:, :2] # Take only the first two features.
y = iris.Species
h = .02 # step size in the mesh
X = StandardScaler().fit_transform(X)
# Implement 3 Logistic Regression models with varying values of C
clf = LogisticRegression(C=0.01)
clf.fit(X, y)
clf2 = LogisticRegression(C=1)
clf2.fit(X, y)
clf3 = LogisticRegression(C=100)
clf3.fit(X, y)
# Define our usual decision surface bounding plots
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h)
, np.arange(y_min, y_max, h))
y_ = np.arange(y_min, y_max, h)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
trace1 = go.Heatmap(x=xx[0], y=y_, z=Z,
colorscale='Viridis',
showscale=True)
trace2 = go.Scatter(x=X[:, 0], y=X[:, 1],
mode='markers',
showlegend=False,
marker=dict(size=10,
color=y,
colorscale='Viridis',
line=dict(color='black', width=1))
)
layout= go.Layout(
autosize= True,
title= 'Logistic Regression (C=0.01)',
hovermode= 'closest',
showlegend= False)
data = [trace1, trace2]
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
Z = clf2.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
#Z = Z.reshape((xx.shape[0], xx.shape[1], 3))
trace3 = go.Heatmap(x=xx[0], y=y_,
z=Z,
colorscale='Viridis',
showscale=True)
trace4 = go.Scatter(x=X[:, 0], y=X[:, 1],
mode='markers',
showlegend=False,
marker=dict(size=10,
color=y,
colorscale='Viridis',
line=dict(color='black', width=1))
)
layout= go.Layout(
autosize= True,
title= 'Logistic Regression (C=1)',
hovermode= 'closest',
showlegend= False)
data = [trace3, trace4]
fig2 = go.Figure(data=data,layout= layout)
for i in map(str, range(1, 3)):
x = 'xaxis' + i
y = 'yaxis' + i
fig['layout'][x].update(showgrid=False, zeroline=False,
showticklabels=False, ticks='', autorange=True)
fig['layout'][y].update(showgrid=False, zeroline=False,
showticklabels=False, ticks='', autorange=True)
py.iplot(fig2)
del X, y # remove the earlier X and y
X = iris.iloc[:, :2] # Take only the first two features.
y = iris.Species
h = .02 # step size in the mesh
X = StandardScaler().fit_transform(X)
Z = clf3.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
trace5 = go.Heatmap(x=xx[0], y=y_,
z=Z,
colorscale='Viridis',
showscale=True)
trace6 = go.Scatter(x=X[:, 0], y=X[:, 1],
mode='markers',
showlegend=False,
marker=dict(size=10,
color=y,
colorscale='Viridis',
line=dict(color='black', width=1))
)
layout= go.Layout(
autosize= True,
title= 'Logistic Regression (C=100)',
hovermode= 'closest',
showlegend= False)
data = [trace5, trace6]
fig3 = go.Figure(data=data,layout= layout)
py.iplot(fig3)
"""
Explanation: Plotting Decision Surface
End of explanation
"""
|
ece579/ece579_f17 | recitation4/problems/sparkSQL.ipynb | mit | from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
from pyspark.sql import Row
csv_data = raw.map(lambda l: l.split(","))
row_data = csv_data.map(lambda p: Row(
duration=int(p[0]),
protocol_type=p[1],
service=p[2],
flag=p[3],
src_bytes=int(p[4]),
dst_bytes=int(p[5])
)
)
"""
Explanation: DataFrame
A DataFrame is a Dataset organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood. DataFrames can be constructed from a wide array of sources such as: structured data files, tables in Hive, external databases, or existing RDDs
We want to convert our raw data into a table. But first we have to parse it and assign desired rows and headers, something like csv format.
End of explanation
"""
kdd_df = sqlContext.createDataFrame(row_data)
kdd_df.registerTempTable("KDDdata")
# Select tcp network interactions with more than 2 second duration and no transfer from destination
tcp_interactions = sqlContext.sql("SELECT duration, dst_bytes FROM KDDdata WHERE protocol_type = 'tcp' AND duration > 2000 AND dst_bytes = 0")
tcp_interactions.show(10)
# Complete the query to filter data with duration > 2000, dst_bytes = 0.
# Then group the filtered elements by protocol_type and show the total count in each group.
# Refer - https://spark.apache.org/docs/latest/sql-programming-guide.html#dataframegroupby-retains-grouping-columns
kdd_df.select("protocol_type", "duration", "dst_bytes").filter(kdd_df.duration>2000)#.more query...
def transform_label(label):
'''
Create a function to parse input label
such that if input label is not normal
then it is an attack
'''
row_labeled_data = csv_data.map(lambda p: Row(
duration=int(p[0]),
protocol_type=p[1],
service=p[2],
flag=p[3],
src_bytes=int(p[4]),
dst_bytes=int(p[5]),
label=transform_label(p[41])
)
)
kdd_labeled = sqlContext.createDataFrame(row_labeled_data)
'''
Write a query to select label,
group it and then count total elements
in that group
'''
# query
"""
Explanation: Once we have our RDD of Row objects, we can infer a schema and register the DataFrame as a table. We can then operate on it with SQL queries.
End of explanation
"""
kdd_labeled.select("label", "protocol_type", "dst_bytes").groupBy("label", "protocol_type", kdd_labeled.dst_bytes==0).count().show()
"""
Explanation: We can combine select, groupBy and column expressions on DataFrames to summarise our data efficiently.
End of explanation
"""
|
microsoft/dowhy | docs/source/example_notebooks/dowhy_causal_discovery_example.ipynb | mit | import dowhy
from dowhy import CausalModel
from rpy2.robjects import r as R
%load_ext rpy2.ipython
import numpy as np
import pandas as pd
import graphviz
import networkx as nx
np.set_printoptions(precision=3, suppress=True)
np.random.seed(0)
"""
Explanation: Causal Discovery example
The goal of this notebook is to show how causal discovery methods can work with DoWhy. We use discovery methods from the Causal Discovery Toolbox (CDT) repo. As we will see, causal discovery methods are not fool-proof and there is no guarantee that they will recover the correct causal graph. Even for the simple examples below, there is a large variance in results. These methods, however, may be combined usefully with domain knowledge to construct the final causal graph.
End of explanation
"""
def make_graph(adjacency_matrix, labels=None):
idx = np.abs(adjacency_matrix) > 0.01
dirs = np.where(idx)
d = graphviz.Digraph(engine='dot')
names = labels if labels else [f'x{i}' for i in range(len(adjacency_matrix))]
for name in names:
d.node(name)
for to, from_, coef in zip(dirs[0], dirs[1], adjacency_matrix[idx]):
d.edge(names[from_], names[to], label=str(coef))
return d
def str_to_dot(string):
'''
Converts input string from graphviz library to valid DOT graph format.
'''
graph = string.replace('\n', ';').replace('\t','')
graph = graph[:9] + graph[10:-2] + graph[-1] # Removing unnecessary characters from string
return graph
"""
Explanation: Utility function
We define a utility function to draw the directed acyclic graph.
End of explanation
"""
data_mpg = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data-original',
delim_whitespace=True, header=None,
names = ['mpg', 'cylinders', 'displacement',
'horsepower', 'weight', 'acceleration',
'model year', 'origin', 'car name'])
data_mpg.dropna(inplace=True)
data_mpg.drop(['model year', 'origin', 'car name'], axis=1, inplace=True)
print(data_mpg.shape)
data_mpg.head()
"""
Explanation: Experiments on the Auto-MPG dataset
In this section, we will use a dataset on the technical specification of cars. The dataset is downloaded from UCI Machine Learning Repository. The dataset contains 9 attributes and 398 instances. We do not know the true causal graph for the dataset and will use CDT to discover it. The causal graph obtained will then be used to estimate the causal effect.
1. Load the data
End of explanation
"""
from cdt.causality.graph import LiNGAM, PC, GES
graphs = {}
labels = [f'{col}' for i, col in enumerate(data_mpg.columns)]
functions = {
'LiNGAM' : LiNGAM,
'PC' : PC,
'GES' : GES,
}
for method, lib in functions.items():
obj = lib()
output = obj.predict(data_mpg)
adj_matrix = nx.to_numpy_matrix(output)
adj_matrix = np.asarray(adj_matrix)
graph_dot = make_graph(adj_matrix, labels)
graphs[method] = graph_dot
# Visualize graphs
for method, graph in graphs.items():
print("Method : %s"%(method))
display(graph)
"""
Explanation: Causal Discovery with the Causal Discovery Toolbox (CDT)
We use the CDT library to perform causal discovery on the Auto-MPG dataset. We use three methods for causal discovery here: LiNGAM, PC and GES. These methods are widely used and do not take much time to run. Hence, these are ideal for an introduction to the topic. Other neural network based methods are also available in CDT and the users are encouraged to try them out by themselves.
The documentation for the methods used are as follows:
- LiNGAM [link]
- PC [link]
- GES [link]
End of explanation
"""
for method, graph in graphs.items():
if method != "LiNGAM":
continue
print('\n*****************************************************************************\n')
print("Causal Discovery Method : %s"%(method))
# Obtain valid dot format
graph_dot = str_to_dot(graph.source)
# Define Causal Model
model=CausalModel(
data = data_mpg,
treatment='mpg',
outcome='weight',
graph=graph_dot)
# Identification
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
print(identified_estimand)
# Estimation
estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.linear_regression",
control_value=0,
treatment_value=1,
confidence_intervals=True,
test_significance=True)
print("Causal Estimate is " + str(estimate.value))
"""
Explanation: As you can see, no two methods agree on the graphs. PC and GES effectively produce an undirected graph whereas LiNGAM produces a directed graph. We use only the LiNGAM method in the next section.
Estimate causal effects using Linear Regression
Now let us see whether these differences in the graphs also lead to significant differences in the causal estimate of the effect of mpg on weight.
End of explanation
"""
from cdt.data import load_dataset
data_sachs, graph_sachs = load_dataset("sachs")
data_sachs.dropna(inplace=True)
print(data_sachs.shape)
data_sachs.head()
"""
Explanation: As mentioned earlier, due to the absence of directed edges, no backdoor, instrumental or frontdoor variables can be found for PC and GES. Thus, causal effect estimation is not possible for these methods. However, LiNGAM does discover a DAG and hence, it is possible to output a causal estimate for LiNGAM. The estimate is still pretty far from the original estimate of -70.466 (which can be calculated from the graph).
Experiments on the Sachs dataset
The dataset consists of the simultaneous measurements of 11 phosphorylated proteins and phospholipids derived from thousands of individual primary immune system cells, subjected to both general and specific molecular interventions (Sachs et al., 2005).
The specifications of the dataset are as follows -
- Number of nodes: 11
- Number of arcs: 17
- Number of parameters: 178
- Average Markov blanket size: 3.09
- Average degree: 3.09
- Maximum in-degree: 3
- Number of instances: 7466
The original causal graph is known for the Sachs dataset and we compare the original graph with the ones discovered using CDT in this section.
1. Load the data
End of explanation
"""
labels = [f'{col}' for i, col in enumerate(data_sachs.columns)]
adj_matrix = nx.to_numpy_matrix(graph_sachs)
adj_matrix = np.asarray(adj_matrix)
graph_dot = make_graph(adj_matrix, labels)
display(graph_dot)
"""
Explanation: Ground truth of the causal graph
End of explanation
"""
from cdt.causality.graph import LiNGAM, PC, GES
graphs = {}
graphs_nx = {}
labels = [f'{col}' for i, col in enumerate(data_sachs.columns)]
functions = {
'LiNGAM' : LiNGAM,
'PC' : PC,
'GES' : GES,
}
for method, lib in functions.items():
obj = lib()
output = obj.predict(data_sachs)
graphs_nx[method] = output
adj_matrix = nx.to_numpy_matrix(output)
adj_matrix = np.asarray(adj_matrix)
graph_dot = make_graph(adj_matrix, labels)
graphs[method] = graph_dot
# Visualize graphs
for method, graph in graphs.items():
print("Method : %s"%(method))
display(graph)
"""
Explanation: Causal Discovery with Causal Discovery Tool (CDT)
We use the CDT library to perform causal discovery on the Sachs dataset. We use three methods for causal discovery here - LiNGAM, PC and GES. These methods are widely used and do not take much time to run, which makes them ideal for an introduction to the topic. Other neural-network-based methods are also available in CDT, and users are encouraged to try them out themselves.
The documentation for the methods used is as follows:
- LiNGAM [link]
- PC [link]
- GES [link]
End of explanation
"""
for method, graph in graphs.items():
if method != "LiNGAM":
continue
print('\n*****************************************************************************\n')
print("Causal Discovery Method : %s"%(method))
# Obtain valid dot format
graph_dot = str_to_dot(graph.source)
# Define Causal Model
model=CausalModel(
data = data_sachs,
treatment='PIP2',
outcome='PKC',
graph=graph_dot)
# Identification
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
print(identified_estimand)
# Estimation
estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.linear_regression",
control_value=0,
treatment_value=1,
confidence_intervals=True,
test_significance=True)
print("Causal Estimate is " + str(estimate.value))
"""
Explanation: As you can see, no two methods agree on the graphs. Next, we study the causal estimates implied by these different graphs.
Estimate effects using Linear Regression
Now let us see whether these differences in the graphs also lead to significant differences in the causal estimate of the effect of PIP2 on PKC.
End of explanation
"""
from cdt.metrics import SHD, SHD_CPDAG, SID, SID_CPDAG
from numpy.random import randint
for method, graph in graphs_nx.items():
print("***********************************************************")
print("Method: %s"%(method))
tar, pred = graph_sachs, graph
print("SHD_CPDAG = %f"%(SHD_CPDAG(tar, pred)))
print("SHD = %f"%(SHD(tar, pred, double_for_anticausal=False)))
print("SID_CPDAG = [%f, %f]"%(SID_CPDAG(tar, pred)))
print("SID = %f"%(SID(tar, pred)))
"""
Explanation: From the causal estimates obtained, it can be seen that the three estimates differ in several respects. The graph obtained using LiNGAM contains a backdoor path and instrumental variables. On the other hand, the graph obtained using PC contains a backdoor path and a frontdoor path. However, despite these differences, both yield the same mean causal estimate.
The graph obtained using GES contains only a backdoor path with different backdoor variables, and yields a different causal estimate from the first two cases.
Graph Validation
We compare the graphs obtained using the causal discovery methods with the true causal graph, using two graph distance metrics - Structural Hamming Distance (SHD) and Structural Intervention Distance (SID). SHD between two graphs is, in simple terms, the number of edge insertions, deletions or flips required to transform one graph into the other. SID, on the other hand, is based on a graphical criterion only and quantifies the closeness between two DAGs in terms of their corresponding causal inference statements.
End of explanation
"""
import itertools
from numpy.random import randint
from cdt.metrics import SHD, SHD_CPDAG, SID, SID_CPDAG
# Find combinations of pair of methods to compare
combinations = list(itertools.combinations(graphs_nx, 2))
for pair in combinations:
print("***********************************************************")
graph1 = graphs_nx[pair[0]]
graph2 = graphs_nx[pair[1]]
print("Methods: %s and %s"%(pair[0], pair[1]))
print("SHD_CPDAG = %f"%(SHD_CPDAG(graph1, graph2)))
print("SHD = %f"%(SHD(graph1, graph2, double_for_anticausal=False)))
print("SID_CPDAG = [%f, %f]"%(SID_CPDAG(graph1, graph2)))
print("SID = %f"%(SID(graph1, graph2)))
"""
Explanation: The graph similarity metrics show that the scores are the lowest for the LiNGAM method of graph extraction. Hence, of the three methods used, LiNGAM provides the graph that is most similar to the original graph.
Graph Refutation
Here, we use the same SHD and SID metrics to find out how different the discovered graphs are from each other.
End of explanation
"""
|
jbocharov-mids/W207-Machine-Learning | Regression.ipynb | apache-2.0 | # This tells matplotlib not to try opening a new window for each plot.
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import time
from numpy.linalg import inv
from sklearn.datasets import load_boston
from sklearn.linear_model import LinearRegression
from sklearn import preprocessing
np.set_printoptions(precision=4, suppress=True)
"""
Explanation: Experiment with Gradient Descent for Linear Regression.
Enough irises. Let's use the Boston housing data set and try training some regression models. This notebook features the <a href="http://stanford.edu/~mwaskom/software/seaborn/index.html">seaborn package</a> which makes all plots prettier and has some nice built-in features like the correlation plot you'll see below. You'll probably have to install it (ideally with pip).
End of explanation
"""
boston = load_boston()
X, Y = boston.data, boston.target
plt.hist(Y, 50)
plt.xlabel('Median value (in $1000)')
print boston.DESCR
"""
Explanation: Load the Boston housing data. This data set is pretty out-of-date since it was collected in the 1970s. Each of the 506 entries represents a local district in the Boston area. The standard target variable is the median home value in the district (in 1000s of dollars). Let's print out the description along with a histogram of the target. Notice that the distribution of median value is roughly Gaussian with a significant outlier -- there are around 15 very wealthy districts.
End of explanation
"""
# Shuffle the data, but make sure that the features and accompanying labels stay in sync.
np.random.seed(0)
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, Y = X[shuffle], Y[shuffle]
# Split into train and test.
train_data, train_labels = X[:350], Y[:350]
test_data, test_labels = X[350:], Y[350:]
"""
Explanation: As usual, let's create separate training and test data.
End of explanation
"""
# Combine all the variables (features and target) into a single matrix so we can easily compute all the correlations.
# Is there a better way to do this??
train_labels_as_matrix = np.array([train_labels]).T
all_data = np.hstack((train_data, train_labels_as_matrix))
all_labels = np.append(boston.feature_names, 'VALUE')
# Use seaborn to create a pretty correlation heatmap.
fig, ax = plt.subplots(figsize=(9, 9))
cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.corrplot(all_data, names=all_labels, annot=True, sig_stars=False,
diag_names=False, cmap=cmap, ax=ax)
fig.tight_layout()
"""
Explanation: Before we start making any predictions, let's get some intuition about the data by examining the correlations. Seaborn makes it easy to visualize a correlation matrix. Note, for example, that value and crime rate are negatively correlated: districts with lower crime rates tend to be higher valued.
End of explanation
"""
# eta is the learning rate; smaller values will tend to give slower but more precise convergence.
# num_iters is the number of iterations to run.
def gradient_descent(train_data, target_data, eta, num_iters):
# Add a 1 to each feature vector so we learn an intercept.
X = np.c_[np.ones(train_data.shape[0]), train_data]
# m = number of samples, k = number of features
m, k = X.shape
# Initially, set all the parameters to 1.
theta = np.ones(k)
# Keep track of costs after each step.
costs = []
for iter in range(0, num_iters):
# Get the current predictions for the training examples given the current estimate of theta.
hypothesis = np.dot(X, theta)
# The loss is the difference between the predictions and the actual target values.
loss = hypothesis - target_data
# In standard linear regression, we want to minimize the sum of squared losses.
cost = np.sum(loss ** 2) / (2 * m)
costs.append(cost)
# Compute the gradient.
gradient = np.dot(X.T, loss) / m
# Update theta, scaling the gradient by the learning rate.
theta = theta - eta * gradient
return theta, costs
# Run gradient descent and plot the cost vs iterations.
theta, costs = gradient_descent(train_data[:,0:1], train_labels, .01, 500)
plt.plot(costs)
plt.xlabel('Iteration'), plt.ylabel('Cost')
plt.show()
"""
Explanation: Ok. Let's implement gradient descent. It's more efficient to implement it with vector calculations, though this may be a bit more difficult to understand at first glance. Try to think through each step and make sure you understand how it works.
End of explanation
"""
def OLS(X, Y):
# Add the intercept.
X = np.c_[np.ones(X.shape[0]), X]
# We use np.linalg.inv() to compute a matrix inverse.
return np.dot(inv(np.dot(X.T, X)), np.dot(X.T, Y))
ols_solution = OLS(train_data[:,0:1], train_labels)
lr = LinearRegression(fit_intercept=True)
lr.fit(train_data[:,0:1], train_labels)
print 'Our estimated theta: %.4f + %.4f*CRIM' %(theta[0], theta[1])
print 'OLS estimated theta: %.4f + %.4f*CRIM' %(ols_solution[0], ols_solution[1])
print 'sklearn estimated theta: %.4f + %.4f*CRIM' %(lr.intercept_, lr.coef_[0])
"""
Explanation: Let's compare our results to sklearn's regression as well as the algebraic solution to "ordinary least squares". Try increasing the number of iterations above to see whether we get closer.
End of explanation
"""
num_feats = 5
theta, costs = gradient_descent(train_data[:,0:num_feats], train_labels, .01, 10)
plt.plot(map(np.log, costs))
plt.xlabel('Iteration'), plt.ylabel('Log Cost')
plt.show()
"""
Explanation: Ok, let's try fitting a model that uses more of the variables. Let's run just a few iterations and check the cost function.
End of explanation
"""
start_time = time.time()
theta, costs = gradient_descent(train_data[:,0:num_feats], train_labels, .001, 100000)
train_time = time.time() - start_time
plt.plot(map(np.log, costs))
plt.xlabel('Iteration'), plt.ylabel('Log Cost')
plt.show()
print 'Training time: %.2f secs' %train_time
print 'Our estimated theta:', theta
print 'OLS estimated theta:', OLS(train_data[:,0:num_feats], train_labels)
"""
Explanation: The cost is increasing and fast! This can happen when the learning rate is too large. The updated parameters skip over the optimum and the cost ends up larger than it was before. Let's reduce the learning rate and try again.
End of explanation
"""
plt.figure(figsize=(15, 3))
for feature in range(num_feats):
plt.subplot(1, num_feats, feature+1)
plt.hist(train_data[:,feature])
plt.title(boston.feature_names[feature])
"""
Explanation: This is getting pretty slow, and it looks like it hasn't yet converged (see the last value of theta, especially). The scale of the features can also make convergence difficult. Let's examine the distributions of the features we're using.
End of explanation
"""
scaler = preprocessing.StandardScaler()
scaler.fit(train_data)
scaled_train_data = scaler.transform(train_data)
scaled_test_data = scaler.transform(test_data)
plt.figure(figsize=(15, 3))
for feature in range(5):
plt.subplot(1, 5, feature+1)
plt.hist(scaled_train_data[:,feature])
plt.title(boston.feature_names[feature])
"""
Explanation: Clearly, the distribution of the feature values varies a great deal. Let's apply the standard scaler -- subtract the mean, divide by the standard deviation -- for each feature. This is built in as a preprocessor in sklearn. We run the fit() function on the training data and then apply the transformation to both train and test data. We don't fit on the test data because this would be cheating -- we shouldn't know in advance the mean and variance of the feature values in the test data, so we assume they are the same as the training data.
End of explanation
"""
start_time = time.time()
theta, costs = gradient_descent(scaled_train_data[:,0:5], train_labels, .01, 5000)
train_time = time.time() - start_time
plt.plot(map(np.log, costs))
plt.xlabel('Iteration'), plt.ylabel('Log Cost')
plt.show()
print 'Training time: %.2f secs' %train_time
print 'Our estimated theta:', theta
print 'OLS estimated theta:', OLS(scaled_train_data[:,0:5], train_labels)
"""
Explanation: Ok, let's try gradient descent again. We can increase the learning rate and decrease the number of iterations.
End of explanation
"""
# Create an augmented training set that has 2 copies of the crime variable.
augmented_train_data = np.c_[scaled_train_data[:,0], scaled_train_data]
# Run gradient descent and OLS and compare the results.
theta, costs = gradient_descent(augmented_train_data[:,0:6], train_labels, .01, 5000)
print 'Our estimated theta:', theta
print 'OLS estimated theta:',
try: print OLS(augmented_train_data[:,0:6], train_labels)
except: print 'ERROR, singular matrix not invertible'
"""
Explanation: Why do we even bother with gradient descent when the closed form solution works just fine?
There are many answers. First, gradient descent is a general-purpose tool; we are applying it to the least squares problem which happens to have a closed form solution, but most other machine learning objectives do not have this luxury.
Also, for some kinds of regularization (L1/lasso, for example) there is no closed-form solution, so an iterative method like gradient descent is required.
Here's one more reason: You can't take the inverse of a singular matrix. That is, if two of our features are co-linear, the inverse function will fail. Gradient descent doesn't have this problem and should learn instead to share the weight appropriately between the two co-linear features. Let's test this by simply adding a copy of the crime feature.
End of explanation
"""
|
diging/methods | 0. Metadata/0.0. Visualizing Metadata.ipynb | gpl-3.0 | import rdflib
import networkx as nx
import os
rdf_path = 'data/example.rdf'
"""
Explanation: 0.0. Visualizing Metadata
RDF (Resource Description Framework) is a data model for information on the internet. It can be used to describe just about anything, but is usually applied to bibliographic collections: representing metadata about published documents.
RDF XML is a grammar/serialization format for representing RDF. There are other ways of representing RDF, like N-triples, Turtle, and JSON-LD.
One of the core concepts of RDF is representing (meta)data as a graph. Every element of the RDF document (a file containing RDF statements) is a node in the graph: articles, people, journals, literals (like volume numbers), etc. These nodes are called resources. Resources are linked together in triples: tri-partite statements consisting of a subject, a predicate, and an object.
In this notebook, we'll convert a simple Zotero RDF/XML document into a GraphML graph (a graph serialization format) and visualize that graph using Cytoscape.
-------- --------------------------------------------
Note This exercise is intended only to introduce you to RDF and graphs, and isn't something that you are likely to do as part of an analysis. There is a sample RDF/XML file included in the data subdirectory, describing a single document. Use that file to start. You can try this with your own RDF if you want, but even a moderate number of documents will lead to extremely large and unwieldy graphs. So, be careful.
-------- --------------------------------------------
End of explanation
"""
with open(rdf_path, 'r') as f:
corrected = f.read().replace('rdf:resource rdf:resource',
'link:link rdf:resource')
# The corrected graph will be saved to a file with `_corrected`
# added to the name. E.g. if the original RDF document was
# called `example.rdf`, the new file will be called
# `example_corrected.rdf`.
base, name = os.path.split(rdf_path)
corrected_name = '%s_corrected_.%s' % tuple(name.split('.'))
corrected_rdf_path = os.path.join(base, corrected_name)
with open(corrected_rdf_path, 'w') as f:
f.write(corrected)
"""
Explanation: Correct Zotero RDF
Zotero isn't exactly a pro at creating valid RDF/XML. The code cell below fixes a known issue with Zotero RDF documents.
End of explanation
"""
rdf_graph = rdflib.Graph()
rdf_graph.load(corrected_rdf_path)
"""
Explanation: Parse RDF
We use the rdflib Python package to parse the corrected RDF document. The code-cell below creates an empty RDF graph, and then reads the triples from the corrected RDF document created above.
End of explanation
"""
graph = nx.DiGraph() # Metadata is `directed`.
for s, p, o in rdf_graph.triples((None, None, None)):
# The .toPython() method converts rdflib objects into objects
# that any Python module can understand (e.g. str, int, float).
    graph.add_edge(s.toPython(),
                   o.toPython(),
                   predicate=p.toPython())  # keyword attrs work in networkx 1.x and 2.x
print 'Added %i nodes and %i edges to the graph' % (graph.order(),
graph.size())
"""
Explanation: Create a GraphML file
GraphML is a popular graph serialization format. My favorite graph visualization tool, Cytoscape, can read GraphML. The NetworkX Python package makes it easy to create GraphML files.
End of explanation
"""
graphml_path = 'output/example.graphml'
nx.write_graphml(graph, graphml_path)
"""
Explanation: The code-cell below will create a new GraphML file that we can import in Cytoscape.
End of explanation
"""
|
LimeeZ/phys292-2015-work | assignments/assignment09/IntegrationEx01.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate
"""
Explanation: Integration Exercise 1
Imports
End of explanation
"""
def trapz(f, a, b, N):
"""Integrate the function f(x) over the range [a,b] with N points."""
N = N+1
a = a
b = b
h = (b-a)/N
k = np.arange(1,N)
return h*(0.5*f(a) + 0.5*f(b) + f(a+k*h).sum())
f = lambda x: x**2
g = lambda x: np.sin(x)
I = trapz(f, 0, 1, 1000)
assert np.allclose(I, 0.33333349999999995)
J = trapz(g, 0, np.pi, 1000)
assert np.allclose(J, 1.9999983550656628)
"""
Explanation: Trapezoidal rule
The trapezoidal rule generates a numerical approximation to the 1d integral:
$$ I(a,b) = \int_a^b f(x) dx $$
by dividing the interval $[a,b]$ into $N$ subdivisions of length $h$:
$$ h = (b-a)/N $$
Note that this means the function will be evaluated at $N+1$ points on $[a,b]$. The main idea of the trapezoidal rule is that the function is approximated by a straight line between each of these points.
Write a function trapz(f, a, b, N) that performs trapezoidal rule on the function f over the interval $[a,b]$ with N subdivisions (N+1 points).
End of explanation
"""
RYANISAWESOME = integrate.quad(f,0,1)[0]
RYANISSTILLAWESOME = integrate.quad(g,0,np.pi)[0]
error1 = np.abs(I - RYANISAWESOME) / RYANISAWESOME
error2 = np.abs(J - RYANISSTILLAWESOME) / RYANISSTILLAWESOME
print("{0} vs scipy.integrate.quad error = {1}".format("I",error1))
print("{0} vs scipy.integrate.quad error = {1}".format("J",error2))
assert True # leave this cell to grade the previous one
"""
Explanation: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
End of explanation
"""
|
ajkavanagh/pyne-sqlalchemy-2015-04 | notebook/ORM Examples.ipynb | gpl-3.0 | from sqlalchemy import create_engine
engine = create_engine('sqlite:///:memory:')
"""
Explanation: SQL Alchemy ORM Examples
So, these are the same as the CORE expression language, but using the ORM toolkit
Create an in memory SQLite database engine
End of explanation
"""
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
"""
Explanation: Create some tables using ORM declarative
End of explanation
"""
from sqlalchemy import Column, Integer, String, MetaData, ForeignKey
class User(Base):
__tablename__ = 'user'
id_user = Column(Integer, primary_key=True)
name = Column(String)
age = Column(Integer)
def __repr__(self):
return "<User(id_user={}, name={}, age={})".format(self.id_user, self.name, self.age)
from sqlalchemy.orm import relationship, backref
class Item(Base):
__tablename__ = 'item'
id_item = Column(Integer, primary_key=True)
id_user = Column(Integer, ForeignKey('user.id_user'))
thing = Column(String)
user = relationship("User", backref=backref('items', order_by=id_item))
def __repr__(self):
return "<Item(id_item={}, id_user={}, thing={})".format(self.id_item, self.id_user, self.thing)
User.__table__
"""
Explanation: declarative_base is a factory that makes a class on which to define ORM classes. We use it to create our models like this:
End of explanation
"""
Base.metadata.create_all(engine)
"""
Explanation: Now create the tables in the engine
This is the equivalent of metadata.create_all(engine).
End of explanation
"""
u = User(id_user=1, name="Billy", age=40)
print(u)
"""
Explanation: Create a User instance - this is just in Python memory - not in the DB!
End of explanation
"""
from sqlalchemy.orm import sessionmaker
Session = sessionmaker(bind=engine)
session = Session()
"""
Explanation: Sessions
And now for something different. We need to talk about sessions.
End of explanation
"""
people = [
(1, 'Bob', '20'),
(2, 'Sally', '25'),
(3, 'John', '30')]
for (id_user, name, age) in people:
u = User(id_user=id_user, name=name, age=age)
session.add(u)
"""
Explanation: Let's add a some users
End of explanation
"""
u1 = session.query(User).get(1)
print(u1)
"""
Explanation: Let's query for a user
End of explanation
"""
count = session.query(User).count()
print(count)
"""
Explanation: So this is similar to:
python
user_tuple = connection.execute(select([user]).where(user.c.id_user == 1)).fetchone()
And we can also count the users in the table:
End of explanation
"""
items = (
(1, 1, 'Peanuts'),
(2, 1, 'VW'),
(3, 1, 'iPad'),
(4, 2, 'Raisins'),
(5, 2, 'Fiat'),
(6, 2, 'Nexus 10'),
(7, 2, 'Timex'),
(8, 3, 'Caviar'),
(9, 3, 'Porche'),
(10, 3, 'Surface Pro'),
(11, 3, 'Rolex'),
(12, 3, 'Boat'),
(13, 3, 'Plane'))
for (id_item, id_user, thing) in items:
i = Item(id_item=id_item, id_user=id_user, thing=thing)
session.add(i)
print(session.query(Item).count())
"""
Explanation: Let's add the items to the database too
End of explanation
"""
john = session.query(User).get(3)
print(john)
for i in john.items:
print(i)
item1 = john.items[0]
print(item1)
print(item1.user)
"""
Explanation: Inspecting the data
As we're in the domain model now, we need to look at things like objects. Let's look at the John User() item and see what's there:
End of explanation
"""
for (u, i) in session.query(User, Item).filter(User.id_user == Item.id_user).all():
print(u, i)
"""
Explanation: Let's list out all of the users and items:
End of explanation
"""
u = session.query(User).join(Item).filter(Item.thing.ilike('timex')).one()
print(u)
"""
Explanation: Let's find the user who has a Timex:
End of explanation
"""
from sqlalchemy import func
results = session.query(User, func.count(Item.id_item)).join(Item).group_by(Item.id_user).all()
for r in results:
print(r)
"""
Explanation: How about func.count() and friends?
End of explanation
"""
|
scoaste/showcase | movie-lens/MovielensRecommendations.ipynb | mit | from hdfs import InsecureClient
from pyspark import SparkContext, SparkConf
import urllib
import zipfile
# the all important Spark context
conf = (SparkConf()
.setMaster('yarn-client')
.setAppName('Movielens Prediction Model')
)
sc = SparkContext(conf=conf)
# set to True to redownload the data and retrain the prediction model
retrain_model = True
# data source URLs
dataset_url = 'http://files.grouplens.org/datasets/movielens'
small_dataset_url = dataset_url + '/ml-latest-small.zip'
complete_dataset_url = dataset_url + '/ml-latest.zip'
# data local file system destination names
datasets_path = '/home/ste328/pyspark/movielens'
small_dataset_path = datasets_path + '/ml-latest-small'
complete_dataset_path = datasets_path + '/ml-latest'
small_dataset_zip = small_dataset_path + '.zip'
complete_dataset_zip = complete_dataset_path + '.zip'
# data HDFS paths
datasets_hdfs_path = '/user/ste328/spark/movielens'
# HDFS client
client = InsecureClient('http://devctlvhadapp02.iteclientsys.local:50070', user='ste328')
"""
Explanation: Movielens predictions using pyspark and mllib
Define imports and some initial variables
End of explanation
"""
if(retrain_model):
# Retrieve the data archives to local storage
(small_dataset_filename, small_dataset_headers) = urllib.urlretrieve(small_dataset_url, small_dataset_zip)
(complete_dataset_filename, complete_dataset_headers) = urllib.urlretrieve(complete_dataset_url, complete_dataset_zip)
print small_dataset_filename
print complete_dataset_filename
# Unzip the files
with zipfile.ZipFile(small_dataset_filename, 'r') as z:
z.extractall(datasets_path)
with zipfile.ZipFile(complete_dataset_filename, 'r') as z:
z.extractall(datasets_path)
# Copy the unzipped files to HDFS
small_dataset_hdfs_path = client.upload(datasets_hdfs_path, small_dataset_path, overwrite=True)
complete_dataset_hdfs_path = client.upload(datasets_hdfs_path, complete_dataset_path, overwrite=True)
else:
small_dataset_hdfs_path = '/user/ste328/spark/movielens/ml-latest-small'
complete_dataset_hdfs_path = '/user/ste328/spark/movielens/ml-latest'
print small_dataset_hdfs_path
print complete_dataset_hdfs_path
"""
Explanation: Retrieve the latest movie data and write it to the local file system
End of explanation
"""
if(retrain_model):
# ('userId', 'movieId', 'rating', 'timestamp')
small_ratings_raw_data = sc.textFile(small_dataset_hdfs_path + '/ratings.csv')
small_ratings_raw_data_header = small_ratings_raw_data.take(1)[0]
small_ratings_data = small_ratings_raw_data\
.filter(lambda line: line != small_ratings_raw_data_header)\
.map(lambda line: line.split(","))\
.map(lambda tokens: (int(tokens[0]), int(tokens[1]), float(tokens[2])))\
.cache().coalesce(1000, shuffle=True)
print small_ratings_data.take(1)
"""
Explanation: Read in the small data
End of explanation
"""
if(retrain_model):
# training 60%, validation 20%, test 20%
(training_RDD, validation_RDD, test_RDD) = small_ratings_data.randomSplit([6, 2, 2], seed=0L)
# remove 'rating' for validation and test predictions
validation_for_predict_RDD = validation_RDD.map(lambda x: (x[0], x[1]))
test_for_predict_RDD = test_RDD.map(lambda x: (x[0], x[1]))
print training_RDD.take(1)
print validation_RDD.take(1)
print test_RDD.take(1)
print validation_for_predict_RDD.take(1)
print test_for_predict_RDD.take(1)
"""
Explanation: Split the small ratings data into training, validation, & test
End of explanation
"""
from pyspark.mllib.recommendation import ALS
import math
import numpy as np
if(retrain_model):
seed = 5L
iterations = 15
regularization_parameters = np.linspace(0.1, 0.25, 4, dtype=float)
ranks = np.linspace(2, 5, 4, dtype=int)
min_error = float('inf') #infinity
best_rank = -1
best_regularization_parameter = -1
for regularization_parameter in regularization_parameters:
for rank in ranks:
model = ALS.train(training_RDD, rank, seed=seed, iterations=iterations, lambda_=regularization_parameter)
predictions = model.predictAll(validation_for_predict_RDD).map(lambda r: ((r[0], r[1]), r[2]))
rates_and_preds = validation_RDD.map(lambda r: ((r[0], r[1]), r[2])).join(predictions)
error = math.sqrt(rates_and_preds.map(lambda r: (r[1][0] - r[1][1])**2).mean())
print 'For regularization parameter %s and rank %s the RMSE is %s' % (regularization_parameter, rank, error)
if error < min_error:
min_error = error
best_rank = rank
best_regularization_parameter = regularization_parameter
print 'The best model was trained with regularization parameter %s and rank %s' % (best_regularization_parameter, best_rank)
"""
Explanation: Train the prediction model using the Alternating Least Squares (ALS) algorithm
End of explanation
"""
if(retrain_model):
model = ALS.train(training_RDD, best_rank, seed=seed, iterations=iterations, lambda_=best_regularization_parameter)
predictions = model.predictAll(test_for_predict_RDD).map(lambda r: ((r[0], r[1]), r[2]))
rates_and_preds = test_RDD.map(lambda r: ((r[0], r[1]), r[2])).join(predictions)
error = math.sqrt(rates_and_preds.map(lambda r: (r[1][0] - r[1][1])**2).mean())
print 'For testing data the RMSE is %s' % (error)
"""
Explanation: Test the best ranked model
End of explanation
"""
if(retrain_model):
small_ratings_data.unpersist()
%reset_selective -f small_ratings_raw_data
%reset_selective -f small_ratings_raw_data_header
%reset_selective -f small_ratings_data
%reset_selective -f training_RDD
%reset_selective -f validation_RDD
%reset_selective -f test_RDD
%reset_selective -f validation_for_predict_RDD
%reset_selective -f test_for_predict_RDD
%reset_selective -f model
%reset_selective -f predictions
%reset_selective -f rates_and_preds
"""
Explanation: Reset variables and uncache the small dataset to conserve memory.
End of explanation
"""
# ('userId', 'movieId', 'rating', 'timestamp')
complete_ratings_raw_data = sc.textFile(complete_dataset_hdfs_path + '/ratings.csv')
complete_ratings_raw_data_header = complete_ratings_raw_data.take(1)[0]
# Create more partitions for this RDD to save on memory usage.
complete_ratings_data_RDD = complete_ratings_raw_data\
.filter(lambda line: line != complete_ratings_raw_data_header)\
.map(lambda line: line.split(","))\
.map(lambda tokens: (int(tokens[0]), int(tokens[1]), float(tokens[2]))).cache().coalesce(1000, shuffle=True)
print "There are %s recommendations in the complete dataset" % (complete_ratings_data_RDD.count())
print complete_ratings_data_RDD.take(1)
"""
Explanation: Read in the complete ratings dataset
(NOTE: make sure Spark is running in YARN cluster or client mode (using Python 2.7) or likely Java will run out of heap.)
End of explanation
"""
# ('movieId', 'title', 'genres')
complete_movies_raw_data = sc.textFile(complete_dataset_hdfs_path + '/movies.csv')
complete_movies_raw_data_header = complete_movies_raw_data.take(1)[0]
# Create more partitions for this RDD to save on memory usage.
complete_movies_data_RDD = complete_movies_raw_data\
.filter(lambda line: line != complete_movies_raw_data_header)\
.map(lambda line: line.split(","))\
.map(lambda tokens: (int(tokens[0]), tokens[1])).cache().coalesce(1000, shuffle=True)
print "There are %s movies in the complete dataset" % (complete_movies_data_RDD.count())
print complete_movies_data_RDD.take(1)
"""
Explanation: Read in the complete movies dataset
End of explanation
"""
def get_counts_and_averages(movieID_and_ratings_tuple):
num_ratings = len(movieID_and_ratings_tuple[1])
# (movieId, (count, average))
return movieID_and_ratings_tuple[0], (num_ratings, float(sum(movieID_and_ratings_tuple[1]))/num_ratings)
# ('userId', 'movieId', 'rating', 'timestamp')
movie_ID_with_ratings_RDD = complete_ratings_data_RDD.map(lambda x: (x[1], x[2])).groupByKey()
movie_ID_with_ratings_aggregates_RDD = movie_ID_with_ratings_RDD.map(get_counts_and_averages)
movie_ID_with_ratings_aggregates_RDD.take(1)
complete_movies_with_aggregates_RDD = complete_movies_data_RDD.join(movie_ID_with_ratings_aggregates_RDD)
complete_movies_with_aggregates_RDD.take(1)
"""
Explanation: Count and average the ratings and join them to the movies (for prediction selection)
End of explanation
"""
retrain_model = True  # re-set here in case the kernel was restarted and earlier cell state was lost
if(retrain_model):
training_RDD, test_RDD = complete_ratings_data_RDD.randomSplit([7, 3], seed=0L)
complete_model = ALS.train(training_RDD, best_rank, seed=seed, iterations=iterations,
lambda_=best_regularization_parameter)
"""
Explanation: Train the final recommender model using the complete ratings dataset
End of explanation
"""
from pyspark.mllib.recommendation import MatrixFactorizationModel as mfm
model_path = '/user/ste328/spark/movielens/models/als'
if(retrain_model):
client.delete(model_path, True)
complete_model.save(sc, model_path)
%reset_selective -f complete_model
complete_model = mfm.load(sc, model_path)
"""
Explanation: Save and reload the recommendation model
End of explanation
"""
user_ID = 470
# ('userId', 'movieId', 'rating', 'timestamp')
user_unrated_movies_RDD = complete_ratings_data_RDD\
.filter(lambda x: x[0] != user_ID)\
.map(lambda x: (user_ID, x[1]))\
.distinct()
user_unrated_movies_RDD.take(1)
"""
Explanation: Get sample recommendations for a user
Get a tuple of user ID and movie ID for movies not rated by this sample user
End of explanation
"""
user_movie_predictions_RDD = complete_model.predictAll(user_unrated_movies_RDD)
user_movie_predictions_RDD.take(1)
"""
Explanation: Get predictions
End of explanation
"""
movie_predictions_RDD = user_movie_predictions_RDD\
.map(lambda x: (x.product, x.rating))\
.join(complete_movies_with_aggregates_RDD)
movie_predictions_RDD.take(1)
"""
Explanation: Join the movie predictions with their titles, ratings counts, and ratings average.
End of explanation
"""
# (0=movie prediction, 1=rating prediction, 2=title, 3=count, 4=average)
movie_predictions_flat_RDD = movie_predictions_RDD\
.map(lambda x: (x[0], x[1][0], x[1][1][0], x[1][1][1][0], x[1][1][1][1]))\
.filter(lambda x: x[3] >= 25 and x[1] > x[4])\
.takeOrdered(10, key = lambda x: -x[4])
print '\n'.join(map(str,movie_predictions_flat_RDD))
"""
Explanation: Flatten out the nested tuples, keep only movies with at least 25 ratings whose predicted rating exceeds their average rating, and take the 10 movies with the highest average rating
End of explanation
"""
|
geilerloui/deep-learning | image-classification/dlnd_image_classification.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalized data
"""
# TODO: Implement Function
x_min = np.min(x)
x_max = np.max(x)
norm_x = (x - x_min) / (x_max - x_min)
return norm_x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
from sklearn import preprocessing
lb_encoding = None
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
global lb_encoding
if lb_encoding is not None:
return lb_encoding.transform(x)
else:
lb = preprocessing.LabelBinarizer()
lb_encoding = lb.fit(x)
print(lb_encoding.transform(x))
return lb_encoding.transform(x)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
# Prepend None for a dynamic batch dimension; * unpacks image_shape into the list
dimension = [None, *image_shape]
return tf.placeholder(tf.float32, dimension, name='x')
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, [None, n_classes], name='y')
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, name='keep_prob')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# TODO: Implement Function
#Weights and bias
weight = tf.Variable(tf.truncated_normal(
[conv_ksize[0], conv_ksize[1], x_tensor.get_shape().as_list()[3], conv_num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros(conv_num_outputs))
# Apply convolution
conv_layer = tf.nn.conv2d(x_tensor, weight, strides=[1, conv_strides[0], conv_strides[1],1], padding='SAME')
# Add bias
conv_layer = tf.nn.bias_add(conv_layer, bias)
# Apply activation function
conv_layer = tf.nn.relu(conv_layer)
# Apply Max Pooling
conv_layer = tf.nn.max_pool(
conv_layer,
ksize = [1, pool_ksize[0], pool_ksize[1], 1],
strides = [1, pool_strides[0], pool_strides[1], 1],
padding = 'SAME')
return conv_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
width = int(x_tensor.shape[1])
height = int(x_tensor.shape[2])
depth = int(x_tensor.shape[3])
image_flat_size = width * height * depth
return tf.reshape(x_tensor, [-1, image_flat_size])
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
dense = tf.layers.dense(inputs=x_tensor,
units=num_outputs,
activation=tf.nn.relu
)
return dense
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
dense = tf.layers.dense(inputs=x_tensor,
units=num_outputs)
return dense
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs=10
conv_ksize=(2,2)
conv_strides=(2,2)
pool_ksize=(2,2)
pool_strides=(2,2)
conv = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv = conv2d_maxpool(conv, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv = conv2d_maxpool(conv, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flat = flatten(conv)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
# Chain the layers (the original reassigned conv1 from x three times, so only
# one layer took effect) and use tf.nn.dropout, which takes a keep probability
# directly; tf.layers.dropout takes a *drop* rate and is inactive by default.
fc = fully_conn(flat, 10)
fc = tf.nn.dropout(fc, keep_prob)
fc = fully_conn(fc, 10)
fc = tf.nn.dropout(fc, keep_prob)
fc = fully_conn(fc, 10)
fc = tf.nn.dropout(fc, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
out = output(fc, 10)
# TODO: return output
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# TODO: Implement Function
session.run(optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability
})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
# TODO: Implement Function
loss = session.run(cost, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: 1.0})
valid_acc = session.run(accuracy, feed_dict={
x: valid_features,
y: valid_labels,
keep_prob: 1.0})
print("Loss: {:>10.4f} Accuracy: {:.6f}".format(loss, valid_acc))
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 150
batch_size = 128
keep_probability = 0.8
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people use common power-of-two sizes:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
|
Yu-Group/scikit-learn-sandbox | jupyter/29_iRF_demo_sklearn.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
import numpy as np
from functools import reduce
# Needed for the scikit-learn wrapper function
from sklearn.tree import irf_utils
from sklearn.ensemble import RandomForestClassifier
from math import ceil
# Import our custom utilities
from imp import reload
from utils import irf_jupyter_utils
reload(irf_jupyter_utils)
"""
Explanation: Demo of the scikit-learn fork iRF
The following is a demo of the scikit-learn iRF code
Typical Setup
Import the required dependencies
In particular irf_utils and irf_jupyter_utils
End of explanation
"""
breast_cancer = load_breast_cancer()  # renamed so the imported loader isn't shadowed
X_train, X_test, y_train, y_test, rf = irf_jupyter_utils.generate_rf_example(n_estimators=20,
feature_weight=None)
"""
Explanation: Step 1: Fit the Initial Random Forest
Just fit every feature with equal weights per the usual random forest code e.g. RandomForestClassifier in scikit-learn
End of explanation
"""
print("Training feature dimensions", X_train.shape, sep = ":\n")
print("\n")
print("Training outcome dimensions", y_train.shape, sep = ":\n")
print("\n")
print("Test feature dimensions", X_test.shape, sep = ":\n")
print("\n")
print("Test outcome dimensions", y_test.shape, sep = ":\n")
print("\n")
print("first 2 rows of the training set features", X_train[:2], sep = ":\n")
print("\n")
print("first 2 rows of the training set outcomes", y_train[:2], sep = ":\n")
"""
Explanation: Check out the data
End of explanation
"""
all_rf_tree_data = irf_utils.get_rf_tree_data(
rf=rf, X_train=X_train, X_test=X_test, y_test=y_test)
"""
Explanation: Step 2: Get all Random Forest and Decision Tree Data
Extract in a single dictionary the random forest data and for all of it's decision trees
This is as required for RIT purposes
End of explanation
"""
np.random.seed(12)
all_rit_tree_data = irf_utils.get_rit_tree_data(
all_rf_tree_data=all_rf_tree_data,
bin_class_type=1,
M=100,
max_depth=2,
noisy_split=False,
num_splits=2)
"""
Explanation: STEP 3: Get the RIT data and produce RITs
End of explanation
"""
# Print the feature ranking
print("Feature ranking:")
feature_importances_rank_idx = all_rf_tree_data['feature_importances_rank_idx']
feature_importances = all_rf_tree_data['feature_importances']
for f in range(X_train.shape[1]):
print("%d. feature %d (%f)" % (f + 1
, feature_importances_rank_idx[f]
, feature_importances[feature_importances_rank_idx[f]]))
"""
Explanation: Perform Manual CHECKS on the irf_utils
These should be converted to unit tests and checked with nosetests -v test_irf_utils.py
Step 4: Plot some Data
List Ranked Feature Importances
End of explanation
"""
# Plot the feature importances of the forest
feature_importances_std = all_rf_tree_data['feature_importances_std']
plt.figure()
plt.title("Feature importances")
plt.bar(range(X_train.shape[1])
, feature_importances[feature_importances_rank_idx]
, color="r"
, yerr = feature_importances_std[feature_importances_rank_idx], align="center")
plt.xticks(range(X_train.shape[1]), feature_importances_rank_idx)
plt.xlim([-1, X_train.shape[1]])
plt.show()
"""
Explanation: Plot Ranked Feature Importances
End of explanation
"""
# Now plot the trees individually
irf_jupyter_utils.draw_tree(decision_tree = all_rf_tree_data['rf_obj'].estimators_[0])
"""
Explanation: Decision Tree 0 (First) - Get output
Check the output against the decision tree graph
End of explanation
"""
#irf_jupyter_utils.pretty_print_dict(inp_dict = all_rf_tree_data['dtree0'])
# Count the number of samples passing through the leaf nodes
sum(all_rf_tree_data['dtree0']['tot_leaf_node_values'])
"""
Explanation: Compare to our dict of extracted data from the tree
End of explanation
"""
#irf_jupyter_utils.pretty_print_dict(inp_dict = all_rf_tree_data['dtree0']['all_leaf_paths_features'])
"""
Explanation: Check output against the diagram
End of explanation
"""
all_rf_weights, all_K_iter_rf_data, \
all_rf_bootstrap_output, all_rit_bootstrap_output, \
stability_score = irf_utils.run_iRF(X_train=X_train,
X_test=X_test,
y_train=y_train,
y_test=y_test,
K=5,
n_estimators=20,
B=30,
random_state_classifier=2018,
propn_n_samples=.2,
bin_class_type=1,
M=20,
max_depth=5,
noisy_split=False,
num_splits=2,
n_estimators_bootstrap=5)
stability_score
"""
Explanation: Run the iRF function
We will run the iRF with the following parameters
Data:
breast cancer binary classification data
random state (for reproducibility): 2018
Weighted RFs
K: 5 iterations
number of trees: 20
Bootstrap RFs
proportion of bootstrap samples: 20%
B: 30 bootstrap samples
number of trees (bootstrap RFs): 5 iterations
RITs (on the bootstrap RFs)
M: 20 RITs per forest
filter label type: 1-class only
Max Depth: 5
Noisy Split: False
Number of splits at Node: 2 splits
Running the iRF is easy - single function call
All of the bootstrap, RIT complexity is covered through the key parameters passed through
in the main algorithm (as listed above)
This function call returns the following data:
all RF weights
all the K RFs that are iterated over
all of the B bootstrap RFs that are run
all the B*M RITs that are run on the bootstrap RFs
the stability score
This is a lot of data returned!
Will be useful when we build the interface later
Let's run it!
End of explanation
"""
irf_jupyter_utils._get_histogram(stability_score, sort = True)
"""
Explanation: Examine the stability scores
End of explanation
"""
for k in range(5):
iteration = "rf_iter{}".format(k)
feature_importances_std = all_K_iter_rf_data[iteration]['feature_importances_std']
feature_importances_rank_idx = all_K_iter_rf_data[iteration]['feature_importances_rank_idx']
feature_importances = all_K_iter_rf_data[iteration]['feature_importances']
plt.figure(figsize=(8, 6))
title = "Feature importances; iteration = {}".format(k)
plt.title(title)
plt.bar(range(X_train.shape[1])
, feature_importances[feature_importances_rank_idx]
, color="r"
, yerr = feature_importances_std[feature_importances_rank_idx], align="center")
plt.xticks(range(X_train.shape[1]), feature_importances_rank_idx, rotation='vertical')
plt.xlim([-1, X_train.shape[1]])
plt.show()
"""
Explanation: That's interesting - features 22, 27, 20, and 23 keep popping up!
We should probably look at the feature importances to understand if there is a useful correlation
Examine feature importances
In particular, let us see how they change over the K iterations of random forest
End of explanation
"""
irf_jupyter_utils.pretty_print_dict(all_K_iter_rf_data['rf_iter4']['rf_validation_metrics'])
# Now plot the trees individually
irf_jupyter_utils.draw_tree(decision_tree = all_K_iter_rf_data['rf_iter4']['rf_obj'].estimators_[0])
"""
Explanation: Some Observations
Note that after 5 iterations, the most important features were found to be 22, 27, 7, and 23
Now also recall that the most stable interactions were found to be '22_27', '7_22', '7_22_27', '23_27', '7_27', '22_23_27'
Given the overlap between these two plots, the results are not unreasonable here.
Explore iRF Data Further
We can look at the decision paths of the Kth RF
Let's look at the final iteration RF - the key validation metrics
End of explanation
"""
irf_jupyter_utils.pretty_print_dict(
all_K_iter_rf_data['rf_iter4']['dtree0']['all_leaf_paths_features'])
"""
Explanation: We can get this data quite easily in a convenient format
End of explanation
"""
irf_jupyter_utils.pretty_print_dict(
all_K_iter_rf_data['rf_iter4']['dtree0']['all_leaf_node_values'])
"""
Explanation: This checks nicely against the plotted diagram above.
In fact - we can go further and plot some interesting data from the Decision Trees
- This can help us understand variable interactions better
End of explanation
"""
irf_jupyter_utils._hist_features(all_K_iter_rf_data['rf_iter4'], n_estimators = 20, \
title = 'Frequency of features along decision paths : iteration = 4')
"""
Explanation: We can also look at the frequency that a feature appears along a decision path
End of explanation
"""
all_K_iter_rf_data.keys()
print(all_K_iter_rf_data['rf_iter0']['feature_importances'])
"""
Explanation: The most common features that appeared were 27, 22, 23, and 7. This matches well with the feature importance plot above.
Run some Sanity Checks
Run iRF for just 1 iteration - should be the uniform sampling version
This is just a sanity check: the feature importances from iRF after 1 iteration should match the feature importance from running a standard RF
End of explanation
"""
rf = RandomForestClassifier(n_estimators=20, random_state=2018)
rf.fit(X=X_train, y=y_train)
print(rf.feature_importances_)
"""
Explanation: Compare to the original single fitted random forest
End of explanation
"""
#all_rf_weights['rf_weight1']
#all_K_iter_rf_data
#all_rf_bootstrap_output
#all_rit_bootstrap_output
#stability_score
"""
Explanation: And they match perfectly as expected.
End of explanation
"""
|
ioggstream/python-course | ansible-101/notebooks/06_bastion_and_ssh.ipynb | agpl-3.0 | cd /notebooks/exercise-06/
"""
Explanation: Bastion hosts
There are many reasons for using bastion hosts:
secure access, e.g. in a cloud environment
VPN, e.g. via Windows hosts
The latter case is quite boring, as Ansible doesn't support Windows as a client platform.
A standard approach is:
have an ssh server or a proxy installed on the bastion
connect the bastion to the remote network (e.g. via VPN)
configure ssh options in Ansible to connect through the bastion
We'll do this via two configuration files:
a standard ssh_config where we put the passthru configuration
a simple ansible.cfg referencing ssh_config
This approach allows us:
to test the standard ssh connection through the bastion without messing with Ansible
to keep ansible.cfg simple in case we want to reuse it from the intranet (e.g. without traversing the bastion)
End of explanation
"""
!cat ssh_config
"""
Explanation: ssh_config
Instead of continuously passing options to ssh, we can use -F ssh_config and put configurations there.
End of explanation
"""
fmt=r'{{.NetworkSettings.IPAddress}}'
!docker -H tcp://172.17.0.1:2375 inspect ansible101_bastion_1 --format {fmt} # pass variables *before* commands ;)
"""
Explanation: If we don't use it, we can turn off GSSAPIAuthentication, whose authentication attempts may slow down the connection.
Unsecure by design
Inhibiting PKI authentication is insecure by design:
passwords will surely end up in cleartext files
people end up doing things like the following
```
the password is sent to the bastion via a
cleartext file.
Match Host 172.25.0.*
ProxyCommand sshpass -f cleartext-bastion-password ssh -F config jump@bastion -W %h:%p
```
Connect to the bastion
Test connectivity to the bastion. Check your host ips and modify ssh_config accordingly.
Replace ALL bastion occurrences, including the one below the BEWARE note
End of explanation
"""
# Use this cell to create the pin file and then encrypt the vault
# Use this cell to test/run the playbook. You can --limit the execution to the bastion host only.
!ssh -Fssh_config bastion hostname
"""
Explanation: Exercise
Write the ssh-copy-id.yml playbook to install an ssh key to the bastion.
Bastion credentials are:
user: root
password: root
Try to do it without watching the previous exercises:
modify the empty ansible.cfg
referencing a pin file
passing [ssh_connection] arguments to avoid ssh key mismatches
pointing to the local inventory
store credentials in the encrypted vault.yml.
provide an inventory file
You can reuse the old id_ansible key or:
create a new one and adjust the reference in ssh_config
Hint:
if you provide an IdentityFile, password authentication won't work on the bastion node;
you must copy the ssh id file using password authentication and possibly clean up your known_hosts file
End of explanation
"""
fmt=r'{{.NetworkSettings.IPAddress}}'
!docker -H tcp://172.17.0.1:2375 inspect ansible101_web_1 --format {fmt} # pass variables *before* commands ;)
!ssh -F ssh_config root@172.17.0.4 ip -4 -o a # get host ip
"""
Explanation: ansible.cfg and ssh_config
In the previous exercise, we used the [ssh_connection] stanza to configure ssh connections.
We can instead just set
[ssh_connection]
ssh_args = -F ssh_config
Write everything in ssh_config.
Connecting via the bastion in Ansible while avoiding multiple references to ssh_config
Exercise
Uncomment the last lines of ssh_config and try to use bastion for connecting to the other hosts
End of explanation
"""
|
hektor-monteiro/python-notebooks | aula-6_graficos.ipynb | gpl-2.0 | # this instruction makes the plots appear in the notebook itself
%matplotlib inline
import matplotlib.pyplot as plt
y = [ 1.0, 2.4, 1.7, 0.3, 0.6, 1.8 ]
plt.plot(y)
plt.show()
# in general we will have data in x and y
import matplotlib.pyplot as plt
import numpy as np
x = [ 0.5, 1.0, 2.0, 4.0, 7.0, 10.0 ]
y = [ 1.0, 2.4, 1.7, 0.3, 0.6, 1.8 ]
plt.plot(x,y)
plt.plot(np.array(x)+1.5,y)
plt.show()
"""
Explanation: Plots
Let's learn the Python tools for plotting numerical data.
The main plotting package we will use is matplotlib:
http://matplotlib.org/
This package offers countless conveniences, in particular its plot gallery:
http://matplotlib.org/gallery.html
The gallery contains examples of the most varied plot types, with the source code ready to use.
Three types of plots are especially useful in physics: line plots, scatter plots, and density plots (contour or level-curve style).
For plots, it is usually more useful to import the whole library of routines:
End of explanation
"""
%matplotlib notebook
import matplotlib.pyplot as plt
x = [ 0.5, 1.0, 2.0, 4.0, 7.0, 10.0 ]
y = [ 1.0, 2.4, 1.7, 0.3, 0.6, 1.8 ]
plt.plot(x,y)
plt.show()
"""
Explanation: The use of plt.show() is always necessary to display the plot. Python uses this strategy to make it easier to plot several figures.
In the example below, the plot is drawn in a window outside the notebook. This is the standard behavior in Python.
The window lets you control other attributes of the plots.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0,10,100) # creates an array with 100 elements from 0 to 10
y = np.sin(x)
plt.plot(x,y)
plt.show()
"""
Explanation: Let's look at a more interesting example, where we plot a given function
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
# read the file
data = np.loadtxt("dados.txt",float)
print(data.shape, type(data))
# assign the variables
x = data[:,0]
y = data[:,1]
# plot the data
plt.plot(x,y)
plt.show()
"""
Explanation: Note how we used numpy's linspace function to generate an array of x values, and numpy's sine function, a special version that works on arrays, computing the sine of each element
Note that the plot is not just a curve. We computed the sine at 100 points, plotted those points, and the plot function drew a line connecting them
Reading data from a file
One of the most common tasks in physics is acquiring experimental data that will later be analyzed through plots.
Python has several tools to handle this kind of situation
If the data is in a simple ASCII file, the most efficient way to import it is with numpy's loadtxt function
The loadtxt function is very flexible and has many options. For more details see: http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html
It is nevertheless reasonably smart at automatically interpreting the most common file formats
see the example below:
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
# read the file
data = np.loadtxt("dados.txt",float)
plt.plot(data[:,0],data[:,1])
plt.show()
"""
Explanation: We could have done the same thing without defining new variables:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
xpoints = []
ypoints = []
for x in np.linspace(0,10,100):
xpoints.append(x)
ypoints.append(np.sin(x))
plt.plot(xpoints,ypoints)
plt.show()
"""
Explanation: An interesting technique for building plots in Python is to use lists that are updated as the calculations are carried out.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0,10,20)
y1 = np.sin(x)
y2 = np.cos(x)
plt.plot(x,y1,"ob--")
plt.plot(x,y2,"r--")
plt.ylim(1.5,-1.5)
plt.xlabel("eixo x $\Omega$")
plt.ylabel("y=sin(x) ou y=cos(x) $\int \pi$")
plt.show()
"""
Explanation: there are many options for fine-tuning the plots:
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
data = np.loadtxt("stars.txt",float)
x = data[:,0]
y = data[:,1]
plt.scatter(x,y)
plt.xlabel("Temperature")
plt.ylabel("Magnitude")
plt.xlim(0,13000)
plt.ylim(-5,20)
plt.show()
import matplotlib.pyplot as plt
import numpy as np
data = np.loadtxt("stars.txt",float)
x = data[:,0]
y = data[:,1]
plt.figure(1)
plt.scatter(x,y,marker='+') # how to change the marker with the scatter function
plt.xlabel("Temperature")
plt.ylabel("Magnitude")
plt.xlim(0,13000)
plt.ylim(-5,20)
plt.figure(2)
#plt.scatter(x,y,marker='o',c=x) # how to change the marker with the scatter function
#plt.scatter(x,y,marker='o',s=y) # how to change the marker with the scatter function
plt.scatter(x,y,marker='o',s=y,c=x) # how to change the marker with the scatter function
plt.xlabel("Temperature")
plt.ylabel("Magnitude")
plt.xlim(0,13000)
plt.ylim(-5,20)
plt.show()
# interesting capabilities of the scatter function
import matplotlib.pyplot as plt
import numpy as np
x = np.random.rand(30)
y = np.random.rand(30)
prop1 = 50./(x**2 + y**2)
prop2 = np.sqrt(x**2 + y**2)
plt.subplot(321)
#plt.scatter(x,y)
plt.scatter(x, y, s=prop1, c=prop2, marker="^",alpha=0.1)
"""
Explanation: As shown above, we can vary the style of the plotted lines
To do this, a third argument is added to the plot call: a format string
The first character sets the color to be used; the options are r, g, b, c, m, y, k, and w for red, green, blue, cyan, magenta, yellow, black, and white, respectively.
The second part sets the line style to be used:
| linestyle | description |
| ---------------- |:------------------:|
|'-' or 'solid' | solid line |
|'--' or 'dashed' | dashed line |
|'-.' or 'dashdot' | dash-dotted line|
|':' or 'dotted' | dotted line |
|'None' | draw nothing |
|' ' | draw nothing |
|'' | draw nothing |
Often we don't want to plot curves of the form y=f(x), but rather a set of experimental data where both x and y were measured
For that we use scatter plots
End of explanation
"""
import matplotlib.pyplot as plt # carrega as funções para gráficos do matplotlib
import numpy as np
import matplotlib.image as mpimg # used to import an image with standard matplotlib tools
img=mpimg.imread('stinkbug.png')
print(type(img), img.shape, img.dtype)
# to plot the image we use imshow
# since the image is in principle in RGB format, we have 3 channels, hence the array shape (375, 500, 3)
plt.imshow(img[:,:,0], cmap="hot")
plt.colorbar()
"""
Explanation: Another important function is imshow, used to display images, generally defined by two-dimensional data arrays
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
# Example data generated randomly
mu = 100 # mean of the distribution
sigma = 15 # standard deviation of the distribution
x = mu + sigma * np.random.randn(1000) # sample from the specified normal distribution
# number of bins to be used in the histogram
num_bins = 50
# make the histogram of the data (density=True replaces the old normed=1)
n, bins, patches = plt.hist(x, num_bins, density=True, facecolor='green', alpha=0.8)
# show the original Gaussian distribution (mlab.normpdf was removed from matplotlib)
y = np.exp(-(bins - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
plt.plot(bins, y, 'r--')
plt.xlabel('Smarts')
plt.ylabel('Probability')
plt.title(r'Histogram of IQ: $\mu=100$, $\sigma=15$')
# Tweak spacing to prevent clipping of ylabel
plt.subplots_adjust(left=0.15)
plt.show()
"""
Explanation: A type of plot heavily used in physics, especially experimental physics, is the histogram. Below is a basic example of how to make this kind of plot with Python
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
# simulated data
x = np.arange(0.1, 4, 0.5)
y = np.exp(-x)
# simulated example of errors that vary with the variable x
error = 0.1 + 0.2 * x
# example of asymmetric errors
lower_error = 0.4 * error
upper_error = error
asymmetric_error = [lower_error, upper_error]
plt.figure(1)
#plt.errorbar(x, y, yerr=error)
#plt.errorbar(x, y, yerr=error, fmt='-ob')
plt.errorbar(x, y, yerr=error,fmt='ob', ecolor='g', capthick=10)
plt.title('symmetric errors')
plt.figure(2)
plt.errorbar(x, y, xerr=asymmetric_error, fmt='or')
plt.title('asymmetric errors')
#plt.set_yscale('log')
plt.show()
import matplotlib.pyplot as plt
import numpy as np
data = np.loadtxt("circular.txt",float)
print(data.shape)
plt.imshow(data)
plt.show()
"""
Explanation: Another important type is plots where we show the errors of a set of measurements
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
data = np.loadtxt("circular.txt",float)
plt.imshow(data,cmap='hot')
plt.show()
import matplotlib.pyplot as plt
import numpy as np
data = np.loadtxt("circular.txt",float)
# how to change the origin and the scale of the plot
plt.imshow(data,origin="lower",cmap='viridis',extent=[0,10,0,10])
plt.show()
"""
Explanation: Plots made with imshow can use several color scales: jet, gray, hot, hsv, among others
End of explanation
"""
wavelength = 5.0
k = 2*np.pi/wavelength
xi0 = 1.0
separation = 50.0 # Separation of centers in cm
side = 100.0 # Side of the square in cm
points = 500 # Number of grid points along each side
spacing = side/points # Spacing of points in cm
# Calculate the positions of the centers of the circles
x1 = side/2 + separation/2
y1 = side/2
x2 = side/2 - separation/2
y2 = side/2
# Make an array to store the heights
xi = np.empty([points,points],float)
# Calculate the values in the array
for i in range(points):
y = spacing*i
for j in range(points):
x = spacing*j
r1 = np.sqrt((x-x1)**2+(y-y1)**2)
r2 = np.sqrt((x-x2)**2+(y-y2)**2)
xi[i,j] = xi0*np.sin(k*r1) + xi0*np.sin(k*r2)
# Make the plot
plt.imshow(xi,origin="lower",extent=[0,side,0,side])
plt.gray()
plt.show()
"""
Explanation: See the exercise on page 109 of chapter 3: http://www.umich.edu/~mejn/cp/chapters/graphics.pdf
End of explanation
"""
%matplotlib
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection='3d') was removed in newer matplotlib
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
R = np.sqrt(X**2 + Y**2)
Z = np.sin(R)
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
ax.set_zlim(-1.01, 1.01)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
%matplotlib
import matplotlib
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import matplotlib.pyplot as plt
import numpy as np
data = np.loadtxt("circular.txt",float)
sz = data.shape
X = np.linspace(0, 10, sz[0])
Y = np.linspace(0, 10, sz[1])
X, Y = np.meshgrid(X, Y)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection='3d') was removed in newer matplotlib
ax.plot_surface(X,Y,data,rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
ax.zaxis.set_major_locator(LinearLocator(10))
print(sz)
plt.show()
print(np.shape([X, Y]))
"""
Explanation: Exercise
do exercise 3.1 of chapter 3: http://www.umich.edu/~mejn/cp/chapters/graphics.pdf
the data file can be obtained from the link: http://www-personal.umich.edu/~mejn/cp/data/sunspots.txt
3D plots
One of the main packages for 3D plots is mplot3d
http://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html
End of explanation
"""
|
angelmtenor/data-science-keras | enron_scandal.ipynb | mit | import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import helper
import keras
helper.info_gpu()
#sns.set_palette("Reds")
helper.reproducible(seed=0) # setup reproducible results from run to run using Keras
%matplotlib inline
%load_ext autoreload
%autoreload
"""
Explanation: Enron Scandal: Identifying Person of Interest
Identification of Enron employees who may have committed fraud
Supervised Learning. Classification
Data: Enron financial dataset from Udacity
End of explanation
"""
data_path = 'data/enron_financial_data.pkl'
target = ['poi']
df = pd.read_pickle(data_path)
df = pd.DataFrame.from_dict(df, orient='index')
"""
Explanation: 1. Data Processing and Exploratory Data Analysis
Load the Data
End of explanation
"""
helper.info_data(df, target)
"""
Explanation: Explore the Data
End of explanation
"""
df.head(3)
"""
Explanation: Imbalanced target: the evaluation metric used in this problem is the Area Under the ROC Curve <br>
poi = person of interest (boolean) <br>
End of explanation
"""
# delete 'TOTAL' row (at the bottom)
if 'TOTAL' in df.index:
df.drop('TOTAL', axis='index', inplace=True)
# convert dataframe values (objects) to numerical. There are no categorical features
df = df.apply(pd.to_numeric, errors='coerce')
"""
Explanation: Transform the data
End of explanation
"""
helper.missing(df)
"""
Explanation: Missing features
End of explanation
"""
df.drop('email_address', axis='columns', inplace=True)
"""
Explanation: High-missing features, like 'loan_advances', are needed to obtain better models
Remove irrelevant features
End of explanation
"""
num = list(df.select_dtypes(include=[np.number]))
df = helper.classify_data(df, target, numerical=num)
helper.get_types(df)
"""
Explanation: Classify variables
End of explanation
"""
# Replace NaN values with the median
df.fillna(df.median(), inplace=True)
#helper.fill_simple(df, target, inplace=True) # same result
"""
Explanation: Fill missing values
End of explanation
"""
df.describe(percentiles=[0.5]).astype(int)
"""
Explanation: Visualize the data
End of explanation
"""
helper.show_numerical(df, kde=True, ncols=5)
"""
Explanation: Numerical features
End of explanation
"""
helper.show_target_vs_numerical(df, target, jitter=0.05, point_size=50, ncols=5)
"""
Explanation: Target vs Numerical features
End of explanation
"""
# df.plot.scatter(x='salary', y='total_stock_value')
# df.plot.scatter(x='long_term_incentive', y='total_stock_value')
# sns.lmplot(x="salary", y="total_stock_value", hue='poi', data=df)
# sns.lmplot(x="long_term_incentive", y="total_stock_value", hue='poi', data=df)
g = sns.PairGrid(
df,
y_vars=["total_stock_value"],
x_vars=["salary", "long_term_incentive", "from_this_person_to_poi"],
hue='poi',
size=4)
g.map(sns.regplot).add_legend()
plt.ylim(0, 0.5e8)
#sns.pairplot(df, hue='poi', vars=['long_term_incentive', 'total_stock_value', 'from_poi_to_this_person'], kind='reg', size=3)
"""
Explanation: Total stock value vs some features
End of explanation
"""
helper.correlation(df, target)
"""
Explanation: The person of interest seems to have a higher stock vs salary and long-term incentive, especially when his stock value is high. There is no dependency between POI and the amount of emails from or to another person of interest.
Correlation between numerical features and target
End of explanation
"""
droplist = [] # features to drop from the model
# For the model 'data' instead of 'df'
data = df.copy()
data.drop(droplist, axis='columns', inplace=True)
data.head(3)
"""
Explanation: 2. Neural Network model
Select the features
End of explanation
"""
data, scale_param = helper.scale(data)
"""
Explanation: Scale numerical features
Shift and scale numerical variables to a standard normal distribution. The scaling factors are saved to be used for predictions.
End of explanation
"""
test_size = 0.4
random_state = 9
x_train, y_train, x_test, y_test = helper.simple_split(data, target, True, test_size,
random_state)
"""
Explanation: There are no categorical variables
Split the data into training and test sets
Data leakage: Test set hidden when training the model, but seen when preprocessing the dataset
No validation set (small dataset)
End of explanation
"""
y_train, y_test = helper.one_hot_output(y_train, y_test)
print("train size \t X:{} \t Y:{}".format(x_train.shape, y_train.shape))
print("test size \t X:{} \t Y:{} ".format(x_test.shape, y_test.shape))
"""
Explanation: Encode the output
End of explanation
"""
helper.dummy_clf(x_train, y_train, x_test, y_test)
"""
Explanation: Build a dummy classifier
End of explanation
"""
# class weight for imbalance target
cw = helper.get_class_weight(y_train[:,1])
model_path = os.path.join("models", "enron_scandal.h5")
model = None
model = helper.build_nn_clf(x_train.shape[1], y_train.shape[1], dropout=0.3, summary=True)
helper.train_nn(model, x_train, y_train, class_weight=cw, path=model_path)
from sklearn.metrics import roc_auc_score
y_pred_train = model.predict(x_train, verbose=0)
print('\nROC_AUC train:\t{:.2f} \n'.format(roc_auc_score(y_train, y_pred_train)))
"""
Explanation: Build the Neural Network for Binary Classification
End of explanation
"""
# Dataset too small for train, validation, and test sets. More data is needed for a proper evaluation.
y_pred = model.predict(x_test, verbose=0)
helper.binary_classification_scores(y_test[:, 1], y_pred[:, 1], return_dataframe=True, index="DNN")
"""
Explanation: Evaluate the model
End of explanation
"""
helper.ml_classification(x_train, y_train[:,1], x_test, y_test[:,1])
"""
Explanation: Compare with non-neural network models
End of explanation
"""
|
yugangzhang/CHX_Pipelines | 2019_1/CameraTalk/XPCS_SiO2_500nm_For_CameraTalk.ipynb | bsd-3-clause | from pyCHX.chx_packages import *
%matplotlib notebook
plt.rcParams.update({'figure.max_open_warning': 0})
plt.rcParams.update({ 'image.origin': 'lower' })
plt.rcParams.update({ 'image.interpolation': 'none' })
import pickle as cpk
from pyCHX.chx_xpcs_xsvs_jupyter_V1 import *
import itertools
#from pyCHX.XPCS_SAXS import get_QrQw_From_RoiMask
#%run /home/yuzhang/pyCHX_link/pyCHX/chx_generic_functions.py
#%matplotlib notebook
%matplotlib inline
"""
Explanation: XPCS&XSVS Pipeline for Single-(Gi)-SAXS Run
"This notebook corresponds to version {{ version }} of the pipeline tool: https://github.com/NSLS-II/pipelines"
This notebook begins with a raw time-series of images and ends with $g_2(t)$ for a range of $q$, fit to an exponential or stretched exponential, and a two-time correlation function.
Overview
Setup: load packages/setup path
Load Metadata & Image Data
Apply Mask
Clean Data: shutter open/bad frames
Get Q-Map
Get 1D curve
Define Q-ROI (qr, qz)
Check beam damage
One-time Correlation
Fitting
Two-time Correlation
The important scientific code is imported from the chxanalys and scikit-beam project. Refer to chxanalys and scikit-beam for additional documentation and citation information.
DEV
V8: Update visibility error bar calculation using p_i = h_i/N +/- sqrt(h_i)/N
Update normalization in g2 calculation using 2D Savitzky-Golay (SG) smoothing
CHX Olog NoteBook
CHX Olog (https://logbook.nsls2.bnl.gov/11-ID/)
Setup
Import packages for I/O, visualization, and analysis.
End of explanation
"""
scat_geometry = 'saxs' #supports 'saxs', 'gi_saxs', 'ang_saxs' (for anisotropic SAXS or flow-XPCS)
#scat_geometry = 'ang_saxs'
#scat_geometry = 'gi_waxs'
#scat_geometry = 'gi_saxs'
analysis_type_auto = True #if True, will take "analysis type" option from data acquisition func series
qphi_analysis = False #if True, will do q-phi (anisotropic analysis for transmission saxs)
isotropic_Q_mask = 'normal' #'wide' # 'normal' # 'wide' ## select which Q-mask to use for rings: 'normal' or 'wide'
phi_Q_mask = 'phi_4x_20deg' ## select which Q-mask to use for phi analysis
q_mask_name = ''
force_compress = False #True #force to compress data
bin_frame = False #generally make bin_frame as False
para_compress = True #parallel compress
run_fit_form = False #run fit form factor
run_waterfall = False #True #run waterfall analysis
run_profile_plot = False #run profile plot for gi-saxs
run_t_ROI_Inten = True #run ROI intensity as a function of time
run_get_mass_center = False # Analysis for mass center of reflective beam center
run_invariant_analysis = False
run_one_time = True #run one-time
cal_g2_error = False #True #calculate g2 signal to noise
#run_fit_g2 = True #run fit one-time, the default function is "stretched exponential"
fit_g2_func = 'stretched'
run_two_time = False #run two-time
run_four_time = False #True #True #False #run four-time
run_xsvs= False #False #run visibility analysis
att_pdf_report = True #attach the pdf report to CHX olog
qth_interest = 1 #the single qth of interest
use_sqnorm = True #if True, use sq to normalize intensity
use_SG = True # False #if True, use the Sawitzky-Golay filter for <I(pix)>
use_imgsum_norm= True #if True use imgsum to normalize intensity for one-time calculation
pdf_version='_%s'%get_today_date() #for pdf report name
run_dose = False #True # True #False #run dose_depend analysis
if scat_geometry == 'gi_saxs':run_xsvs= False;use_sqnorm=False
if scat_geometry == 'gi_waxs':use_sqnorm = False
if scat_geometry != 'saxs':qphi_analysis = False;scat_geometry_ = scat_geometry
else:scat_geometry_ = ['','ang_'][qphi_analysis]+ scat_geometry
if scat_geometry != 'gi_saxs':run_profile_plot = False
scat_geometry
taus=None;g2=None;tausb=None;g2b=None;g12b=None;taus4=None;g4=None;times_xsv=None;contrast_factorL=None; lag_steps = None
"""
Explanation: Control Runs Here
End of explanation
"""
CYCLE= '2019_1' #change cycle here
path = '/XF11ID/analysis/%s/masks/'%CYCLE
"""
Explanation: Make a directory for saving results
End of explanation
"""
username = getpass.getuser()
run_two_time = False
run_dose = False
uid = '7095d1b4' ##Fri Feb 24 12:38:04 2017 #17400: approx. 9kHz 30k (mbs:.1x.4) energy threshold: +30%C3: SiO2 500nm in water-- 7095d1b4-53a4-4104-836e-be43c8d476ad
data_dir0 = create_user_folder(CYCLE, username)
print( data_dir0 )
uid = uid[:8]
print('The current uid for analysis is: %s...'%uid)
#get_last_uids( -1)
sud = get_sid_filenames(db[uid])
for pa in sud[2]:
if 'master.h5' in pa:
data_fullpath = pa
print ('scan_id, full-uid, data path are: %s--%s--%s'%(sud[0], sud[1], data_fullpath ))
#start_time, stop_time = '2017-2-24 12:23:00', '2017-2-24 13:42:00'
#sids, uids, fuids = find_uids(start_time, stop_time)
data_dir = os.path.join(data_dir0, '%s/'%(sud[1]))
os.makedirs(data_dir, exist_ok=True)
print('Results from this analysis will be stashed in the directory %s' % data_dir)
uidstr = 'uid=%s'%uid
"""
Explanation: Load Metadata & Image Data
Change this line to give a uid
End of explanation
"""
md = get_meta_data( uid )
md_blue = md.copy()
#md_blue
#md_blue['detectors'][0]
#if md_blue['OAV_mode'] != 'none':
# cx , cy = md_blue[md_blue['detectors'][0]+'_beam_center_x'], md_blue[md_blue['detectors'][0]+'_beam_center_x']
#else:
# cx , cy = md_blue['beam_center_x'], md_blue['beam_center_y']
#print(cx,cy)
detectors = sorted(get_detectors(db[uid]))
print('The detectors are:%s'%detectors)
if len(detectors) >1:
md['detector'] = detectors[1]
print( md['detector'])
if md['detector'] =='eiger4m_single_image' or md['detector'] == 'image':
reverse= True
rot90= False
elif md['detector'] =='eiger500K_single_image':
reverse= True
rot90=True
elif md['detector'] =='eiger1m_single_image':
reverse= True
rot90=False
print('Image reverse: %s\nImage rotate 90: %s'%(reverse, rot90))
try:
cx , cy = md_blue['beam_center_x'], md_blue['beam_center_y']
print(cx,cy)
except:
print('Will find cx,cy later.')
"""
Explanation: Don't Change the lines below here
get metadata
End of explanation
"""
if analysis_type_auto:#if True, will take "analysis type" option from data acquisition func series
try:
qphi_analysis_ = md['analysis'] #if True, will do q-phi (anisotropic analysis for transmission saxs)
print(md['analysis'])
if qphi_analysis_ == 'iso':
qphi_analysis = False
elif qphi_analysis_ == '':
qphi_analysis = False
else:
qphi_analysis = True
except:
print('There is no analysis in metadata.')
    print('Will %s q-phi analysis.'%['NOT DO','DO'][qphi_analysis])
if scat_geometry != 'saxs':qphi_analysis = False;scat_geometry_ = scat_geometry
else:scat_geometry_ = ['','ang_'][qphi_analysis]+ scat_geometry
if scat_geometry != 'gi_saxs':run_profile_plot = False
print(scat_geometry_)
scat_geometry
"""
Explanation: Load ROI defined by "XPCS_Setup" Pipeline
Define data analysis type
End of explanation
"""
##For SAXS
roi_path = '/XF11ID/analysis/2019_1/masks/'
roi_date = 'Feb6'
if scat_geometry =='saxs':
if qphi_analysis == False:
if isotropic_Q_mask == 'normal':
#print('Here')
q_mask_name='rings'
if md['detector'] =='eiger4m_single_image' or md['detector'] == 'image': #for 4M
fp = roi_path + 'roi_mask_%s_4M_norm.pkl'%roi_date
elif md['detector'] =='eiger500K_single_image': #for 500K
fp = roi_path + 'roi_mask_%s_500K_norm.pkl'%roi_date
fp = '/home/yuzhang/XScattering/CameraTalk/Results/roi_mask_SiO2500nm_500K_SR16Rings.pkl'
elif isotropic_Q_mask == 'wide':
q_mask_name='wide_rings'
if md['detector'] =='eiger4m_single_image' or md['detector'] == 'image': #for 4M
fp = roi_path + 'roi_mask_%s_4M_wide.pkl'%roi_date
elif md['detector'] =='eiger500K_single_image': #for 500K
fp = roi_path + 'roi_mask_%s_500K_wide.pkl'%roi_date
elif qphi_analysis:
if phi_Q_mask =='phi_4x_20deg':
q_mask_name='phi_4x_20deg'
if md['detector'] =='eiger4m_single_image' or md['detector'] == 'image': #for 4M
fp = roi_path + 'roi_mask_%s_4M_phi_4x_20deg.pkl'%roi_date
elif md['detector'] =='eiger500K_single_image': #for 500K
fp = roi_path + 'roi_mask_%s_500K_phi_4x_20deg.pkl'%roi_date
#fp = 'XXXXXXX.pkl'
    roi_mask,qval_dict = cpk.load( open(fp, 'rb' ) ) #load the saved ROI data
#print(fp)
## Gi_SAXS
elif scat_geometry =='gi_saxs':
# dynamics mask
fp = '/XF11ID/analysis/2018_2/masks/uid=460a2a3a_roi_mask.pkl'
    roi_mask,qval_dict = cpk.load( open(fp, 'rb' ) ) #load the saved ROI data
print('The dynamic mask is: %s.'%fp)
# static mask
fp = '/XF11ID/analysis/2018_2/masks/uid=460a2a3a_roi_masks.pkl'
roi_masks,qval_dicts = cpk.load( open(fp, 'rb' ) ) #for load the saved roi data
print('The static mask is: %s.'%fp)
# q-map
fp = '/XF11ID/analysis/2018_2/masks/uid=460a2a3a_qmap.pkl'
#print(fp)
qr_map, qz_map, ticks, Qrs, Qzs, Qr, Qz, inc_x0,refl_x0, refl_y0 = cpk.load( open(fp, 'rb' ) )
print('The qmap is: %s.'%fp)
## WAXS
elif scat_geometry =='gi_waxs':
fp = '/XF11ID/analysis/2018_2/masks/uid=db5149a1_roi_mask.pkl'
roi_mask,qval_dict = cpk.load( open(fp, 'rb' ) ) # load the saved ROI data
print(roi_mask.shape)
#qval_dict
#roi_mask = shift_mask(roi_mask, 10,30) #if shift mask to get new mask
show_img(roi_mask, aspect=1.0, image_name = fp)#, center=center[::-1])
#%run /home/yuzhang/pyCHX_link/pyCHX/chx_generic_functions.py
"""
Explanation: Load ROI mask depending on data analysis type
End of explanation
"""
imgs = load_data( uid, md['detector'], reverse= reverse, rot90=rot90 )
md.update( imgs.md );Nimg = len(imgs);
#md['beam_center_x'], md['beam_center_y'] = cx, cy
#if 'number of images' not in list(md.keys()):
md['number of images'] = Nimg
pixel_mask = 1- np.int_( np.array( imgs.md['pixel_mask'], dtype= bool) )
print( 'The data are: %s' %imgs )
#md['acquire period' ] = md['cam_acquire_period']
#md['exposure time'] = md['cam_acquire_time']
mdn = md.copy()
"""
Explanation: get data
End of explanation
"""
if md['detector'] =='eiger1m_single_image':
Chip_Mask=np.load( '/XF11ID/analysis/2017_1/masks/Eiger1M_Chip_Mask.npy')
elif md['detector'] =='eiger4m_single_image' or md['detector'] == 'image':
Chip_Mask= np.array(np.load( '/XF11ID/analysis/2017_1/masks/Eiger4M_chip_mask.npy'), dtype=bool)
BadPix = np.load('/XF11ID/analysis/2018_1/BadPix_4M.npy' )
Chip_Mask.ravel()[BadPix] = 0
elif md['detector'] =='eiger500K_single_image':
#print('here')
Chip_Mask= np.load( '/XF11ID/analysis/2017_1/masks/Eiger500K_Chip_Mask.npy') # chip mask still to be defined properly
Chip_Mask = np.rot90(Chip_Mask)
pixel_mask = np.rot90( 1- np.int_( np.array( imgs.md['pixel_mask'], dtype= bool)) )
else:
Chip_Mask = 1
print(Chip_Mask.shape, pixel_mask.shape)
use_local_disk = True
import shutil,glob
save_oavs = False
if len(detectors)==2:
if '_image' in md['detector']:
pref = md['detector'][:-5]
else:
pref=md['detector']
for k in [ 'beam_center_x', 'beam_center_y','cam_acquire_time','cam_acquire_period','cam_num_images',
'wavelength', 'det_distance', 'photon_energy']:
md[k] = md[ pref + '%s'%k]
if 'OAV_image' in detectors:
try:
save_oavs_tifs( uid, data_dir )
save_oavs = True
except:
pass
print_dict( md, ['suid', 'number of images', 'uid', 'scan_id', 'start_time', 'stop_time', 'sample', 'Measurement',
'acquire period', 'exposure time',
'det_distance', 'beam_center_x', 'beam_center_y', ] )
"""
Explanation: Load Chip mask depending on detector
End of explanation
"""
if scat_geometry =='gi_saxs':
inc_x0 = md['beam_center_x']
inc_y0 = imgs[0].shape[0] - md['beam_center_y']
refl_x0 = md['beam_center_x']
refl_y0 = 1000 #imgs[0].shape[0] - 1758
print( "inc_x0, inc_y0, ref_x0,ref_y0 are: %s %s %s %s."%(inc_x0, inc_y0, refl_x0, refl_y0) )
else:
if md['detector'] =='eiger4m_single_image' or md['detector'] == 'image' or md['detector']=='eiger1m_single_image':
inc_x0 = imgs[0].shape[0] - md['beam_center_y']
inc_y0= md['beam_center_x']
elif md['detector'] =='eiger500K_single_image':
inc_y0 = imgs[0].shape[1] - md['beam_center_y']
inc_x0 = imgs[0].shape[0] - md['beam_center_x']
print(inc_x0, inc_y0)
###for this particular uid, manually give x0/y0
#inc_x0 = 1041
#inc_y0 = 1085
dpix, lambda_, Ldet, exposuretime, timeperframe, center = check_lost_metadata(
md, Nimg, inc_x0 = inc_x0, inc_y0= inc_y0, pixelsize = 7.5*10**(-5) )
if scat_geometry =='gi_saxs':center=center[::-1]
setup_pargs=dict(uid=uidstr, dpix= dpix, Ldet=Ldet, lambda_= lambda_, exposuretime=exposuretime,
timeperframe=timeperframe, center=center, path= data_dir)
print_dict( setup_pargs )
setup_pargs
"""
Explanation: Overwrite Some Metadata if Wrong Input
Define incident beam center (also define reflection beam center for gisaxs)
End of explanation
"""
if scat_geometry == 'gi_saxs':
mask_path = '/XF11ID/analysis/2018_2/masks/'
mask_name = 'July13_2018_4M.npy'
elif scat_geometry == 'saxs':
mask_path = '/XF11ID/analysis/2019_1/masks/'
if md['detector'] =='eiger4m_single_image' or md['detector'] == 'image':
mask_name = 'Feb6_2019_4M_SAXS.npy'
elif md['detector'] =='eiger500K_single_image':
mask_name = 'Feb6_2019_500K_SAXS.npy'
mask_path = '/XF11ID/analysis/2017_1/masks/'
mask_name = 'Feb23_500K_SAXS6_mask.npy'
elif scat_geometry == 'gi_waxs':
mask_path = '/XF11ID/analysis/2018_2/masks/'
mask_name = 'July20_2018_1M_WAXS.npy'
mask = load_mask(mask_path, mask_name, plot_ = False, image_name = uidstr + '_mask', reverse= reverse, rot90=rot90 )
mask = mask * pixel_mask * Chip_Mask
show_img(mask,image_name = uidstr + '_mask', save=True, path=data_dir, aspect=1, center=center[::-1])
mask_load=mask.copy()
imgsa = apply_mask( imgs, mask )
"""
Explanation: Apply Mask
Load and plot the mask if it exists,
otherwise create one using the Mask pipeline.
Reverse the mask in the y-direction due to the coordinate-system difference between python and the Eiger software.
Reverse images in the y-direction.
Apply the mask.
Change the lines below to give the mask filename.
End of explanation
"""
img_choice_N = 3
img_samp_index = random.sample( range(len(imgs)), img_choice_N)
avg_img = get_avg_img( imgsa, img_samp_index, plot_ = False, uid =uidstr)
if avg_img.max() == 0:
print('There are no photons recorded for this uid: %s'%uid)
print('The data analysis should be terminated! Please try another uid.')
#show_img( imgsa[1000], vmin=.1, vmax= 1e1, logs=True, aspect=1,
# image_name= uidstr + '_img_avg', save=True, path=data_dir, cmap = cmap_albula )
print(center[::-1])
show_img( imgsa[ 5], vmin = -1, vmax = 20, logs=False, aspect=1, #save_format='tif',
image_name= uidstr + '_img_avg', save=True, path=data_dir, cmap=cmap_albula,center=center[::-1])
# select subregion, hard coded center beam location
#show_img( imgsa[180+40*3/0.05][110:110+840*2, 370:370+840*2], vmin = 0.01, vmax = 20, logs=False, aspect=1, #save_format='tif',
# image_name= uidstr + '_img_avg', save=True, path=data_dir, cmap=cmap_albula,center=[845,839])
"""
Explanation: Check several frames average intensity
End of explanation
"""
compress=True
photon_occ = len( np.where(avg_img)[0] ) / ( imgsa[0].size)
#compress = photon_occ < .4 #if the photon ocupation < 0.5, do compress
print ("The non-zeros photon occupation is %s."%( photon_occ))
print("Will " + 'Always ' + ['NOT', 'DO'][compress] + " apply compress process.")
if md['detector'] =='eiger4m_single_image' or md['detector'] == 'image':
good_start = 5 #make the good_start at least 0
elif md['detector'] =='eiger500K_single_image':
good_start = 100 #5 #make the good_start at least 0
elif md['detector'] =='eiger1m_single_image' or md['detector'] == 'image':
good_start = 5
bin_frame = False # True #generally make bin_frame as False
if bin_frame:
bin_frame_number=4
acquisition_period = md['acquire period']
timeperframe = acquisition_period * bin_frame_number
else:
bin_frame_number =1
force_compress = False
#force_compress = True
import time
t0= time.time()
if not use_local_disk:
cmp_path = '/nsls2/xf11id1/analysis/Compressed_Data'
else:
cmp_path = '/tmp_data/compressed'
cmp_path = '/nsls2/xf11id1/analysis/Compressed_Data'
if bin_frame_number==1:
cmp_file = '/uid_%s.cmp'%md['uid']
else:
cmp_file = '/uid_%s_bined--%s.cmp'%(md['uid'],bin_frame_number)
filename = cmp_path + cmp_file
mask2, avg_img, imgsum, bad_frame_list = compress_eigerdata(imgs, mask, md, filename,
force_compress= force_compress, para_compress= para_compress, bad_pixel_threshold = 1e14,
reverse=reverse, rot90=rot90,
bins=bin_frame_number, num_sub= 100, num_max_para_process= 500, with_pickle=True,
direct_load_data =use_local_disk, data_path = data_fullpath, )
min_inten = 10
good_start = max(good_start, np.where( np.array(imgsum) > min_inten )[0][0] )
####################################
##########For this particular UID using this good_start and good_end parameters
good_start = 18000
good_end = 28000
#normally for 500K
#good_start = 100
#good_end= len(imgs)//bin_frame_number
#####################################
print ('The good_start frame number is: %s '%good_start)
#FD = Multifile(filename, good_start, len(imgs)//bin_frame_number )
FD = Multifile(filename, good_start, good_end )
uid_ = uidstr + '_fra_%s_%s'%(FD.beg, FD.end)
print( uid_ )
plot1D( y = imgsum[ np.array( [i for i in np.arange(good_start, len(imgsum)) if i not in bad_frame_list])],
title =uidstr + '_imgsum', xlabel='Frame', ylabel='Total_Intensity', legend='imgsum' )
Nimg = Nimg/bin_frame_number
run_time(t0)
mask = mask * pixel_mask * Chip_Mask
mask_copy = mask.copy()
mask_copy2 = mask.copy()
#%run ~/pyCHX_link/pyCHX/chx_generic_functions.py
try:
if md['experiment']=='printing':
#p = md['printing'] #if have this printing key, will do error function fitting to find t_print0
find_tp0 = True
t_print0 = ps( y = imgsum[:400] ) * timeperframe
print( 'The start time of print: %s.' %(t_print0 ) )
else:
find_tp0 = False
print('md[experiment] is not "printing" -> not going to look for t_0')
t_print0 = None
except:
find_tp0 = False
print('md[experiment] is not "printing" -> not going to look for t_0')
t_print0 = None
show_img( avg_img, vmin=1e-3, vmax= 1e1, logs=True, aspect=1, #save_format='tif',
image_name= uidstr + '_img_avg', save=True,
path=data_dir, center=center[::-1], cmap = cmap_albula )
"""
Explanation: Compress Data
Generate compressed data with a filename
Replace the old mask with a new one with hot pixels removed
Compute the average image
Compute each image's sum
Find bad_frame_list where the image sum is above bad_pixel_threshold
Check the shutter-open frame to get a good time series
End of explanation
"""
good_end= None # 2000
if good_end is not None:
FD = Multifile(filename, good_start, min( len(imgs)//bin_frame_number, good_end) )
uid_ = uidstr + '_fra_%s_%s'%(FD.beg, FD.end)
print( uid_ )
re_define_good_start =False
if re_define_good_start:
good_start = 180
#good_end = 19700
good_end = len(imgs)
FD = Multifile(filename, good_start, good_end)
uid_ = uidstr + '_fra_%s_%s'%(FD.beg, FD.end)
print( FD.beg, FD.end)
bad_frame_list = get_bad_frame_list( imgsum, fit='both', plot=True,polyfit_order = 30,
scale= 3.5, good_start = good_start, good_end=good_end, uid= uidstr, path=data_dir)
print( 'The bad frame list length is: %s'%len(bad_frame_list) )
"""
Explanation: Get bad frame list by a polynomial fit
End of explanation
"""
imgsum_y = imgsum[ np.array( [i for i in np.arange( len(imgsum)) if i not in bad_frame_list])]
imgsum_x = np.arange( len( imgsum_y))
save_lists( [imgsum_x, imgsum_y], label=['Frame', 'Total_Intensity'],
filename=uidstr + '_img_sum_t', path= data_dir )
"""
Explanation: Create a new mask by masking the bad pixels and get a new avg_img
End of explanation
"""
plot1D( y = imgsum_y, title = uidstr + '_img_sum_t', xlabel='Frame', c='b',
ylabel='Total_Intensity', legend='imgsum', save=True, path=data_dir)
"""
Explanation: Plot the total intensity of each frame vs. time
End of explanation
"""
if md['detector'] =='eiger4m_single_image' or md['detector'] == 'image':
pass
elif md['detector'] =='eiger500K_single_image':
#if md['cam_acquire_period'] <= 0.00015: #will check this logic
if imgs[0].dtype == 'uint16':
print('Create dynamic mask for 500K due to 9K data acquisition!!!')
bdp = find_bad_pixels_FD( bad_frame_list, FD, img_shape = avg_img.shape, threshold=20 )
mask = mask_copy2.copy()
mask *=bdp
mask_copy = mask.copy()
show_img( mask, image_name='New Mask_uid=%s'%uid )
"""
Explanation: Get Dynamic Mask (currently designed for 500K)
End of explanation
"""
setup_pargs
if scat_geometry =='saxs':
## Get circular average| * Do plot and save q~iq
mask = mask_copy.copy()
hmask = create_hot_pixel_mask( avg_img, threshold = 1e8, center=center, center_radius= 10)
qp_saxs, iq_saxs, q_saxs = get_circular_average( avg_img * Chip_Mask , mask * hmask, pargs=setup_pargs )
plot_circular_average( qp_saxs, iq_saxs, q_saxs, pargs=setup_pargs, show_pixel=True,
xlim=[qp_saxs.min(), qp_saxs.max()*1.0], ylim = [iq_saxs.min(), iq_saxs.max()*2] )
mask =np.array( mask * hmask, dtype=bool)
if scat_geometry =='saxs':
if run_fit_form:
form_res = fit_form_factor( q_saxs,iq_saxs, guess_values={'radius': 2500, 'sigma':0.05,
'delta_rho':1E-10 }, fit_range=[0.0001, 0.015], fit_variables={'radius': T, 'sigma':T,
'delta_rho':T}, res_pargs=setup_pargs, xlim=[0.0001, 0.015])
qr = np.array( [qval_dict[k][0] for k in sorted( qval_dict.keys())] )
if qphi_analysis == False:
try:
qr_cal, qr_wid = get_QrQw_From_RoiMask( roi_mask, setup_pargs )
print(len(qr))
if np.sum( np.abs(qr_cal - qr) ) >= 1e-3:
print( 'The loaded ROI mask might not be applicable to this UID: %s.'%uid)
print('Please check the loaded roi mask file.')
except:
print('Something is wrong with the roi-mask. Please check the loaded roi mask file.')
show_ROI_on_image( avg_img*roi_mask, roi_mask, center, label_on = False, rwidth = 840, alpha=.9,
save=True, path=data_dir, uid=uidstr, vmin= 1e-3,
vmax= 1e-1, #np.max(avg_img),
aspect=1,
show_roi_edge=True,
show_ang_cor = True)
plot_qIq_with_ROI( q_saxs, iq_saxs, np.unique(qr), logs=True, uid=uidstr,
xlim=[q_saxs.min(), q_saxs.max()*1.02],#[0.0001,0.08],
ylim = [iq_saxs.min(), iq_saxs.max()*1.02], save=True, path=data_dir)
roi_mask = roi_mask * mask
"""
Explanation: Static Analysis
SAXS Scattering Geometry
End of explanation
"""
if scat_geometry =='saxs':
Nimg = FD.end - FD.beg
time_edge = create_time_slice( Nimg, slice_num= 10, slice_width= 1, edges = None )
time_edge = np.array( time_edge ) + good_start
#print( time_edge )
qpt, iqst, qt = get_t_iqc( FD, time_edge, mask*Chip_Mask, pargs=setup_pargs, nx=1500, show_progress= False )
plot_t_iqc( qt, iqst, time_edge, pargs=setup_pargs, xlim=[qt.min(), qt.max()],
ylim = [iqst.min(), iqst.max()], save=True )
if run_invariant_analysis:
if scat_geometry =='saxs':
invariant = get_iq_invariant( qt, iqst )
time_stamp = time_edge[:,0] * timeperframe
if scat_geometry =='saxs':
plot_q2_iq( qt, iqst, time_stamp,pargs=setup_pargs,ylim=[ -0.001, 0.01] ,
xlim=[0.007,0.2],legend_size= 6 )
if scat_geometry =='saxs':
plot_time_iq_invariant( time_stamp, invariant, pargs=setup_pargs, )
if False:
iq_int = np.zeros( len(iqst) )
fig, ax = plt.subplots()
q = qt
for i in range(iqst.shape[0]):
yi = iqst[i] * q**2
iq_int[i] = yi.sum()
time_labeli = 'time_%s s'%( round( time_edge[i][0] * timeperframe, 3) )
plot1D( x = q, y = yi, legend= time_labeli, xlabel='Q (A-1)', ylabel='I(q)*Q^2', title='I(q)*Q^2 ~ time',
m=markers[i], c = colors[i], ax=ax, ylim=[ -0.001, 0.01] , xlim=[0.007,0.2],
legend_size=4)
#print( iq_int )
"""
Explanation: Time Dependent I(q) Analysis
End of explanation
"""
if scat_geometry =='gi_saxs':
plot_qzr_map( qr_map, qz_map, inc_x0, ticks = ticks, data= avg_img, uid= uidstr, path = data_dir )
"""
Explanation: GiSAXS Scattering Geometry
End of explanation
"""
if scat_geometry =='gi_saxs':
#roi_masks, qval_dicts = get_gisaxs_roi( Qrs, Qzs, qr_map, qz_map, mask= mask )
show_qzr_roi( avg_img, roi_masks, inc_x0, ticks[:4], alpha=0.5, save=True, path=data_dir, uid=uidstr )
if scat_geometry =='gi_saxs':
Nimg = FD.end - FD.beg
time_edge = create_time_slice( N= Nimg, slice_num= 3, slice_width= 2, edges = None )
time_edge = np.array( time_edge ) + good_start
print( time_edge )
qrt_pds = get_t_qrc( FD, time_edge, Qrs, Qzs, qr_map, qz_map, mask=mask, path=data_dir, uid = uidstr )
plot_qrt_pds( qrt_pds, time_edge, qz_index = 0, uid = uidstr, path = data_dir )
"""
Explanation: Static Analysis for gisaxs
End of explanation
"""
if scat_geometry =='gi_saxs':
if run_profile_plot:
xcorners= [ 1100, 1250, 1250, 1100 ]
ycorners= [ 850, 850, 950, 950 ]
waterfall_roi_size = [ xcorners[1] - xcorners[0], ycorners[2] - ycorners[1] ]
waterfall_roi = create_rectangle_mask( avg_img, xcorners, ycorners )
#show_img( waterfall_roi * avg_img, aspect=1,vmin=.001, vmax=1, logs=True, )
wat = cal_waterfallc( FD, waterfall_roi, qindex= 1, bin_waterfall=True,
waterfall_roi_size = waterfall_roi_size,save =True, path=data_dir, uid=uidstr)
if scat_geometry =='gi_saxs':
if run_profile_plot:
plot_waterfallc( wat, qindex=1, aspect=None, vmin=1, vmax= np.max( wat), uid=uidstr, save =True,
path=data_dir, beg= FD.beg)
"""
Explanation: Make a Profile Plot
End of explanation
"""
if scat_geometry =='gi_saxs':
show_qzr_roi( avg_img, roi_mask, inc_x0, ticks[:4], alpha=0.5, save=True, path=data_dir, uid=uidstr )
## Get 1D Curve (Q||-intensity)
qr_1d_pds = cal_1d_qr( avg_img, Qr, Qz, qr_map, qz_map, inc_x0= None, mask=mask, setup_pargs=setup_pargs )
plot_qr_1d_with_ROI( qr_1d_pds, qr_center=np.unique( np.array(list( qval_dict.values() ) )[:,0] ),
loglog=True, save=True, uid=uidstr, path = data_dir)
"""
Explanation: Dynamic Analysis for gi_saxs
End of explanation
"""
if scat_geometry =='gi_waxs':
#badpixel = np.where( avg_img[:600,:] >=300 )
#roi_mask[badpixel] = 0
show_ROI_on_image( avg_img, roi_mask, label_on = True, alpha=.5,
save=True, path=data_dir, uid=uidstr, vmin=0.1, vmax=5)
"""
Explanation: GiWAXS Scattering Geometry
End of explanation
"""
qind, pixelist = roi.extract_label_indices(roi_mask)
noqs = len(np.unique(qind))
print(noqs)
"""
Explanation: Extract the labeled array
End of explanation
"""
nopr = np.bincount(qind, minlength=(noqs+1))[1:]
nopr
"""
Explanation: Number of pixels in each q box
End of explanation
"""
roi_inten = check_ROI_intensity( avg_img, roi_mask, ring_number= 2, uid =uidstr ) #roi starting from 1
"""
Explanation: Check one ROI intensity
End of explanation
"""
qth_interest = 2 #the second ring. #qth_interest starting from 1
if scat_geometry =='saxs' or scat_geometry =='gi_waxs':
if run_waterfall:
wat = cal_waterfallc( FD, roi_mask, qindex= qth_interest, save =True, path=data_dir, uid=uidstr)
plot_waterfallc( wat, qth_interest, aspect= None, vmin=1e-1, vmax= wat.max(), uid=uidstr, save =True,
path=data_dir, beg= FD.beg, cmap = cmap_vge )
ring_avg = None
if run_t_ROI_Inten:
times_roi, mean_int_sets = cal_each_ring_mean_intensityc(FD, roi_mask, timeperframe = None, multi_cor=True )
plot_each_ring_mean_intensityc( times_roi, mean_int_sets, uid = uidstr, save=True, path=data_dir )
roi_avg = np.average( mean_int_sets, axis=0)
"""
Explanation: Do a waterfall analysis
End of explanation
"""
if run_get_mass_center:
cx, cy = get_mass_center_one_roi(FD, roi_mask, roi_ind=25)
if run_get_mass_center:
fig,ax=plt.subplots(2)
plot1D( cx, m='o', c='b',ax=ax[0], legend='mass center-refl_X',
ylim=[940, 960], ylabel='posX (pixel)')
plot1D( cy, m='s', c='r',ax=ax[1], legend='mass center-refl_Y',
ylim=[1540, 1544], xlabel='frames',ylabel='posY (pixel)')
"""
Explanation: Analysis of the mass center of the reflected beam
End of explanation
"""
define_good_series = False
#define_good_series = True
if define_good_series:
good_start = 200
FD = Multifile(filename, beg = good_start, end = 600) #end=1000)
uid_ = uidstr + '_fra_%s_%s'%(FD.beg, FD.end)
print( uid_ )
if use_sqnorm:#for transmision SAXS
norm = get_pixelist_interp_iq( qp_saxs, iq_saxs, roi_mask, center)
print('Using circular average in the normalization of G2 for SAXS scattering.')
elif use_SG:#for Gi-SAXS or WAXS
avg_imgf = sgolay2d( avg_img, window_size= 11, order= 5) * mask
norm=np.ravel(avg_imgf)[pixelist]
print('Using smoothed image by SavitzkyGolay filter in the normalization of G2.')
else:
norm= None
print('Using simple (average) normalization of G2.')
if use_imgsum_norm:
imgsum_ = imgsum
print('Using frame total intensity for intensity normalization in g2 calculation.')
else:
imgsum_ = None
import time
if run_one_time:
t0 = time.time()
if cal_g2_error:
g2,lag_steps,g2_err = cal_g2p(FD,roi_mask,bad_frame_list,good_start, num_buf = 8,
num_lev= None,imgsum= imgsum_, norm=norm, cal_error= True )
else:
g2,lag_steps = cal_g2p(FD,roi_mask,bad_frame_list,good_start, num_buf = 8,
num_lev= None,imgsum= imgsum_, norm=norm, cal_error= False )
run_time(t0)
lag_steps = lag_steps[:g2.shape[0]]
g2.shape[1]
if run_one_time:
taus = lag_steps * timeperframe
try:
g2_pds = save_g2_general( g2, taus=taus,qr= np.array( list( qval_dict.values() ) )[:g2.shape[1],0],
qz = np.array( list( qval_dict.values() ) )[:g2.shape[1],1],
uid=uid_+'_g2.csv', path= data_dir, return_res=True )
except:
g2_pds = save_g2_general( g2, taus=taus,qr= np.array( list( qval_dict.values() ) )[:g2.shape[1],0],
uid=uid_+'_'+q_mask_name+'_g2.csv', path= data_dir, return_res=True )
if cal_g2_error:
try:
g2_err_pds = save_g2_general( g2_err, taus=taus,qr= np.array( list( qval_dict.values() ) )[:g2.shape[1],0],
qz = np.array( list( qval_dict.values() ) )[:g2.shape[1],1],
uid=uid_+'_g2_err.csv', path= data_dir, return_res=True )
except:
g2_err_pds = save_g2_general( g2_err, taus=taus,qr= np.array( list( qval_dict.values() ) )[:g2.shape[1],0],
uid=uid_+'_'+q_mask_name+'_g2_err.csv', path= data_dir, return_res=True )
"""
Explanation: One-time Correlation
Note: Enter the number of buffers for the multi-tau one-time correlation.
The number of buffers has to be even. More details in https://github.com/scikit-beam/scikit-beam/blob/master/skbeam/core/correlation.py
Redefine a good series below if needed.
End of explanation
"""
if run_one_time:
g2_fit_result, taus_fit, g2_fit = get_g2_fit_general( g2, taus,
function = fit_g2_func, vlim=[0.95, 1.05], fit_range= None, sequential_fit=True,
fit_variables={'baseline': True, 'beta': True, 'alpha':False,'relaxation_rate':True,},
guess_values={'baseline':1.0,'beta': 0.2,'alpha':1.0,'relaxation_rate':1e3},
guess_limits = dict( baseline =[.8, 1.3], alpha=[0, 2],
beta = [0, 1], relaxation_rate= [1e-7, 100000]) ,)
g2_fit_paras = save_g2_fit_para_tocsv(g2_fit_result, filename= uid_ +'_'+q_mask_name +'_g2_fit_paras.csv', path=data_dir )
if run_one_time:
if cal_g2_error:
g2_fit_err = np.zeros_like(g2_fit)
plot_g2_general( g2_dict={1:g2, 2:g2_fit}, taus_dict={1:taus, 2:taus_fit},
vlim=[0.95, 1.05], g2_err_dict= {1:g2_err, 2: g2_fit_err},
qval_dict = dict(itertools.islice(qval_dict.items(),g2.shape[1])), fit_res= g2_fit_result, geometry= scat_geometry_,filename= uid_+'_g2',
path= data_dir, function= fit_g2_func, ylabel='g2', append_name= '_fit')
else:
plot_g2_general( g2_dict={1:g2, 2:g2_fit}, taus_dict={1:taus, 2:taus_fit}, vlim=[0.95, 1.05],
qval_dict = dict(itertools.islice(qval_dict.items(),g2.shape[1])), fit_res= g2_fit_result, geometry= scat_geometry_,filename= uid_+'_g2',
path= data_dir, function= fit_g2_func, ylabel='g2', append_name= '_fit')
if run_one_time:
if True:
fs, fe = 0, 10
#fs,fe=0, 6
qval_dict_ = {k:qval_dict[k] for k in list(qval_dict.keys())[fs:fe] }
D0, qrate_fit_res = get_q_rate_fit_general( qval_dict_, g2_fit_paras['relaxation_rate'][fs:fe],
geometry= scat_geometry_ )
plot_q_rate_fit_general( qval_dict_, g2_fit_paras['relaxation_rate'][fs:fe], qrate_fit_res,
geometry= scat_geometry_,uid=uid_ , path= data_dir )
else:
D0, qrate_fit_res = get_q_rate_fit_general( qval_dict, g2_fit_paras['relaxation_rate'],
fit_range=[0, 26], geometry= scat_geometry_ )
plot_q_rate_fit_general( qval_dict, g2_fit_paras['relaxation_rate'], qrate_fit_res,
geometry= scat_geometry_,uid=uid_ ,
show_fit=False, path= data_dir, plot_all_range=False)
#plot1D( x= qr, y=g2_fit_paras['beta'], ls='-', m = 'o', c='b', ylabel=r'$\beta$', xlabel=r'$Q( \AA^{-1} ) $' )
"""
Explanation: Fit g2
End of explanation
"""
define_good_series = False
#define_good_series = True
if define_good_series:
good_start = 5
FD = Multifile(filename, beg = good_start, end = 1000)
uid_ = uidstr + '_fra_%s_%s'%(FD.beg, FD.end)
print( uid_ )
data_pixel = None
if run_two_time:
data_pixel = Get_Pixel_Arrayc( FD, pixelist, norm= norm ).get_data()
import time
t0=time.time()
g12b=None
if run_two_time:
g12b = auto_two_Arrayc( data_pixel, roi_mask, index = None )
if run_dose:
np.save( data_dir + 'uid=%s_g12b'%uid, g12b)
run_time( t0 )
if run_two_time:
show_C12(g12b, q_ind= 2, qlabel=dict(itertools.islice(qval_dict.items(),g2.shape[1])),N1= FD.beg,logs=False, N2=min( FD.end,10000), vmin= 1.0, vmax=1.18,timeperframe=timeperframe,save=True, path= data_dir, uid = uid_ ,cmap=plt.cm.jet)#cmap=cmap_albula)
multi_tau_steps = True
if run_two_time:
if lag_steps is None:
num_bufs=8
noframes = FD.end - FD.beg
num_levels = int(np.log( noframes/(num_bufs-1))/np.log(2) +1) +1
tot_channels, lag_steps, dict_lag = multi_tau_lags(num_levels, num_bufs)
max_taus= lag_steps.max()
#max_taus= lag_steps.max()
max_taus = Nimg
t0=time.time()
#tausb = np.arange( g2b.shape[0])[:max_taus] *timeperframe
if multi_tau_steps:
lag_steps_ = lag_steps[ lag_steps <= g12b.shape[0] ]
g2b = get_one_time_from_two_time(g12b)[lag_steps_]
tausb = lag_steps_ *timeperframe
else:
tausb = (np.arange( g12b.shape[0]) *timeperframe)[:-200]
g2b = (get_one_time_from_two_time(g12b))[:-200]
run_time(t0)
g2b_pds = save_g2_general( g2b, taus=tausb, qr= np.array( list( qval_dict.values() ) )[:g2.shape[1],0],
qz=None, uid=uid_+'_'+q_mask_name+'_g2b.csv', path= data_dir, return_res=True )
if run_two_time:
g2b_fit_result, tausb_fit, g2b_fit = get_g2_fit_general( g2b, tausb,
function = fit_g2_func, vlim=[0.95, 1.05], fit_range= None,
fit_variables={'baseline':False, 'beta': True, 'alpha':True,'relaxation_rate':True},
guess_values={'baseline':1.0,'beta': 0.15,'alpha':1.0,'relaxation_rate':1e-3,},
guess_limits = dict( baseline =[1, 1.8], alpha=[0, 2],
beta = [0, 1], relaxation_rate= [1e-8, 5000]) )
g2b_fit_paras = save_g2_fit_para_tocsv(g2b_fit_result, filename= uid_ +'_'+q_mask_name+'_g2b_fit_paras.csv', path=data_dir )
#plot1D( x = tausb[1:], y =g2b[1:,0], ylim=[0.95, 1.46], xlim = [0.0001, 10], m='', c='r', ls = '-',
# logx=True, title='one_time_corelation', xlabel = r"$\tau $ $(s)$", )
if run_two_time:
plot_g2_general( g2_dict={1:g2b, 2:g2b_fit}, taus_dict={1:tausb, 2:tausb_fit}, vlim=[0.95, 1.05],
qval_dict=dict(itertools.islice(qval_dict.items(),g2.shape[1])), fit_res= g2b_fit_result, geometry=scat_geometry_,filename=uid_+'_g2',
path= data_dir, function= fit_g2_func, ylabel='g2', append_name= '_b_fit')
if run_two_time:
D0b, qrate_fit_resb = get_q_rate_fit_general( dict(itertools.islice(qval_dict.items(),g2.shape[1])), g2b_fit_paras['relaxation_rate'],
fit_range=[0, 10], geometry= scat_geometry_ )
#qval_dict, g2b_fit_paras['relaxation_rate']
if run_two_time:
if True:
fs, fe = 0,8
#fs, fe = 0,12
qval_dict_ = {k:qval_dict[k] for k in list(qval_dict.keys())[fs:fe] }
D0b, qrate_fit_resb = get_q_rate_fit_general( qval_dict_, g2b_fit_paras['relaxation_rate'][fs:fe], geometry= scat_geometry_ )
plot_q_rate_fit_general( qval_dict_, g2b_fit_paras['relaxation_rate'][fs:fe], qrate_fit_resb,
geometry= scat_geometry_,uid=uid_ +'_two_time' , path= data_dir )
else:
D0b, qrate_fit_resb = get_q_rate_fit_general( qval_dict, g2b_fit_paras['relaxation_rate'],
fit_range=[0, 10], geometry= scat_geometry_ )
plot_q_rate_fit_general( qval_dict, g2b_fit_paras['relaxation_rate'], qrate_fit_resb,
geometry= scat_geometry_,uid=uid_ +'_two_time', show_fit=False,path= data_dir, plot_all_range= True )
if run_two_time and run_one_time:
plot_g2_general( g2_dict={1:g2, 2:g2b}, taus_dict={1:taus, 2:tausb},vlim=[0.99, 1.007],
qval_dict=dict(itertools.islice(qval_dict.items(),g2.shape[1])), g2_labels=['from_one_time', 'from_two_time'],
geometry=scat_geometry_,filename=uid_+'_g2_two_g2', path= data_dir, ylabel='g2', )
"""
Explanation: Two-time Correlation
End of explanation
"""
#run_dose = True
if run_dose:
get_two_time_mulit_uids( [uid], roi_mask, norm= norm, bin_frame_number=1,
path= data_dir0, force_generate=False, compress_path = cmp_path + '/' )
try:
print( md['transmission'] )
except:
md['transmission'] =1
exposuretime
if run_dose:
N = len(imgs)
print(N)
#exposure_dose = md['transmission'] * exposuretime* np.int_([ N/16, N/8, N/4 ,N/2, 3*N/4, N*0.99 ])
exposure_dose = md['transmission'] * exposuretime* np.int_([ N/8, N/4 ,N/2, 3*N/4, N*0.99 ])
print( exposure_dose )
if run_dose:
taus_uids, g2_uids = get_series_one_time_mulit_uids( [ uid ], qval_dict, good_start=good_start,
path= data_dir0, exposure_dose = exposure_dose, num_bufs =8, save_g2= False,
dead_time = 0, trans = [ md['transmission'] ] )
if run_dose:
plot_dose_g2( taus_uids, g2_uids, ylim=[1.0, 1.2], vshift= 0.00,
qval_dict = qval_dict, fit_res= None, geometry= scat_geometry_,
filename= '%s_dose_analysis'%uid_,
path= data_dir, function= None, ylabel='g2_Dose', g2_labels= None, append_name= '' )
if run_dose:
qth_interest = 1
plot_dose_g2( taus_uids, g2_uids, qth_interest= qth_interest, ylim=[0.98, 1.2], vshift= 0.00,
qval_dict = qval_dict, fit_res= None, geometry= scat_geometry_,
filename= '%s_dose_analysis'%uidstr,
path= data_dir, function= None, ylabel='g2_Dose', g2_labels= None, append_name= '' )
"""
Explanation: Run dose-dependent analysis
End of explanation
"""
if run_four_time:
t0=time.time()
g4 = get_four_time_from_two_time(g12b, g2=g2b)[:int(max_taus)]
run_time(t0)
if run_four_time:
taus4 = np.arange( g4.shape[0])*timeperframe
g4_pds = save_g2_general( g4, taus=taus4, qr=np.array( list( qval_dict.values() ) )[:,0],
qz=None, uid=uid_ +'_g4.csv', path= data_dir, return_res=True )
if run_four_time:
plot_g2_general( g2_dict={1:g4}, taus_dict={1:taus4},vlim=[0.95, 1.05], qval_dict=qval_dict, fit_res= None,
geometry=scat_geometry_,filename=uid_+'_g4',path= data_dir, ylabel='g4')
"""
Explanation: Four Time Correlation
End of explanation
"""
if run_xsvs:
max_cts = get_max_countc(FD, roi_mask )
#max_cts = 15 #for eiger 500 K
qind, pixelist = roi.extract_label_indices( roi_mask )
noqs = len( np.unique(qind) )
nopr = np.bincount(qind, minlength=(noqs+1))[1:]
#time_steps = np.array( utils.geometric_series(2, len(imgs) ) )
time_steps = [0,1] #only run the first two levels
num_times = len(time_steps)
times_xsvs = exposuretime + (2**( np.arange( len(time_steps) ) ) -1 ) * timeperframe
print( 'The max counts are: %s'%max_cts )
"""
Explanation: Speckle Visibility
End of explanation
"""
if run_xsvs:
if roi_avg is None:
times_roi, mean_int_sets = cal_each_ring_mean_intensityc(FD, roi_mask, timeperframe = None, )
roi_avg = np.average( mean_int_sets, axis=0)
t0=time.time()
spec_bins, spec_his, spec_std, spec_sum = xsvsp( FD, np.int_(roi_mask), norm=None,
max_cts=int(max_cts+2), bad_images=bad_frame_list, only_two_levels=True )
spec_kmean = np.array( [roi_avg * 2**j for j in range( spec_his.shape[0] )] )
run_time(t0)
spec_pds = save_bin_his_std( spec_bins, spec_his, spec_std, filename=uid_+'_spec_res.csv', path=data_dir )
"""
Explanation: Do histogram
End of explanation
"""
if run_xsvs:
ML_val, KL_val,K_ = get_xsvs_fit( spec_his, spec_sum, spec_kmean,
spec_std, max_bins=2, fit_range=[1,60], varyK= False )
#print( 'The observed average photon counts are: %s'%np.round(K_mean,4))
#print( 'The fitted average photon counts are: %s'%np.round(K_,4))
print( 'The difference sum of average photon counts between fit and data are: %s'%np.round(
abs(np.sum( spec_kmean[0,:] - K_ )),4))
print( '#'*30)
qth= 0
print( 'The fitted M for Qth= %s are: %s'%(qth, ML_val[qth]) )
print( K_[qth])
print( '#'*30)
"""
Explanation: Do histogram fit by the negative binomial function with the maximum-likelihood method
End of explanation
"""
if run_xsvs:
qr = [qval_dict[k][0] for k in list(qval_dict.keys()) ]
plot_xsvs_fit( spec_his, ML_val, KL_val, K_mean = spec_kmean, spec_std=spec_std,
xlim = [0,10], vlim =[.9, 1.1],
uid=uid_, qth= qth_interest, logy= True, times= times_xsvs, q_ring_center=qr, path=data_dir)
plot_xsvs_fit( spec_his, ML_val, KL_val, K_mean = spec_kmean, spec_std = spec_std,
xlim = [0,15], vlim =[.9, 1.1],
uid=uid_, qth= None, logy= True, times= times_xsvs, q_ring_center=qr, path=data_dir )
"""
Explanation: Plot fit results
End of explanation
"""
if run_xsvs:
contrast_factorL = get_contrast( ML_val)
spec_km_pds = save_KM( spec_kmean, KL_val, ML_val, qs=qr, level_time=times_xsvs, uid=uid_, path = data_dir )
#spec_km_pds
"""
Explanation: Get contrast
End of explanation
"""
if run_xsvs:
plot_g2_contrast( contrast_factorL, g2b, times_xsvs, tausb, qr,
vlim=[0.8,1.2], qth = qth_interest, uid=uid_,path = data_dir, legend_size=14)
plot_g2_contrast( contrast_factorL, g2b, times_xsvs, tausb, qr,
vlim=[0.8,1.2], qth = None, uid=uid_,path = data_dir, legend_size=4)
#from chxanalys.chx_libs import cmap_vge, cmap_albula, Javascript
"""
Explanation: Plot contrast with g2 results
End of explanation
"""
md['mask_file']= mask_path + mask_name
md['roi_mask_file']= fp
md['mask'] = mask
#md['NOTEBOOK_FULL_PATH'] = data_dir + get_current_pipeline_fullpath(NFP).split('/')[-1]
md['good_start'] = good_start
md['bad_frame_list'] = bad_frame_list
md['avg_img'] = avg_img
md['roi_mask'] = roi_mask
md['setup_pargs'] = setup_pargs
if scat_geometry == 'gi_saxs':
md['Qr'] = Qr
md['Qz'] = Qz
md['qval_dict'] = qval_dict
md['beam_center_x'] = inc_x0
md['beam_center_y']= inc_y0
md['beam_refl_center_x'] = refl_x0
md['beam_refl_center_y'] = refl_y0
elif scat_geometry == 'gi_waxs':
md['beam_center_x'] = center[1]
md['beam_center_y']= center[0]
else:
md['qr']= qr
#md['qr_edge'] = qr_edge
md['qval_dict'] = qval_dict
md['beam_center_x'] = center[1]
md['beam_center_y']= center[0]
md['beg'] = FD.beg
md['end'] = FD.end
md['t_print0'] = t_print0
md['qth_interest'] = qth_interest
md['metadata_file'] = data_dir + 'uid=%s_md.pkl'%uid
psave_obj( md, data_dir + 'uid=%s_md.pkl'%uid ) #save the setup parameters
save_dict_csv( md, data_dir + 'uid=%s_md.csv'%uid, 'w')
Exdt = {}
if scat_geometry == 'gi_saxs':
for k,v in zip( ['md', 'roi_mask','qval_dict','avg_img','mask','pixel_mask', 'imgsum', 'bad_frame_list', 'qr_1d_pds'],
[md, roi_mask, qval_dict, avg_img,mask,pixel_mask, imgsum, bad_frame_list, qr_1d_pds] ):
Exdt[ k ] = v
elif scat_geometry == 'saxs':
for k,v in zip( ['md', 'q_saxs', 'iq_saxs','iqst','qt','roi_mask','qval_dict','avg_img','mask','pixel_mask', 'imgsum', 'bad_frame_list'],
[md, q_saxs, iq_saxs, iqst, qt,roi_mask, qval_dict, avg_img,mask,pixel_mask, imgsum, bad_frame_list] ):
Exdt[ k ] = v
elif scat_geometry == 'gi_waxs':
for k,v in zip( ['md', 'roi_mask','qval_dict','avg_img','mask','pixel_mask', 'imgsum', 'bad_frame_list'],
[md, roi_mask, qval_dict, avg_img,mask,pixel_mask, imgsum, bad_frame_list] ):
Exdt[ k ] = v
if run_waterfall:Exdt['wat'] = wat
if run_t_ROI_Inten:Exdt['times_roi'] = times_roi;Exdt['mean_int_sets']=mean_int_sets
if run_one_time:
if run_invariant_analysis:
for k,v in zip( ['taus','g2','g2_fit_paras', 'time_stamp','invariant'], [taus,g2,g2_fit_paras,time_stamp,invariant] ):Exdt[ k ] = v
else:
for k,v in zip( ['taus','g2','g2_fit_paras' ], [taus,g2,g2_fit_paras ] ):Exdt[ k ] = v
if run_two_time:
for k,v in zip( ['tausb','g2b','g2b_fit_paras', 'g12b'], [tausb,g2b,g2b_fit_paras,g12b] ):Exdt[ k ] = v
#for k,v in zip( ['tausb','g2b','g2b_fit_paras', ], [tausb,g2b,g2b_fit_paras] ):Exdt[ k ] = v
if run_dose:
for k,v in zip( [ 'taus_uids', 'g2_uids' ], [taus_uids, g2_uids] ):Exdt[ k ] = v
if run_four_time:
for k,v in zip( ['taus4','g4'], [taus4,g4] ):Exdt[ k ] = v
if run_xsvs:
for k,v in zip( ['spec_kmean','spec_pds','times_xsvs','spec_km_pds','contrast_factorL'],
[ spec_kmean,spec_pds,times_xsvs,spec_km_pds,contrast_factorL] ):Exdt[ k ] = v
export_xpcs_results_to_h5( 'uid=%s_%s_Res.h5'%(md['uid'],q_mask_name), data_dir, export_dict = Exdt )
#extract_dict = extract_xpcs_results_from_h5( filename = 'uid=%s_Res.h5'%md['uid'], import_dir = data_dir )
#g2npy_filename = data_dir + '/' + 'uid=%s_g12b.npy'%uid
#print(g2npy_filename)
#if os.path.exists( g2npy_filename):
# print('Will delete this file=%s.'%g2npy_filename)
# os.remove( g2npy_filename )
#extract_dict = extract_xpcs_results_from_h5( filename = 'uid=%s_Res.h5'%md['uid'], import_dir = data_dir )
#extract_dict = extract_xpcs_results_from_h5( filename = 'uid=%s_Res.h5'%md['uid'], import_dir = data_dir )
"""
Explanation: Export Results to an HDF5 File
End of explanation
"""
pdf_out_dir = os.path.join('/XF11ID/analysis/', CYCLE, username, 'Results/')
pdf_filename = "XPCS_Analysis_Report2_for_uid=%s%s%s.pdf"%(uid,pdf_version,q_mask_name)
if run_xsvs:
pdf_filename = "XPCS_XSVS_Analysis_Report_for_uid=%s%s%s.pdf"%(uid,pdf_version,q_mask_name)
#%run /home/yuzhang/chxanalys_link/chxanalys/Create_Report.py
data_dir
make_pdf_report( data_dir, uid, pdf_out_dir, pdf_filename, username,
run_fit_form,run_one_time, run_two_time, run_four_time, run_xsvs, run_dose,
report_type= scat_geometry, report_invariant= run_invariant_analysis,
md = md )
"""
Explanation: Create PDF Report
End of explanation
"""
#%run /home/yuzhang/chxanalys_link/chxanalys/chx_olog.py
if att_pdf_report:
os.environ['HTTPS_PROXY'] = 'https://proxy:8888'
os.environ['no_proxy'] = 'cs.nsls2.local,localhost,127.0.0.1'
update_olog_uid_with_file( uid[:6], text='Add XPCS Analysis PDF Report',
filename=pdf_out_dir + pdf_filename, append_name='_R1' )
"""
Explanation: Attach the PDF report to Olog
End of explanation
"""
if save_oavs:
os.environ['HTTPS_PROXY'] = 'https://proxy:8888'
os.environ['no_proxy'] = 'cs.nsls2.local,localhost,127.0.0.1'
update_olog_uid_with_file( uid[:6], text='Add OVA images',
filename= data_dir + 'uid=%s_OVA_images.png'%uid, append_name='_img' )
# except:
"""
Explanation: Save the OVA image
End of explanation
"""
|
NGSchool2016/ngschool2016-materials | jupyter/fbrazdovic/.ipynb_checkpoints/NGSchool_python_USERS-checkpoint.ipynb | gpl-3.0 | %pylab inline
"""
Explanation: Set the matplotlib magic to enable inline plots in the notebook.
End of explanation
"""
import subprocess
import matplotlib.pyplot as plt
import random
import numpy as np
"""
Explanation: Calculate the Nonredundant Read Fraction (NRF)
SAM format example:
SRR585264.8766235 0 1 4 15 35M * 0 0 CTTAAACAATTATTCCCCCTGCAAACATTTTCAAT GGGGGGGGGGGGGGGGGGGGGGFGGGGGGGGGGGG XT:A:U NM:i:1 X0:i:1 X1:i:6 XM:i:1 XO:i:0 XG:i:0 MD:Z:8T26
Import the required modules
End of explanation
"""
plt.style.use('ggplot')
figsize(10,5)
"""
Explanation: Make figures prettier and bigger
End of explanation
"""
file = "/ngschool/chip_seq/bwa/input.sorted.bam"
"""
Explanation: Parse the SAM file and extract the unique start coordinates.
First store the file name in the variable
End of explanation
"""
p = subprocess.Popen(["samtools", "view", "-q10", "-F260", file],
stdout=subprocess.PIPE)
coords = []
for line in p.stdout:
flag, chrom, start = line.decode('utf-8').split()[1:4]
coords.append([ flag, chrom, start])
coords[:5]
coords[-5:]
"""
Explanation: Next we read the file using samtools. From each read we need to store the flag, chromosome name and start coordinate.
End of explanation
"""
len(coords)
"""
Explanation: What is the total number of our unique reads?
End of explanation
"""
random.seed(1234)
sample = random.sample(coords, 1000000)
"""
Explanation: In Python we can randomly sample the coordinates to get 1M for NRF calculations
End of explanation
"""
len(sample)
uniqueStarts = {'watson': set(), 'crick': set()}
for coord in sample:
flag, ref, start = coord
if int(flag) & 16:
uniqueStarts['crick'].add((ref, start))
else:
uniqueStarts['watson'].add((ref, start))
"""
Explanation: How many of those coordinates are unique? (We will use the Python set object, which keeps only the unique items.)
End of explanation
"""
len(uniqueStarts['watson'])
"""
Explanation: How many on the Watson strand?
End of explanation
"""
len(uniqueStarts['crick'])
"""
Explanation: And on the Crick?
End of explanation
"""
NRF_input = (len(uniqueStarts['watson']) + len(uniqueStarts['crick']))*1.0 / len(sample)
print(NRF_input)
"""
Explanation: Calculate the NRF
End of explanation
"""
def calculateNRF(filePath, pickSample=True, sampleSize=10000000, seed=1234):
p = subprocess.Popen(['samtools', 'view', '-q10', '-F260', filePath],
stdout=subprocess.PIPE)
coordType = np.dtype({'names': ['flag', 'ref', 'start'],
'formats': ['uint16', 'U10', 'uint32']})
coordArray = np.empty(10000000, dtype=coordType)
i = 0
for line in p.stdout:
if i >= len(coordArray):
coordArray = np.append(coordArray, np.empty(1000000, dtype=coordType), axis=0)
fg, rf, st = line.decode('utf-8').split()[1:4]
coordArray[i] = np.array((fg, rf, st), dtype=coordType)
i += 1
coordArray = coordArray[:i]
sample = coordArray
if pickSample and len(coordArray) > sampleSize:
np.random.seed(seed)
sample = np.random.choice(coordArray, sampleSize, replace=False)
uniqueStarts = {'watson': set(), 'crick': set()}
for read in sample:
flag, ref, start = read
if flag & 16:
uniqueStarts['crick'].add((ref, start))
else:
uniqueStarts['watson'].add((ref, start))
NRF = (len(uniqueStarts['watson']) + len(uniqueStarts['crick']))*1.0/len(sample)
return NRF
"""
Explanation: Let's create a function from what we did above and apply it to all of our files!
To use our function on the real sequencing datasets (not only on a small subset) we need to optimize our method a bit: we will use the Python module called NumPy.
End of explanation
"""
NRF_chip = calculateNRF("", sampleSize=1000000)
print(NRF_chip)
"""
Explanation: Calculate the NRF for the chip-seq sample
End of explanation
"""
plt.bar([0,2],[NRF_input, NRF_chip], width=1)
plt.xlim([-0.5,3.5]), plt.xticks([0.5, 2.5], ['Input', 'ChIP'])
plt.xlabel('Sample')
plt.ylabel('NRF')
plt.ylim([0, 1.25]), plt.yticks(np.arange(0, 1.2, 0.2))
plt.plot((-0.5,3.5), (0.8,0.8), 'red', linestyle='dashed')
plt.show()
"""
Explanation: Plot the NRF!
End of explanation
"""
countList = []
with open('/ngschool/chip_seq/bedtools/input_coverage.bed', 'r') as covFile:
for line in covFile:
countList.append(int(line.strip('\n').split('\t')[3]))
countList[:5]
plot(range(len(countList)), countList)
"""
Explanation: Calculate the Signal Extraction Scaling
Load the results from the coverage calculations. Let's take the input sample first.
20 0 1000 6
20 1000 2000 15
20 2000 3000 13
...
End of explanation
"""
plt.plot(range(len(countList)), countList)
plt.xlabel('Bin number')
plt.ylabel('Bin coverage')
plt.xlim([0, len(countList)])
plt.show()
"""
Explanation: Let's see where our reads align in the genome. Plot the distribution of tags along the genome.
End of explanation
"""
countList.sort()
"""
Explanation: Now sort the list: order the windows based on the tag count
End of explanation
"""
countSum = sum(countList)
countSum
"""
Explanation: What do you suppose is at the beginning of our list?
Sum all the aligned tags
End of explanation
"""
countFraction = []
for i, count in enumerate(countList):
if i == 0:
countFraction.append(count*1.0 / countSum)
else:
countFraction.append((count*1.0 / countSum) + countFraction[i-1])
"""
Explanation: Calculate the cumulative fraction of tags along the ordered windows.
End of explanation
"""
winNumber = len(countFraction)
winNumber
"""
Explanation: Look at the last five items of the list:
Calculate the number of windows.
End of explanation
"""
winFraction = []
for i in range(winNumber):
winFraction.append(i*1.0 / winNumber)
"""
Explanation: Calculate what fraction of the whole each window's position represents.
End of explanation
"""
def calculateSES(filePath):
countList = []
with open(filePath, 'r') as covFile:
for line in covFile:
countList.append(int(line.strip('\n').split('\t')[3]))
plt.plot(range(len(countList)), countList)
plt.xlabel('Bin number')
plt.ylabel('Bin coverage')
plt.xlim([0, len(countList)])
plt.show()
countList.sort()
countSum = sum(countList)
countFraction = []
for i, count in enumerate(countList):
if i == 0:
countFraction.append(count*1.0 / countSum)
else:
countFraction.append((count*1.0 / countSum) + countFraction[i-1])
winNumber = len(countFraction)
winFraction = []
for i in range(winNumber):
winFraction.append(i*1.0 / winNumber)
return [winFraction, countFraction]
"""
Explanation: Look at the last five items of our new list:
Now prepare the function!
End of explanation
"""
chipSes = calculateSES("")
"""
Explanation: Use our function to calculate the signal extraction scaling for the Sox2 ChIP sample:
End of explanation
"""
plt.plot(winFraction, countFraction, label='input')
plt.plot(chipSes[0], chipSes[1], label='Sox2 ChIP')
plt.ylim([0,1])
plt.xlabel('Ordered window fraction')
plt.ylabel('Genome coverage fraction')
plt.legend(loc='best')
plt.show()
"""
Explanation: Now we can plot the calculated fractions for both the input and ChIP sample:
End of explanation
"""
|
mtury/scapy | doc/notebooks/Scapy in 15 minutes.ipynb | gpl-2.0 | send(IP(dst="1.2.3.4")/TCP(dport=502, options=[("MSS", 0)]))
"""
Explanation: Scapy in 15 minutes (or longer)
Guillaume Valadon & Pierre Lalet
Scapy is a powerful Python-based interactive packet manipulation program and library. It can be used to forge or decode packets for a wide number of protocols, send them on the wire, capture them, match requests and replies, and much more.
This iPython notebook provides a short tour of the main Scapy features. It assumes that you are familiar with networking terminology. All examples were built using the development version from https://github.com/secdev/scapy, and tested on Linux. They should work as well on OS X and other BSDs.
The current documentation is available on http://scapy.readthedocs.io/ !
Scapy eases network packets manipulation, and allows you to forge complicated packets to perform advanced tests. As a teaser, let's have a look a two examples that are difficult to express without Scapy:
1_ Sending a TCP segment with maximum segment size set to 0 to a specific port is an interesting test to perform against embedded TCP stacks. It can be achieved with the following one-liner:
End of explanation
"""
ans = sr([IP(dst="8.8.8.8", ttl=(1, 8), options=IPOption_RR())/ICMP(seq=RandShort()), IP(dst="8.8.8.8", ttl=(1, 8), options=IPOption_Traceroute())/ICMP(seq=RandShort()), IP(dst="8.8.8.8", ttl=(1, 8))/ICMP(seq=RandShort())], verbose=False, timeout=3)[0]
ans.make_table(lambda x, y: (", ".join(z.summary() for z in x[IP].options) or '-', x[IP].ttl, y.sprintf("%IP.src% %ICMP.type%")))
"""
Explanation: 2_ Advanced firewalking using IP options is sometimes useful to perform network enumeration. Here is a more complicated one-liner:
End of explanation
"""
from scapy.all import *
"""
Explanation: Now that we've got your attention, let's start the tutorial!
Quick setup
The easiest way to try Scapy is to clone the github repository, then launch the run_scapy script as root. The following examples can be pasted on the Scapy prompt. There is no need to install any external Python modules.
```shell
git clone https://github.com/secdev/scapy --depth=1
sudo ./run_scapy
Welcome to Scapy (2.4.0)
```
Note: iPython users must import scapy as follows
End of explanation
"""
packet = IP()/TCP()
Ether()/packet
"""
Explanation: First steps
With Scapy, each network layer is a Python class.
The '/' operator is used to bind layers together. Let's put a TCP segment on top of IP and assign it to the packet variable, then stack it on top of Ethernet.
End of explanation
"""
>>> ls(IP, verbose=True)
version : BitField (4 bits) = (4)
ihl : BitField (4 bits) = (None)
tos : XByteField = (0)
len : ShortField = (None)
id : ShortField = (1)
flags : FlagsField (3 bits) = (0)
MF, DF, evil
frag : BitField (13 bits) = (0)
ttl : ByteField = (64)
proto : ByteEnumField = (0)
chksum : XShortField = (None)
src : SourceIPField (Emph) = (None)
dst : DestIPField (Emph) = (None)
options : PacketListField = ([])
"""
Explanation: This last output displays the packet summary. Here, Scapy automatically filled the Ethernet type as well as the IP protocol field.
Protocol fields can be listed using the ls() function:
End of explanation
"""
p = Ether()/IP(dst="www.secdev.org")/TCP()
p.summary()
"""
Explanation: Let's create a new packet to a specific IP destination. With Scapy, each protocol field can be specified. As shown in the ls() output, the interesting field is dst.
Scapy packets are objects with some useful methods, such as summary().
End of explanation
"""
print(p.dst) # first layer that has an src field, here Ether
print(p[IP].src) # explicitly access the src field of the IP layer
# sprintf() is a useful method to display fields
print(p.sprintf("%Ether.src% > %Ether.dst%\n%IP.src% > %IP.dst%"))
"""
Explanation: There are not many differences with the previous example. However, Scapy used the specific destination to perform some magic tricks !
Using internal mechanisms (such as DNS resolution, routing table and ARP resolution), Scapy has automatically set the fields necessary to send the packet. These fields can of course be accessed and displayed.
End of explanation
"""
print(p.sprintf("%TCP.flags% %TCP.dport%"))
"""
Explanation: Scapy uses default values that work most of the time. For example, TCP() is a SYN segment to port 80.
End of explanation
"""
[p for p in IP(ttl=(1,5))/ICMP()]
"""
Explanation: Moreover, Scapy has implicit packets. For example, they are useful to make the TTL field value vary from 1 to 5 to mimic traceroute.
End of explanation
"""
p = sr1(IP(dst="8.8.8.8")/UDP()/DNS(qd=DNSQR()))
p[DNS].an
"""
Explanation: Sending and receiving
Currently, you know how to build packets with Scapy. The next step is to send them over the network !
The sr1() function sends a packet and return the corresponding answer. srp1() does the same for layer two packets, i.e. Ethernet. If you are only interested in sending packets send() is your friend.
As an example, we can use the DNS protocol to get the IPv4 address of www.example.com.
End of explanation
"""
r, u = srp(Ether()/IP(dst="8.8.8.8", ttl=(5,10))/UDP()/DNS(rd=1, qd=DNSQR(qname="www.example.com")))
r, u
"""
Explanation: Another alternative is the sr() function. Its layer 2 counterpart, srp(), is used below since the packets start at the Ethernet layer.
End of explanation
"""
# Access the first tuple
print(r[0][0].summary()) # the packet sent
print(r[0][1].summary()) # the answer received
# Access the ICMP layer. Scapy received a time-exceeded error message
r[0][1][ICMP]
"""
Explanation: sr() sent a list of packets, and returns two variables, here r and u, where:
1. r is a list of results (i.e. tuples of the packet sent and its answer)
2. u is a list of unanswered packets
End of explanation
"""
wrpcap("scapy.pcap", r)
pcap_p = rdpcap("scapy.pcap")
pcap_p[0]
"""
Explanation: With Scapy, list of packets, such as r or u, can be easily written to, or read from PCAP files.
End of explanation
"""
s = sniff(count=2)
s
"""
Explanation: Sniffing the network is as straightforward as sending and receiving packets. The sniff() function returns a list of Scapy packets that can be manipulated as previously described.
End of explanation
"""
sniff(count=2, prn=lambda p: p.summary())
"""
Explanation: sniff() has many arguments. The prn one accepts a function name that will be called on received packets. Using the lambda keyword, Scapy could be used to mimic the tshark command behavior.
End of explanation
"""
import socket
sck = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # create an UDP socket
sck.connect(("8.8.8.8", 53)) # connect to 8.8.8.8 on 53/UDP
# Create the StreamSocket and gives the class used to decode the answer
ssck = StreamSocket(sck)
ssck.basecls = DNS
# Send the DNS query
ssck.sr1(DNS(rd=1, qd=DNSQR(qname="www.example.com")))
"""
Explanation: Alternatively, Scapy can use OS sockets to send and receive packets. The following example assigns a UDP socket to a Scapy StreamSocket, which is then used to query the IPv4 address of www.example.com.
Unlike other Scapy sockets, StreamSockets do not require root privileges.
End of explanation
"""
ans, unans = srloop(IP(dst=["8.8.8.8", "8.8.4.4"])/ICMP(), inter=.1, timeout=.1, count=100, verbose=False)
"""
Explanation: Visualization
Parts of the following examples require the matplotlib module.
With srloop(), we can send 100 ICMP packets to 8.8.8.8 and 8.8.4.4.
End of explanation
"""
%matplotlib inline
ans.multiplot(lambda x, y: (y[IP].src, (y.time, y[IP].id)), plot_xy=True)
"""
Explanation: Then we can use the results to plot the IP id values.
End of explanation
"""
pkt = IP() / UDP() / DNS(qd=DNSQR())
print(repr(raw(pkt)))
"""
Explanation: The raw() constructor can be used to "build" the packet's bytes as they would be sent on the wire.
End of explanation
"""
print(pkt.summary())
"""
Explanation: Since some people cannot read this representation, Scapy can:
- give a summary for a packet
End of explanation
"""
hexdump(pkt)
"""
Explanation: "hexdump" the packet's bytes
End of explanation
"""
pkt.show()
"""
Explanation: dump the packet, layer by layer, with the values for each field
End of explanation
"""
pkt.canvas_dump()
"""
Explanation: render a pretty and handy dissection of the packet
End of explanation
"""
ans, unans = traceroute('www.secdev.org', maxttl=15)
"""
Explanation: Scapy has a traceroute() function, which basically runs sr(IP(ttl=(1, 30))) and creates a TracerouteResult object, a specific subclass of SndRcvList().
End of explanation
"""
ans.world_trace()
"""
Explanation: The result can be plotted with .world_trace() (this requires GeoIP module and data, from MaxMind)
End of explanation
"""
ans = sr(IP(dst=["scanme.nmap.org", "nmap.org"])/TCP(dport=[22, 80, 443, 31337]), timeout=3, verbose=False)[0]
ans.extend(sr(IP(dst=["scanme.nmap.org", "nmap.org"])/UDP(dport=53)/DNS(qd=DNSQR()), timeout=3, verbose=False)[0])
ans.make_table(lambda x, y: (x[IP].dst, x.sprintf('%IP.proto%/{TCP:%r,TCP.dport%}{UDP:%r,UDP.dport%}'), y.sprintf('{TCP:%TCP.flags%}{ICMP:%ICMP.type%}')))
"""
Explanation: The PacketList.make_table() function can be very helpful. Here is a simple "port scanner":
End of explanation
"""
class DNSTCP(Packet):
name = "DNS over TCP"
fields_desc = [ FieldLenField("len", None, fmt="!H", length_of="dns"),
PacketLenField("dns", 0, DNS, length_from=lambda p: p.len)]
# This method tells Scapy that the next packet must be decoded with DNSTCP
def guess_payload_class(self, payload):
return DNSTCP
"""
Explanation: Implementing a new protocol
Scapy can be easily extended to support new protocols.
The following example defines DNS over TCP. The DNSTCP class inherits from Packet and defines two fields: the length, and the real DNS message. The length_of and length_from arguments link the len and dns fields together. Scapy will be able to automatically compute the len value.
End of explanation
"""
# Build then decode a DNS message over TCP
DNSTCP(raw(DNSTCP(dns=DNS())))
"""
Explanation: This new packet definition can be directly used to build a DNS message over TCP.
End of explanation
"""
import socket
sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # create an TCP socket
sck.connect(("8.8.8.8", 53)) # connect to 8.8.8.8 on 53/TCP
# Create the StreamSocket and gives the class used to decode the answer
ssck = StreamSocket(sck)
ssck.basecls = DNSTCP
# Send the DNS query
ssck.sr1(DNSTCP(dns=DNS(rd=1, qd=DNSQR(qname="www.example.com"))))
"""
Explanation: Modifying the previous StreamSocket example to use TCP allows us to use the new DNSTCP layer easily.
End of explanation
"""
from scapy.all import *
import argparse
parser = argparse.ArgumentParser(description="A simple ping6")
parser.add_argument("ipv6_address", help="An IPv6 address")
args = parser.parse_args()
print(sr1(IPv6(dst=args.ipv6_address)/ICMPv6EchoRequest(), verbose=0).summary())
"""
Explanation: Scapy as a module
So far, Scapy was only used from the command line. It is also a Python module than can be used to build specific network tools, such as ping6.py:
End of explanation
"""
# Specify the Wi-Fi monitor interface
#conf.iface = "mon0" # uncomment to test
# Create an answering machine
class ProbeRequest_am(AnsweringMachine):
function_name = "pram"
# The fake mac of the fake access point
mac = "00:11:22:33:44:55"
def is_request(self, pkt):
return Dot11ProbeReq in pkt
def make_reply(self, req):
rep = RadioTap()
# Note: depending on your Wi-Fi card, you might need a different header than RadioTap()
rep /= Dot11(addr1=req.addr2, addr2=self.mac, addr3=self.mac, ID=RandShort(), SC=RandShort())
rep /= Dot11ProbeResp(cap="ESS", timestamp=time.time())
rep /= Dot11Elt(ID="SSID",info="Scapy !")
rep /= Dot11Elt(ID="Rates",info=b'\x82\x84\x0b\x16\x96')
rep /= Dot11Elt(ID="DSset",info=chr(10))
return rep
# Start the answering machine
#ProbeRequest_am()() # uncomment to test
"""
Explanation: Answering machines
A lot of attack scenarios look the same: you want to wait for a specific packet, then send an answer to trigger the attack.
To this end, Scapy provides the AnsweringMachine object. Two methods are especially useful:
1. is_request(): return True if the pkt is the expected request
2. make_reply(): return the packet that must be sent
The following example uses Scapy Wi-Fi capabilities to pretend that a "Scapy !" access point exists.
Note: your Wi-Fi interface must be set to monitor mode !
End of explanation
"""
from scapy.all import *
import nfqueue, socket
def scapy_cb(i, payload):
s = payload.get_data() # get and parse the packet
p = IP(s)
# Check if the packet is an ICMP Echo Request to 8.8.8.8
if p.dst == "8.8.8.8" and ICMP in p:
# Delete checksums to force Scapy to compute them
del(p[IP].chksum, p[ICMP].chksum)
# Set the ICMP sequence number to 0
p[ICMP].seq = 0
# Let the modified packet go through
ret = payload.set_verdict_modified(nfqueue.NF_ACCEPT, raw(p), len(p))
else:
# Accept all packets
payload.set_verdict(nfqueue.NF_ACCEPT)
# Get an NFQUEUE handler
q = nfqueue.queue()
# Set the function that will be call on each received packet
q.set_callback(scapy_cb)
# Open the queue & start parsing packes
q.fast_open(2807, socket.AF_INET)
q.try_run()
"""
Explanation: Cheap Man-in-the-middle with NFQUEUE
NFQUEUE is an iptables target that can be used to transfer packets to a userland process. As an nfqueue module is available in Python, you can take advantage of this Linux feature to perform Scapy-based MiTM.
This example intercepts ICMP Echo Request messages sent to 8.8.8.8 with the ping command, and modifies their sequence numbers. In order to pass packets to Scapy, the following iptables command puts packets into NFQUEUE #2807:
$ sudo iptables -I OUTPUT --destination 8.8.8.8 -p icmp -o eth0 -j NFQUEUE --queue-num 2807
End of explanation
"""
class TCPScanner(Automaton):
@ATMT.state(initial=1)
def BEGIN(self):
pass
@ATMT.state()
def SYN(self):
print("-> SYN")
@ATMT.state()
def SYN_ACK(self):
print("<- SYN/ACK")
raise self.END()
@ATMT.state()
def RST(self):
print("<- RST")
raise self.END()
@ATMT.state()
def ERROR(self):
print("!! ERROR")
raise self.END()
@ATMT.state(final=1)
def END(self):
pass
@ATMT.condition(BEGIN)
def condition_BEGIN(self):
raise self.SYN()
@ATMT.condition(SYN)
def condition_SYN(self):
if random.randint(0, 1):
raise self.SYN_ACK()
else:
raise self.RST()
@ATMT.timeout(SYN, 1)
def timeout_SYN(self):
raise self.ERROR()
TCPScanner().run()
TCPScanner().run()
"""
Explanation: Automaton
When more logic is needed, Scapy provides a clever abstraction to define an automaton. In a nutshell, you need to define an object that inherits from Automaton, and implement specific methods:
- states: using the @ATMT.state decorator. They usually do nothing
- conditions: using the @ATMT.condition and @ATMT.receive_condition decorators. They describe how to go from one state to another
- actions: using the @ATMT.action decorator. They describe what to do, like sending a packet back, when changing state
The following example does nothing more than trying to mimic a TCP scanner:
End of explanation
"""
# Instantiate the blocks
clf = CLIFeeder()
ijs = InjectSink("enx3495db043a28")
# Plug blocks together
clf > ijs
# Create and start the engine
pe = PipeEngine(clf)
pe.start()
"""
Explanation: Pipes
Pipes are an advanced Scapy feature that aims at sniffing, modifying and printing packets. The API provides several building blocks. All of them have high entries and exits (>>) as well as low (>) ones.
For example, the CLIFeeder is used to send messages from the Python command line to a low exit. It can be combined with the InjectSink, which reads messages on its low entry and injects them into the specified network interface. These blocks can be combined as follows:
End of explanation
"""
clf.send("Hello Scapy !")
"""
Explanation: Packet can be sent using the following command on the prompt:
End of explanation
"""
|
radical-cybertools/supercomputing2015-tutorial | 01_hadoop/Spark.ipynb | apache-2.0 | %matplotlib inline
%run ../env.py
%run ../util/init_spark.py
from pilot_hadoop import PilotComputeService as PilotSparkComputeService
pilotcompute_description = {
"service_url": "yarn-client://yarn-aws.radical-cybertools.org",
"number_of_processes": 2
}
print "SPARK HOME: %s"%os.environ["SPARK_HOME"]
print "PYTHONPATH: %s"%os.environ["PYTHONPATH"]
pilot_spark = PilotSparkComputeService.create_pilot(pilotcompute_description=pilotcompute_description)
sc = pilot_spark.get_spark_context()
"""
Explanation: Spark Introduction
This example shows how the Pilot-Abstraction is used to spawn a Spark job inside of YARN. We show how to combine the Pilot and Spark programming modelling using several examples.
Spark Documentation: http://spark.apache.org/docs/latest/
Pilot-Spark: https://pypi.python.org/pypi/SAGA-Hadoop/
1. Initialize Spark
The following codes show how the Pilot-Abstraction is used to connect to an existing YARN cluster and start Spark.
End of explanation
"""
output=!yarn application -list -appTypes Spark -appStates RUNNING
print_application_url(output)
"""
Explanation: After the Spark application has been submitted it can be monitored via the YARN web interface: http://yarn-aws.radical-cybertools.org:8088/. The following command prints out the Spark applications currently running in YARN:
End of explanation
"""
text_rdd = sc.textFile("/data/nasa/")
text_rdd.count()
"""
Explanation: 2. Spark: Hello RDD Abstraction
The RDD Abstraction builds on the popular MapReduce programming model and extends it by supporting a greater variety of transformations!
Here we will use Spark to analyze the NASA log file (that we encountered earlier).
Line Count: How many lines of logs do we have?
End of explanation
"""
text_rdd.flatMap(lambda line: line.split(" ")).map(lambda word: (word, 1)).reduceByKey(lambda x,y: x+y).take(10)
"""
Explanation: Word Count: How many words?
End of explanation
"""
text_rdd = sc.textFile("/data/nasa/")
text_rdd.filter(lambda x: len(x)>8).map(lambda x: (x.split()[-2],1)).reduceByKey(lambda x,y: x+y).collect()
"""
Explanation: HTTP Response Code Count: How many HTTP errors did we observe?
End of explanation
"""
from pyspark.sql import SQLContext, Row
sqlContext = SQLContext(sc)
text_filtered = text_rdd.filter(lambda x: len(x)>8)
logs = text_filtered.top(20)
cleaned = text_filtered.map(lambda l: (l.split(" ")[0], l.split(" ")[3][1:], l.split(" ")[6], l.split(" ")[-2]))
rows = cleaned.map(lambda l: Row(referer=l[0], ts=l[1], response_code=l[3]))
schemaLog = sqlContext.createDataFrame(rows)
schemaLog.registerTempTable("row")
schemaLog.show()
df=sqlContext.sql("select response_code, count(*) as count from row group by response_code")
df.show()
"""
Explanation: Compare the lines of code that were needed to perform the same functionality using MapReduce versus Spark. Which is more?
3. Spark-SQL
Dataframes are an abstraction that allows high-level reasoning on structured data. Data can easily be filtered, aggregated and combined using DataFrames. DataFrames can also be used for machine learning tasks.
In the following commands, we are transforming unstructured log data into a structured DataFrame consisting of three columns: referer, timestamp and response code. We then sample and view the data.
End of explanation
"""
pdf=df.toPandas()
%matplotlib inline
pdf['count']=pdf['count']/1000
pdf.plot(x='response_code', y='count', kind='barh')
"""
Explanation: Spark Dataframes interoperate with Pandas Dataframes. Small data can be further processed using Pandas and Python tools, e.g. Matplotlib and Bokeh for plotting.
End of explanation
"""
pilot_spark.cancel()
"""
Explanation: 4. Stop Pilot-Spark Application
End of explanation
"""
|
UWPreMAP/PreMAP2015 | Lessons/PythonIntro.ipynb | mit | print "hello, world!"
%%bash
echo "print 'hello, world!'" > hello.py # write our .py file
cat hello.py # print the contents of this file to the screen
python hello.py # run the python script
"""
Explanation: much of this material is based on notebook's from Jake Vanderplas' Intro to Scientific Computing in Python course
What is Python?
Python is an open source, interpreted, and object-oriented programming language. What does this mean? It means it can be freely used and modified by others, that it does not need to be compiled to run, and that it uses the concept of data structures called "objects" that have both attributes (data) and methods (procedures).
Why Python?
Python is a great first language to learn because you do not have to worry about things like assigning types to your variables or memory allocation, and in addition the syntax is generally very readable for first-time users. Python is also a portable skill-set--there is LOTS that Python can do both within the field of astronomy and also outside of it. Note that there are some "cons" to Python which require some more clever workarounds. For example, because Python is an interpreted language it can be slow compared to something like C.
Ways to Use Python:
IPython Notebook (type ipython notebook to start in terminal and ctrl-c to exit)
In a given cell, type Enter to add new lines, Ctrl-Return or Shift-Return to run the cell, and Alt-Return to run the cell and add a new one below it
Python command line interpreter (type python to start in terminal and ctrl-d to exit)
IPython command line interpreter (type ipython to start in terminal and exit to exit)
Making and editing .py files in your favorite text editor (gedit, vim, emacs, nano, et al.)
You can also use Python IDEs (integrated development environment) once you get more comfortable, like Spyder or PyCharm
For this class we will be using Python 2.7
Hello, World!
This is one the most simple programs to write in any language, so it's what we'll learn first. In Python it's extra easy, and there are a couple of different ways you could go about it. Let's first try it interactively, and then see how we might run it from a .py file instead.
End of explanation
"""
# Beginning a line with "#" is a comment that is ignored by python
myint = 8 # assigns the integer value to the variable myint
print myint # shows the value assigned to myint on the screen
print type(myint) # shows the variable type of myint, note that we did not have to specify this! python got it.
"""
Explanation: Basic Cheat Sheet
Variable Types in Python
Numbers
int (integers, limited to 64bit representation), i.e. 10
float (floating point real values), i.e 6.2
long (long integers, limited only to available memory), i.e 10L
complex (complex numbers), i.e. 0.5j
String, i.e. "hello, world!"
List, i.e. [1,'star', 3, 'planet', 7]
Tuple, i.e. (1, 'star', 3, 'planet',7)
think of this as a "read-only" list
Dictionary, i.e. {'star': 'sun', 'planet': 'earth'}
think of dictionaries as "key-value" pairs
Arithmetic in Python
+ : addition
-: subtraction
/: division
*: multiplication
%: modulus (remainder)
**: exponentiation
Comparisons and Boolean Operators in Python
==, !=: equal to, not equal to
<, <=: less than, less than or equal to
>, >=: greater than, greater than or equal to
or, i.e A or B: true if either A or B or both are true
and, i.e. A and B: true only if both are true
not, i.e. not A: true only if A is false
Basic Data Types in Python
End of explanation
"""
myfloat = 8. # note the difference in the trailing "." for a float. This is important for calculations!
print myfloat
print type(myfloat)
mycomplex = 3.0 + 4.1j # note that the imaginary part of complex numbers get a trailing "j"
print mycomplex
print type(mycomplex)
mystring = "stars are so cool" # assigns the string to the variable mystring
print mystring
print type(mystring)
mylist = [1,5,'star','9', 'planet'] # lists can have mixed variable types.
#they should be comma separated and in square brackets
print mylist
print type(mylist)
otherlist = [mylist, 'earth', 'mars'] # you can put lists inside lists!
print otherlist # you can see from the output that mylist stores the list above and acts as shorthand
mylist.append(42) # lists can be appended to. we call this a method of the list object
print mylist
mylist.pop(0) # lists can have items deleted from this, in this case the first item in the list
print mylist
mylist.sort() # lists can also be sorted or reverse sorted
print mylist
mytuple = (1,5,'star','9','planet') # tuples are like read-only lists, and use parentheses, NOT square brackets
print mytuple
print type(mytuple)
mydictionary = {'star': 'sun', 'planet': 'earth', 'satellite': 'moon'} # dictionaries are enclosed in curly brackets
# are made up of key value pairs
print mydictionary
print type(mydictionary)
print mydictionary['star'] # look up a value in a dictionary by keyword. here keyword is "star" and value is "sun"
print mydictionary.keys() # print all keys in dictionary
print mydictionary.values() # print all values in dictionary
mydictionary = {'list1': mylist, 'list2': otherlist} # we can assign lists to be values in a dictionary!
print mydictionary['list1'] # finds the value (in this case a list) associated with key "list1"
"""
Explanation: NOTE: in an IPython Notebook, or IPython itself you don't need to use print to get the value of the variable to show on the screen, you only need to type the variable itself. Try it!
End of explanation
"""
print 2 + 2 # basic addition of integers in python
print 2*3 # basic multiplication of integers in python
print 19 - 7 # basic subtraction of integers in python
print 6 / 3 # basic division of integers in python
print 6 / 5 # in python 2.7 you have to be careful about using integers for division! integer operations give
# you back an integer, NOT a float.
# It is usually safest just to use floats when doing division if you don't want the result truncated like above.
print 6 / 5. # remember the trailing "." tells python we want to use a float
print 10 % 6 # this is the modulus (remainder) operator
print 2**2 # basic exponentiation
print 3.154e+7 # note that you can do scientific notation as well
# this is the same as 3.154*10**7
"""
Explanation: Arithmetic Operations and Variable Assignment
End of explanation
"""
subject = "ASTRO"
course = "192"
print subject+course
print "there are" + 3.154e+7 + "seconds in a year" # why won't this work?
print "\t there are \n " + str(3.154e+7) + " seconds in a year" # need to make it a string. escape characters
# like \n and \t do things like "newline" and "tab"
print "o*"*50
"""
Explanation: We can also do arithmetic on strings!
End of explanation
"""
c = 3.0*10**5 # speed of light (c) in km/s
diameter_lyr = 120000 # diameter of mw in lyr
s_per_yr = 3.154e+7 # number of seconds in a year
diameter_km = diameter_lyr*c*s_per_yr # diameter of mw in km
print diameter_km
"""
Explanation: Instead of doing all our calculations like this in Python, let's look at how we can make things easier using variable assignment.
End of explanation
"""
c, diameter_lyr, s_per_yr = 3.0*10**5,120000, 3.154e+7 # define all variables on one line
diameter_km = diameter_lyr*c*s_per_yr # diameter of mw in km
print diameter_km
"""
Explanation: Or, we can make this look even simpler with a nice trick in Python.
End of explanation
"""
y = 3
y += 3 # y = y + 3
print y
"""
Explanation: Another neat arithmetic trick is doing "operate-and-assign," which will likely become more important for loops.
End of explanation
"""
mass_of_sun = 1.989*10**30 # mass of sun in kg
mass_of_earth = 5.972*10**24 # mass of earth in kg
mass_of_earth < mass_of_sun # here we are making a comparison that is evaluated to true
mass_of_sun == mass_of_earth # this comparison operator means "equal to," note that it is NOT the same as "=" which
# assigns a value to a variable
mass_of_sun != mass_of_earth # this comparison operator means "not equal to"
"""
Explanation: Comparison Operators and Boolean Variables
End of explanation
"""
mass_of_moon = 7.35*10**22 # mass of moon in kg
mass_of_moon < mass_of_earth < mass_of_sun
"""
Explanation: We can also string together multiple inequalities:
End of explanation
"""
0.1 + 0.2 == 0.3 # this equality returns False, why?
print "{0:.20f}".format(0.1 + 0.2) # don't worry about the print statements now; they tell us how many decimals to print
print "{0:.20f}".format(0.3) # clearly these two are not equal! careful with floats.
"""
Explanation: Note that you should be careful about doing comparisons on floating point values since these are stored in a
specific way.
End of explanation
"""
(mass_of_moon < mass_of_earth) and (mass_of_earth < mass_of_sun) # both MUST be true for this to return true
(mass_of_moon > mass_of_earth) or (mass_of_earth < mass_of_sun) # only one must be true for this to return true
(mass_of_moon > mass_of_earth) or not (mass_of_earth > mass_of_sun) # what do you expect this to return?
(mass_of_moon > mass_of_earth) or (mass_of_earth > mass_of_sun) # what do you expect this to return?
"""
Explanation: Now that we've seen the "Boolean" variables True and False in Python let's take a look at logical operators
that can test these Boolean variables.
End of explanation
"""
import numpy as np # import the numpy module so that we can use all it's built-in functions. shorten the name for ease.
myarray = np.zeros(5) # this is a built-in function from the module numpy that creates an array of zeros
# the size of the array is the argument to zeros, in this case 5
print myarray # the default type for values IN the array from np.zeros is float
print myarray.dtype
myarray = np.zeros(5, dtype='int') # but we can change the type of value inside an array in this way
print myarray.dtype
print myarray.shape # you can also get other properties of your array in this way (like dtype above)
print myarray.size
print myarray.sum() # there are also methods like sum, mean, min, and max for arrays
"""
Explanation: Importing Modules and Using Built-In Functions:
Using Arrays in Python
A lot of functionality in Python comes from being able to use pre-existing modules and the functions
therein to perform specific operations. We will go through the syntax for doing this, below, and in addition talk about some of the most useful modules for astronomy that exist in Python. We will talk about building our own functions in a later lesson. A module is just an organized piece of code (.py files are treated as modules, for example).
To import a module you simply type the following:
import module
To then use a function from this module you would do the following:
module.function(x)
Where x is going to be whatever the function takes as its argument. There may be more than one argument that the function takes, in which case you could have function(x,y,z).
Modules may also have sub-modules that in turn have their own functions you want to use, in which case you would
type this:
from module1 import module2
module2.function(x)
Lastly, you can change the name of a module in your code for ease of typing if it is something you use quite often. For example:
import module as mod
mod.function
Let's take a look at a concrete example for one of the most useful modules you will come across in python. This module is called "numpy" and makes using arrays (which we will discuss momentarily) and doing mathematical operations on
these arrays very simple.
End of explanation
"""
myarray + 2 # we can do mathematical operations on arrays for the WHOLE array at once! very powerful.
myarray = myarray + 10 # note that the above cell did not change the value of the whole array because we didn't
# assign that operation to any variable. now we have done this
np.log10(myarray) # this is another built-in function from numpy that allows you to take the log
# of the whole array (in base 10)
help(np.zeros) # using this help function is much like "man" in bash. it will tell us more about this function
"""
Explanation: Sidebar: how are arrays different from the lists we have seen above? We have seen that lists can store heterogeneous data (data that contains different types). Arrays, on the other hand, should store homogeneous data (data of the same type), and are usually used for storing things that you want to perform fast mathematical operations on. Arrays can speed things up a lot in Python as we will see later. Arrays are NOT comma separated like a list, but the elements of
an array can be accessed just like the individual elements of a list can (again, as we will see later).
End of explanation
"""
mymatrix = np.zeros((5,3)) # python uses row-column notation, so this creates a matrix of five rows and three columns
print mymatrix
"""
Explanation: You can also make arrays in Python which are not strictly 1-d. For example:
End of explanation
"""
myarray[2] = 80. # this assigns a value to the THIRD element of my array. you can see that it's not different!
myarray
myarray[:3] = 0. # this assigns the value to the first through third (NOT including fourth) elements of my array
# this is an example of array slicing
myarray
"""
Explanation: So we have seen now how to create arrays and lists and how to do operations on these as a whole, but how do we
access individual elements of these arrays or lists? To do this we need to learn about array and list indexing and how
to "slice" arrays in Python. This is a pictorial represenation of how indexing and slicing works with python arrays. Notice how it starts with ZERO. Python is a "zero-indexing" language.
<img src="images/string-slicing.png">
End of explanation
"""
bins = np.zeros(5) + np.arange(5) # this creates an array of zeros, and then uses another built in function of numpy
# called arange to populate each element of this array with different values
bins # notice how arange gives you the "range" of the input value 5, but starts again with zero.
bins = np.zeros(5) + np.arange(6) # why won't this work?
len(bins) # you can always check the size of your array using "len"
bins = np.arange(5) # note that there are multiple ways to create this array, all of which work fine
bins
bincenters = (bins[1:] + bins[:-1])*0.5 # slicing the array in this way gives me the "centers" of my previous values
bincenters
# let's try to see in detail how this slicing works, step-by-step:
print "first bin slice: {0}".format(str(bins[1:]))
print "second bin slice: {0}".format(str(bins[:-1]))
print "bin slices added: {0}".format(str(bins[1:] + bins[:-1]))
print "bin centers: {0}".format(str(bincenters))
"""
Explanation: Let's look at a more complicated way to slice arrays:
End of explanation
"""
mymatrix # recall we made this 5x3 matrix earlier
mymatrix[0] = 6. # assigns the value 6 to first ROW of the matrix
mymatrix[0,0] = 10. # assigns the value 10 to the element in the first row, first column
mymatrix
print mymatrix[:,0] # accesses first column of matrix
print mymatrix[0,:] # accesses first row of matrix
"""
Explanation: Now that we know more about indexing and slicing, we can even access individual elements of matrices!
End of explanation
"""
arr = np.linspace(0,35) # another built-in numpy function for making evenly spaced array over given interval
print np.where(arr > 10.) # here we have used where to find the INDICES where the array is greater than 10
indices = np.where(arr > 10.) # assign the variable indices with these values from above
arr[indices] # now reindex our array with these indices and we get the VALUES at those indices
arr[np.where(arr > 10.)] # you can also skip a step above and write it like this
arr[np.where(arr >= np.max(arr))] # what do you think this will do? combines the operators we learned before
"""
Explanation: Now that we know more about array slicing and indexing, let's look at another powerful function of numpy called "where:"
End of explanation
"""
|
pdh21/XID_plus | docs/notebooks/examples/SED_emulator/JAX_greybody_emulator.ipynb | mit | import fitIR
import fitIR.models as models
import fitIR.analyse as analyse
from astropy.cosmology import WMAP9 as cosmo
import jax
import numpy as onp
import pylab as plt
import astropy.units as u
import scipy.integrate as integrate
%matplotlib inline
import jax.numpy as np
from jax import grad, jit, vmap, value_and_grad
from jax import random
from jax import vmap # for auto-vectorizing functions
from functools import partial # for use with vmap
from jax import jit # for compiling functions for speedup
from jax.experimental import stax # neural network library
from jax.experimental.stax import Conv, Dense, MaxPool, Relu, Flatten, LogSoftmax, LeakyRelu # neural network layers
from jax.experimental import optimizers
from jax.tree_util import tree_multimap # Element-wise manipulation of collections of numpy arrays
import matplotlib.pyplot as plt # visualization
# Generate key which is used to generate random numbers
key = random.PRNGKey(1)
"""
Explanation: The JAX emulator: Greybody prototype
In this notebook, I will prototype my idea for emulating radiative transfer codes with a deep net so that it can be used inside xidplus. As numpyro uses JAX, the deep net will ideally be trained with a JAX network. As a proof of concept, I will use a greybody rather than a full radiative transfer code.
End of explanation
"""
def standarise_uniform(lims):
param_sd=(lims[1]-lims[0])/np.sqrt(12.0)
param_mean=0.5*(lims[1]+lims[0])
return param_sd,param_mean
def generate_samples(size=100,lims=np.array([[6,16],[0,7],[20,80]])):
"""Sample from uniform space"""
#get parameter values from uniform distribution
LIR=onp.random.uniform(low=lims[0,0],high=lims[0,1],size=size)
    #sample redshift uniformly in linear (not log10) space
redshift=onp.random.uniform(low=lims[1,0],high=lims[1,1],size=size)
    #sample temperature uniformly in linear (not log10) space
temperature=onp.random.uniform(low=lims[2,0],high=lims[2,1],size=size)
#get standard deviation and mean for uniform dist
LIR_sd,LIR_mean=standarise_uniform(lims[0,:])
red_sd, red_mean=standarise_uniform(lims[1,:])
temp_sd,temp_mean=standarise_uniform(lims[2,:])
return onp.vstack((LIR,redshift,temperature)).T,onp.vstack(((LIR-LIR_mean)/LIR_sd,(redshift-red_mean)/red_sd,(temperature-temp_mean)/temp_sd)).T
def transform_parameters(param,lims=np.array([[6,16],[0,7],[20,80]])):
"""transform from physical values to standardised values"""
LIR_sd,LIR_mean=standarise_uniform(lims[0,:])
red_sd, red_mean=standarise_uniform(lims[1,:])
temp_sd,temp_mean=standarise_uniform(lims[2,:])
LIR_norm=(param[0]-LIR_mean)/LIR_sd
red_norm=(param[1]-red_mean)/red_sd
temp_norm=(param[2]-temp_mean)/temp_sd
return np.vstack((LIR_norm,red_norm,temp_norm)).T
def inverse_transform_parameters(param,lims=np.array([[6,16],[0,7],[20,80]])):
""" Transform from standardised parameters to physical values
function works with posterior samples"""
LIR_sd,LIR_mean=standarise_uniform(lims[0,:])
red_sd, red_mean=standarise_uniform(lims[1,:])
temp_sd,temp_mean=standarise_uniform(lims[2,:])
LIR=param[...,0]*LIR_sd+LIR_mean
red=param[...,1]*red_sd+red_mean
temp=param[...,2]*temp_sd+temp_mean
return np.stack((LIR.T,red.T,temp.T)).T
"""
Explanation: The first step is to create a training and validation dataset. To do this I will randomly sample from parameter space (rather than a grid). I will create a function to do the sampling. I will also define functions to do the transform and inverse_transform between standardised values and physical values.
End of explanation
"""
import xidplus
from xidplus import filters
filter_=filters.FilterFile(file=xidplus.__path__[0]+'/../test_files/filters.res')
SPIRE_250=filter_.filters[215]
SPIRE_350=filter_.filters[216]
SPIRE_500=filter_.filters[217]
MIPS_24=filter_.filters[201]
PACS_100=filter_.filters[250]
PACS_160=filter_.filters[251]
bands=[SPIRE_250,SPIRE_350,SPIRE_500]#,PACS_100,PACS_160]
eff_lam=[250.0,350.0,500.0]#, 100.0,160.0]
from scipy.interpolate import interp1d
def get_fluxes(samples):
measured=onp.empty((samples.shape[0],len(bands)))
val = onp.linspace(onp.log10(3E8/8E-6),onp.log10(3E8/1E-3),1000)
val = 10**val
for i,s in enumerate(samples):
z=s[1]
prior = {}
prior['z'] = s[1]
prior['log10LIR'] = s[0]
prior['T'] = s[2]
prior['emissivity'] = 1.5
source = models.greybody(prior)
nu,lnu = source.generate_greybody(val,z)
wave = 3E8/nu*1E6
sed=interp1d(wave,lnu)
dist = cosmo.luminosity_distance(z).to(u.cm).value
for b in range(0,len(bands)):
measured[i,b]=(1.0+z)*filters.fnu_filt(sed(bands[b].wavelength/1E4),
3E8/(bands[b].wavelength/1E10),
bands[b].transmission,
3E8/(eff_lam[b]*1E-6),
sed(eff_lam[b]))/(4*onp.pi*dist**2)
return measured/10**(-26)
"""
Explanation: I need to convolve the greybody with the relevant filters. I will use the code I already wrote in xidplus for the original SED work
End of explanation
"""
import torch
from torch.utils.data import Dataset, DataLoader
## class for sed using the torch dataset class
class sed_data(Dataset):
def __init__(self,params,fluxes):
self.X=params
self.y=fluxes
def __len__(self):
return len(self.X)
def __getitem__(self,idx):
return self.X[idx],self.y[idx]
"""
Explanation: DeepNet building
I will build a multi-input, multi-output deep net model as my emulator, with parameters as inputs and the observed fluxes as outputs. I will train on log10 flux to make the model easier to train, and have already standardised the input parameters. I will be using stax, which can be thought of as the Keras equivalent for JAX. This blog was a useful starting point.
End of explanation
"""
batch_size=10
## generate random SED samples
samp_train,samp_stand_train=generate_samples(2000)
## Use Steve's code and xidplus filters to get fluxes
measured_train=get_fluxes(samp_train)
## use data in SED dataclass
ds = sed_data(samp_stand_train,measured_train)
## use torch DataLoader
train_loader = DataLoader(ds, batch_size=batch_size,)
## do same but for test set
samp_test,samp_stand_test=generate_samples(500)
measured_test=get_fluxes(samp_test)
ds = sed_data(samp_stand_test,measured_test)
test_loader = DataLoader(ds, batch_size=batch_size)
# Use stax to set up network initialization and evaluation functions
net_init, net_apply = stax.serial(
Dense(128), LeakyRelu,
Dense(128), LeakyRelu,
Dense(len(bands))
)
in_shape = (-1, 3,)
out_shape, net_params = net_init(key,in_shape)
def loss(params, inputs, targets):
# Computes average loss for the batch
predictions = net_apply(params, inputs)
return np.mean((targets - predictions)**2)
def batch_loss(p,x_b,y_b):
loss_b=vmap(partial(loss,p))(x_b,y_b)
return np.mean(loss_b)
def sample_batch(outer_batch_size,inner_batch_size):
def get_batch():
xs, ys = [], []
for i in range(0,outer_batch_size):
samp_train,samp_stand_train=generate_samples(inner_batch_size)
## Use Steve's code and xidplus filters to get fluxes
measured_train=get_fluxes(samp_train)
xs.append(samp_stand_train)
ys.append(np.log(measured_train))
return np.stack(xs), np.stack(ys)
x1, y1 = get_batch()
return x1, y1
opt_init, opt_update, get_params= optimizers.adam(step_size=1e-3)
out_shape, net_params = net_init(key,in_shape)
opt_state = opt_init(net_params)
@jit
def step(i, opt_state, x1, y1):
p = get_params(opt_state)
g = grad(batch_loss)(p, x1, y1)
loss_tmp=batch_loss(p,x1,y1)
return opt_update(i, g, opt_state),loss_tmp
np_batched_loss_1 = []
valid_loss=[]
K=40
for i in range(4000):
# sample random batchs for training
x1_b, y1_b = sample_batch(10, K)
# sample random batches for validation
x2_b,y2_b = sample_batch(1,K)
opt_state, l = step(i, opt_state, x1_b, y1_b)
p = get_params(opt_state)
valid_loss.append(batch_loss(p,x2_b,y2_b))
np_batched_loss_1.append(l)
if i % 100 == 0:
print(i)
net_params = get_params(opt_state)
opt_init, opt_update, get_params= optimizers.adam(step_size=1e-4)
for i in range(5000):
# sample random batchs for training
x1_b, y1_b = sample_batch(10, K)
# sample random batches for validation
x2_b,y2_b = sample_batch(1,K)
opt_state, l = step(i, opt_state, x1_b, y1_b)
p = get_params(opt_state)
valid_loss.append(batch_loss(p,x2_b,y2_b))
np_batched_loss_1.append(l)
if i % 100 == 0:
print(i)
net_params = get_params(opt_state)
plt.figure(figsize=(10,5))
plt.semilogy(np_batched_loss_1,label='Training loss')
plt.semilogy(valid_loss,label='Validation loss')
plt.xlabel('Iteration')
plt.ylabel('Loss (MSE)')
plt.legend()
"""
Explanation: I will use batches to help train the network
End of explanation
"""
x,y=sample_batch(100,100)
predictions = net_apply(net_params,x)
res=(predictions-y)/(y)
fig,axes=plt.subplots(1,3,figsize=(50,10))
for i in range(0,3):
axes[i].hist(res[:,:,i].flatten()*100.0,np.arange(-20,20,0.5))
axes[i].set_title(bands[i].name)
axes[i].set_xlabel(r'$\frac{f_{pred} - f_{True}}{f_{True}} \ \%$ error')
plt.subplots_adjust(wspace=0.5)
"""
Explanation: Investigate performance of each band of emulator
To visualise the performance of the trained emulator, I will show the difference between the real and emulated fluxes for each band.
End of explanation
"""
import cloudpickle
with open('GB_emulator_20210324_notlog10z_T.pkl', 'wb') as f:
cloudpickle.dump({'net_init':net_init,'net_apply': net_apply,'params':net_params,'transform_parameters':transform_parameters,'inverse_transform_parameters':inverse_transform_parameters}, f)
net_init, net_apply
transform_parameters
from xidplus.numpyro_fit.misc import load_emulator
obj=load_emulator('GB_emulator_20210323.pkl')
"""
Explanation: Save network
Having trained and validated the network, I need to save the network and its relevant functions
End of explanation
"""
|
rishuatgithub/MLPy | Topic_Modelling_LDA.ipynb | apache-2.0 | ## required installation for LDA visualization
!pip install pyLDAvis
## imports
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import pyLDAvis
from pyLDAvis import sklearn
pyLDAvis.enable_notebook()
"""
Explanation: <a href="https://colab.research.google.com/github/rishuatgithub/MLPy/blob/master/Topic_Modelling_LDA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Topic Modelling using Latent-Dirichlet Allocation
Blog URL : Topic Modelling : Latent Dirichlet Allocation, an introduction
Author : Rishu Shrivastava
End of explanation
"""
### Reading the dataset from path
filename = 'News_Category_Dataset_v2.json'
data = pd.read_json(filename, lines=True)
data.head()
### data dimensions (rows, columns) of the dataset we are dealing with
data.shape
### Total articles by category spread - viz
plt.figure(figsize=(20,5))
sns.set_style("whitegrid")
sns.countplot(x='category',data=data, orient='h', palette='husl')
plt.xticks(rotation=90)
plt.title("Category count of article")
plt.show()
"""
Explanation: Step 1: Loading and Understanding Data
<div class="active active-primary">
As part of this step we will load an existing dataset into a pandas dataframe and briefly explore the data.
- Source of the dataset is the Kaggle [News category classifier](https://www.kaggle.com/hengzheng/news-category-classifier-val-acc-0-65).</div>
End of explanation
"""
### tranform the dataset to fit the original requirement
data['Combined_Description'] = data['headline'] + data['short_description']
filtered_data = data[['category','Combined_Description']]
filtered_data.head()
## checking the dimensions of filtered data
filtered_data.shape
"""
Explanation: <div class ="alert alert-success">As we can see in the above diagram, a lot of new relates to Politics and its related items. Also we can understand a total of 20 new categories are defined in this dataset. So as part of Topic modelling exercise we can try to categories the dataset into 20 topics.
</div>
Step 2: Transforming the dataset
<div class="alert alert-warning">For the purpose of this demo and blog, we will do the following:
1. **Combine** both the **Headline and Short Description** into one single column to bring more context to the news and corpus. Calling it as: ```Combined_Description```
2. **Drop** rest of the attributes from the dataframe other than Combined_Description and Categories.
</div>
End of explanation
"""
df_tfidf = TfidfVectorizer(max_df=0.5, min_df=10, stop_words='english', lowercase=True)
df_tfidf_transformed = df_tfidf.fit_transform(filtered_data['Combined_Description'])
df_tfidf_transformed
"""
Explanation: <div class="alert alert-warning">
<b>Applying TFIDFVectorizer to pre-process the data into vectors.</b>
- max_df : Ignore words that occur in more than 50% of the documents (max_df=0.5).
- min_df : Only include words in the vocabulary that occur in at least 10 documents (min_df=10).
- stop_words : Remove the stop words. We can do this in separate steps or in a single step.
</div>
End of explanation
"""
### Define the LDA model and set the topic size to 20.
topic_clusters = 20
lda_model = LatentDirichletAllocation(n_components=topic_clusters, batch_size=128, random_state=42)
### Fit the filtered data to the model
lda_model.fit(df_tfidf_transformed)
"""
Explanation: <div class="alert alert-success">
Here you can notice that the transformed dataset holds a sparse matrix with a dimension of 200853x21893; where 200853 is the total number of rows and 21893 is the total word corpus.
</div>
Step 3: Building Latent-Dirichlet Algorithm using scikit-learn
End of explanation
"""
topic_word_dict = {}
top_n_words_num = 10
for index, topic in enumerate(lda_model.components_):
topic_id = index
topic_words_max = [df_tfidf.get_feature_names()[i] for i in topic.argsort()[-top_n_words_num:]]
topic_word_dict[topic_id] = topic_words_max
print(f"Topic ID : {topic_id}; Top 10 Most Words : {topic_words_max}")
"""
Explanation: <div class="alert alert-danger">
Note: Fitting the model to the dataset takes a long time. If it succeeds, you will see the model summary as the output.
</div>
Step 4: LDA Topic Cluster
End of explanation
"""
topic_output = lda_model.transform(df_tfidf_transformed)
filtered_data = filtered_data.copy()
filtered_data['LDA_Topic_ID'] = topic_output.argmax(axis=1)
filtered_data['Topic_word_categories'] = filtered_data['LDA_Topic_ID'].apply(lambda id: topic_word_dict[id])
filtered_data[['category','Combined_Description','LDA_Topic_ID','Topic_word_categories']].head()
"""
Explanation: <div class="alert alert-warning">
Transforming the existing dataframe and adding the content with a topic id and LDA generated topics
<div>
End of explanation
"""
viz = sklearn.prepare(lda_model=lda_model, dtm=df_tfidf_transformed, vectorizer=df_tfidf)
pyLDAvis.display(viz)
"""
Explanation: Step 5: Visualizing
End of explanation
"""
|
p0licat/university | Experiments/Crawling/Jupyter Notebooks/Camelia Chira.ipynb | mit | class HelperMethods:
@staticmethod
def IsDate(text):
# print("text")
# print(text)
for c in text.lstrip():
if c not in "1234567890 ":
return False
return True
import pandas
import requests
page = requests.get('http://www.cs.ubbcluj.ro/~cchira/publications.html')
data = page.text
from bs4 import BeautifulSoup
soup = BeautifulSoup(data, 'html.parser')  # specify the parser explicitly to avoid a bs4 warning
def SearchDate(text):
    date_val = re.search(r'\(?(1|2)(9|0)[0-9]{2}( ?)(\.|,|\))', text)
    try:
        date = date_val.group(0)
    except AttributeError:
        # no match: date_val is None, so report the text and return an empty string
        print(text)
        return ""
    return date.lstrip('\(').rstrip(",. \)")
import re
def GetPublicationTitle(text):
    # note: the '.' in the (,|.) alternation is unescaped, so it matches any character
    match = re.search(', ([a-zA-Z ]+-?:?){3,}(,|.)', text)
    try:
        title = match.group(0).lstrip(', ')
    except AttributeError:
        # no match: return empty strings so callers can still unpack (title, date)
        return "", ""
    date = SearchDate(text)
    return title, date
def GetPublicationTitle_B(text):
# print("B: ")
# print(text)
authors = "".join(text.split('.,'))
title_val = re.search('([a-zA-Z]+ ?-?)+(,)', text)
title = title_val.group(0)
date = SearchDate(text)
return title, date
pubs = []
# print(soup.find_all('div'))
for e in soup.find_all('div', attrs={"class": "section_mine"}):
lines = e.find_all("p")
for line in lines:
# print(line)
try:
title = line.find_all('a')
title = title[0].contents[0]
date = SearchDate(line.text)
except:
title, date = GetPublicationTitle(line.text)
if len(title) < 25 and '.' in title:
title, date = GetPublicationTitle_B(line.text)
print("title: ", title)
print("date: ", date)
pubs.append((title, date))
"""
Explanation: Manual publication DB insertion from raw text using syntax features
Publications and conferences of Dr. CHIRA Camelia, Profesor Universitar
http://www.cs.ubbcluj.ro/~cchira
End of explanation
"""
import mariadb
import json
with open('../credentials.json', 'r') as crd_json_fd:
json_text = crd_json_fd.read()
json_obj = json.loads(json_text)
credentials = json_obj["Credentials"]
username = credentials["username"]
password = credentials["password"]
table_name = "publications_cache"
db_name = "ubbcluj"
mariadb_connection = mariadb.connect(user=username, password=password, database=db_name)
mariadb_cursor = mariadb_connection.cursor()
for paper in pubs:
title = ""
pub_date = ""
affiliations = ""
try:
pub_date = paper[1].lstrip()
pub_date = str(pub_date) + "-01-01"
if len(pub_date) != 10:
pub_date = ""
except:
pass
try:
title = paper[0].lstrip()
except:
pass
    # Build a parameterized query: formatting values straight into the SQL
    # breaks on quotes in titles and is an SQL-injection risk.
    insert_string = "INSERT INTO {0} SET ".format(table_name)
    insert_string += "Title=?, ProfessorId=?, "
    params = [title, 9]
    if pub_date != "":
        insert_string += "PublicationDate=?, "
        params.append(pub_date)
    insert_string += "Authors=?, Affiliations=?"
    params.extend(["", ""])
    print(insert_string)
    try:
        mariadb_cursor.execute(insert_string, tuple(params))
    except mariadb.ProgrammingError as pe:
        print("Error")
        raise pe
    except mariadb.IntegrityError:
        continue
mariadb_connection.close()
"""
Explanation: DB Storage (TODO)
Time to store the entries in the papers DB table.
End of explanation
"""
|
csiu/100daysofcode | datamining/2017-03-03-day07.ipynb | mit | def readability_ease(num_sentences, num_words, num_syllables):
asl = num_words / num_sentences
asw = num_syllables / num_words
return(206.835 - (1.015 * asl) - (84.6 * asw))
"""
Explanation: layout: post
author: csiu
date: 2017-03-03
title: "Day07:"
categories: update
tags:
- 100daysofcode
- text-mining
excerpt:
DAY 07 - Mar 3, 2017
Yesterday, the Flesch reading ease score got me thinking ...
Flesch reading ease
Flesch reading ease is a measure of how difficult a passage in English is to understand. The formula for the readability ease measure is calculated as follows:
$RE = 206.835 – (1.015 x \frac{total\ words}{total\ sentences}) – (84.6 x \frac{total\ syllables}{total\ words})$
where $\frac{total\ words}{total\ sentences}$ refers to the average sentence length (ASL) and
$\frac{total\ syllables}{total\ words}$ refers to the average number of syllables per word (ASW).
End of explanation
"""
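As a quick sanity check of the formula, plugging in the counts from the test case used later in this notebook (3 sentences, 12 words, 14 syllables) gives:

```python
# 3 sentences, 12 words, 14 syllables
asl = 12 / 3    # average sentence length = 4.0
asw = 14 / 12   # average syllables per word ~ 1.167
re_score = 206.835 - 1.015 * asl - 84.6 * asw
print(re_score)  # ~ 104.075
```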
def readability_ease_interpretation(x):
if 90 <= x:
res = "5th grade] "
res += "Very easy to read. Easily understood by an average 11-year-old student."
elif 80 <= x < 90:
res = "6th grade] "
res += "Easy to read. Conversational English for consumers."
elif 70 <= x < 80:
res = "7th grade] "
res += "Fairly easy to read."
elif 60 <= x < 70:
res = "8th & 9th grade] "
res += "Plain English. Easily understood by 13- to 15-year-old students."
elif 50 <= x < 60:
res = "10th to 12th grade] "
res += "Fairly difficult to read."
elif 30 <= x < 50:
res = "College] "
res += "Difficult to read."
    elif 0 <= x < 30:
        res = "College Graduate] "
        res += "Very difficult to read. Best understood by university graduates."
    else:
        res = "N/A] "
        res += "Score is below 0 (off the scale; extremely difficult text)."
    print("[{:.1f}|{}".format(x, res))
"""
Explanation: The readability ease (RE) score ranges from 0 to 100, and higher scores indicate material that is easier to read.
End of explanation
"""
text = "Hello world, how are you? I am great. Thank you for asking!"
"""
Explanation: Test case
End of explanation
"""
import nltk
import re
text = text.lower()
words = nltk.wordpunct_tokenize(re.sub('[^a-zA-Z_ ]', '',text))
num_words = len(words)
print(words)
print(num_words)
"""
Explanation: In this test case, we have 12 words, 14 syllables, and 3 sentences.
Counting words
Counting words is easy.
End of explanation
"""
from nltk.corpus import cmudict
d = cmudict.dict()
def count_syllables(word):
    # count the stress markers (trailing digits) in the first CMU pronunciation;
    # str.isdigit avoids the non-portable curses.ascii import
    return [len([y for y in x if y[-1].isdigit()]) for x in d[word.lower()]][0]
print("Number of syllables per word", "="*28, sep="\n")
for word in words:
num_syllables = count_syllables(word)
print("{}: {}".format(word, num_syllables))
"""
Explanation: Counting syllables
Counting syllables is a bit more tricky. According to Using Python and the NLTK to Find Haikus in the Public Twitter Stream by Brandon Wood (2013), the Carnegie Mellon University (CMU) Pronouncing Dictionary corpora contain the syllable count for over 125,000 (English) words and thus could be used to count syllables.
End of explanation
"""
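One caveat: d[word.lower()] raises a KeyError for words missing from the CMU dictionary. A minimal heuristic fallback for such out-of-vocabulary words could count runs of consecutive vowel letters (the helper name is ours, and the heuristic is rough):

```python
import re

def count_syllables_fallback(word):
    # approximate syllables as runs of consecutive vowel letters;
    # wrong for words like "queue", but usable as a last resort
    groups = re.findall(r'[aeiouy]+', word.lower())
    return max(1, len(groups))

print(count_syllables_fallback("hello"))   # 2
print(count_syllables_fallback("rhythm"))  # 1
```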
sentences = nltk.tokenize.sent_tokenize(text)
num_sentences = len(sentences)
print("Number of sentences: {}".format(num_sentences), "="*25, sep="\n")
for sentence in sentences:
print(sentence)
"""
Explanation: Counting sentences
This was already done in Day03.
End of explanation
"""
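For reference, and without pulling in nltk, a rough stand-in for sentence tokenization can be sketched with a regular expression (it mishandles abbreviations such as "Dr.", which is why nltk.tokenize.sent_tokenize is used here):

```python
import re

def naive_sentence_split(text):
    # split after ., ! or ? followed by whitespace
    return [s for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s]

print(naive_sentence_split("Hello world, how are you? I am great. Thank you for asking!"))
# ['Hello world, how are you?', 'I am great.', 'Thank you for asking!']
```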
def flesch_reading_ease(text):
## Preprocessing
text = text.lower()
sentences = nltk.tokenize.sent_tokenize(text)
words = nltk.wordpunct_tokenize(re.sub('[^a-zA-Z_ ]', '',text))
## Count
num_sentences = len(sentences)
num_words = len(words)
num_syllables = sum([count_syllables(word) for word in words])
## Calculate
fre = readability_ease(num_sentences, num_words, num_syllables)
return(fre)
fre = flesch_reading_ease(text)
readability_ease_interpretation(fre)
"""
Explanation: Putting it all together
End of explanation
"""
# (As You Like it Act 2, Scene 7)
text = """
All the world's a stage,
and all the men and women merely players.
They have their exits and their entrances;
And one man in his time plays many parts
"""
fre = flesch_reading_ease(text)
readability_ease_interpretation(fre)
"""
Explanation: In the example, the text was written at a 5th grade level. Note that the formula is not bounded above, so very simple text can score over 100.
What about Shakespeare?
End of explanation
"""
|
dmlc/web-data | gluonnlp/logs/embedding_results/results.ipynb | apache-2.0 | from __future__ import print_function
import pandas as pd
pd.options.display.max_rows = 999
pd.set_option('display.width', 1000)
import glob
header = ["evaluation_type", "dataset", "kwargs", "evaluation", "value", "num_skipped"]
similarity_dfs = []
similarity_names = []
similarity_glob = './results/similarity*'
for similarity_file in glob.glob(similarity_glob):
df = pd.read_table(similarity_file, header=None, names=header).set_index(["dataset", "kwargs"]).drop(["evaluation_type"], axis=1)
similarity_dfs.append(df)
similarity_names.append(similarity_file[len(similarity_glob):])
analogy_dfs = []
analogy_names = []
analogy_glob = './results/analogy*'
for analogy_file in glob.glob(analogy_glob):
df = pd.read_table(analogy_file, header=None, names=header).set_index(["dataset", "kwargs", "evaluation"]).drop(["evaluation_type"], axis=1)
analogy_dfs.append(df)
analogy_names.append(analogy_file[len(analogy_glob):])
similarity_df = pd.concat(similarity_dfs, keys=similarity_names, names=['embedding']).reorder_levels(["dataset", "kwargs", "embedding"]).sort_index()
analogy_df = pd.concat(analogy_dfs, keys=analogy_names, names=['embedding']).reorder_levels(["dataset", "evaluation", "kwargs", "embedding"]).sort_index()
"""
Explanation: Evaluating Pre-trained Word Embeddings - Extended results
This notebook contains the extended results on word embeddings evaluation.
End of explanation
"""
for (dataset, kwargs), df in similarity_df.groupby(level=[0,1]):
print('Performance on', dataset, kwargs)
print(df.loc[dataset, kwargs].sort_values(by='value', ascending=False))
print()
print()
"""
Explanation: Similarity task
We can see that the performance varies between the different embeddings on the different datasets.
Please see the API page for more information about the respective datasets.
End of explanation
"""
for kwargs, df in analogy_df.loc['GoogleAnalogyTestSet', 'threecosmul'].groupby(level=0):
print(kwargs)
print(df.loc[kwargs].sort_values(by='value', ascending=False))
print()
print()
"""
Explanation: Analogy task
For the analogy task, we report the results per category in the dataset.
Note that the analogy task is an open-vocabulary task: given a query of 3 words, we ask the model to select a 4th word from the whole vocabulary. Different pre-trained embeddings have vocabularies of different sizes. In general, embeddings pretrained on more tokens (indicated by a bigger number before the B in the embedding source name) have larger vocabularies. While training embeddings on more tokens improves their quality, the larger vocabulary also makes the analogy task harder.
In this experiment all results are reported with reducing the vocabulary to the 300k most frequent tokens. Questions containing Out Of Vocabulary words are ignored.
Google Analogy Test Set
We first display the results on the Google Analogy Test Set.
Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient
estimation of word representations in vector space. In Proceedings of
the International Conference on Learning Representations (ICLR).
The Google Analogy Test Set contains the following categories.
All analogy questions per category follow the pattern specified by the category name.
We first present the results using the threecosmul analogy function.
End of explanation
"""
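To make the two analogy functions concrete: for a query a : b :: c : ?, threecosadd picks the word d maximizing cos(d,b) - cos(d,a) + cos(d,c), while threecosmul is commonly defined as cos(d,b)·cos(d,c)/(cos(d,a)+ε). A toy sketch of 3CosAdd with made-up 3-dimensional vectors (not real embeddings):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# made-up toy vectors, for illustration only
emb = {
    "king":  [0.9, 0.8, 0.1],
    "man":   [0.8, 0.2, 0.1],
    "woman": [0.7, 0.2, 0.9],
    "queen": [0.8, 0.8, 0.9],
}

def three_cos_add(a, b, c):
    # argmax over the vocabulary, excluding the three query words
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates,
               key=lambda w: cosine(emb[w], emb[b]) - cosine(emb[w], emb[a]) + cosine(emb[w], emb[c]))

print(three_cos_add("man", "king", "woman"))  # queen
```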
for kwargs, df in analogy_df.loc['GoogleAnalogyTestSet', 'threecosadd'].groupby(level=0):
print(kwargs)
print(df.loc[kwargs].sort_values(by='value', ascending=False))
print()
print()
"""
Explanation: We then present the results using the threecosadd analogy function.
End of explanation
"""
for kwargs, df in analogy_df.loc['BiggerAnalogyTestSet', 'threecosadd'].groupby(level=0):
print(kwargs)
print(df.loc[kwargs].sort_values(by='value', ascending=False))
print()
print()
"""
Explanation: Bigger Analogy Test Set
We then display the results on the Bigger Analogy Test Set (BATS).
Gladkova, A., Drozd, A., & Matsuoka, S. (2016). Analogy-based detection
of morphological and semantic relations with word embeddings: what works
and what doesn’t. In Proceedings of the NAACL-HLT SRW (pp. 47–54). San
Diego, California, June 12-17, 2016: ACL. Retrieved from
https://www.aclweb.org/anthology/N/N16/N16-2002.pdf
Unlike the Google Analogy Test Set, BATS is balanced across 4 types of relations (inflectional morphology, derivational morphology, lexicographic semantics, encyclopedic semantics).
We first present the results for the threecosadd analogy function.
End of explanation
"""
for kwargs, df in analogy_df.loc['BiggerAnalogyTestSet', 'threecosmul'].groupby(level=0):
print(kwargs)
print(df.loc[kwargs].sort_values(by='value', ascending=False))
print()
print()
"""
Explanation: We then present the results for the threecosmul analogy function.
End of explanation
"""
|
liufuyang/deep_learning_tutorial | jizhi-pytorch-2/02_sentiment_analysis/homework.ipynb | mit | import glob
all_filenames = glob.glob('./data/names/*.txt')
print(all_filenames)
"""
Explanation: Deep Learning with PyTorch (part 2), lesson 2: can machines understand feelings too?
Homework: use an LSTM to tell which country a name belongs to
We are going to build an LSTM model with PyTorch.
The input of the model is a surname written in ASCII characters, and the output is the model's guess of the language this surname belongs to.
The training data are about 20,000 surnames from 18 languages.
Once trained, an ideal model can predict which language a surname belongs to. Moreover, we can use the model's predictions to analyze the similarity between the surnames of different languages.
The trained model can be used as shown below.
```python
predict Hinton
(-0.47) Scottish
(-1.52) English
(-3.57) Irish
predict.py Schmidhuber
(-0.19) German
(-2.48) Czech
(-2.68) Dutch
```
Understanding LSTMs
Having reached this exercise, you should already have some understanding of LSTMs.
If LSTMs are still unfamiliar, go back and watch the course lesson taught by 张老师 (Prof. Zhang).
Processing the training data
In the provided data, the data/names directory contains 18 text files named "[Language].txt"; each file contains a number of names, one name per line.
End of explanation
"""
import unicodedata
import string
# use the upper- and lower-case English letters plus the characters " .,;'"
# to build the alphabet, and record its length
all_letters = string.ascii_letters + " .,;'"
n_letters = len(all_letters)
# convert a Unicode string to plain ASCII
def unicode_to_ascii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
and c in all_letters
)
print(unicode_to_ascii('Ślusàrski'))
print('all_letters:', all_letters)
print('all_letters:', len(all_letters))
"""
Explanation: Let us first deal with the following issue:
Among the names collected for the 18 languages, Chinese, Japanese, Korean and similar names have already been transliterated into Latin letters. This is done because the names in some languages cannot be written with plain ASCII characters, e.g. "Ślusàrski"; such unusual letters would add to the "confusion" of the neural network and hurt its training. So we first have to convert these special letters into plain ASCII (i.e. the 26 English letters).
End of explanation
"""
# build the category_lines dictionary: a list of names for each language
category_lines = {}
all_categories = []
# read the names line by line and convert them to plain ASCII
def readLines(filename):
lines = open(filename).read().strip().split('\n')
return [unicode_to_ascii(line) for line in lines]
for filename in all_filenames:
    # extract the file name (the language name) of each file
    category = filename.split('/')[-1].split('.')[0]
    # add the language name to the all_categories list
    all_categories.append(category)
    # read out all the surnames (lines)
    lines = readLines(filename)
    # store the surnames in the dictionary, keyed by language
    category_lines[category] = lines
n_categories = len(all_categories)
print('all_categories:', all_categories)
print('n_categories =', n_categories)
"""
Explanation: We then write a readLines method to read the surnames out of a file line by line.
The surnames read this way are stored in a dictionary named category_lines, indexed by the 18 languages.
End of explanation
"""
print(category_lines['Italian'][:5])
"""
Explanation: all_categories contains the names of the 18 languages.
category_lines stores all the surnames, indexed by the 18 languages.
End of explanation
"""
all_line_num = 0
for key in category_lines:
all_line_num += len(category_lines[key])
print(all_line_num)
"""
Explanation: Let us count the total number of surnames in the data.
End of explanation
"""
# first import the packages the program needs
# packages used by PyTorch
import torch
import torch.nn as nn
import torch.optim
from torch.autograd import Variable
# packages for plotting and numeric computation
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
%matplotlib inline
"""
Explanation: Preparing for training
End of explanation
"""
import random
def random_training_pair():
    # randomly pick a language
    category = random.choice(all_categories)
    # randomly pick a surname from that language
    line = random.choice(category_lines[category])
    # convert both the language and the surname into indices
    category_index = all_categories.index(category)
    # exercise: collect the index of each letter of `line` into line_index
    line_index = [all_letters.index(letter) for letter in line]
    return category, line, category_index, line_index
# test the function above
for i in range(5):
category, line, category_index, line_index = random_training_pair()
print('category =', category, '/ line =', line)
print('category =', category_index, '/ line =', line_index)
"""
Explanation: Next we write a method to quickly fetch a training instance (i.e. a name together with the language it belongs to):
line_index holds the indices of the letters in the selected surname; implementing this is the part you have to fill in.
End of explanation
"""
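To make the encoding concrete, here is what the letter-to-index mapping produces for a short name (using the same all_letters alphabet defined above):

```python
import string

all_letters = string.ascii_letters + " .,;'"

def encode_name(name):
    # each character becomes its index in the alphabet
    return [all_letters.index(c) for c in name]

print(encode_name("Abe"))  # [26, 1, 4]
```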
def category_from_output(output):
    # topk(1) finds the maximum across the columns (dim 1):
    # top_n holds the value itself,
    # top_i holds its position index;
    # note that both top_n and top_i are 1x1 tensors here
    # output.data extracts the underlying tensor
    top_n, top_i = output.data.topk(1) # Tensor out of Variable with .data
    # pull the index value out of the tensor
    category_i = top_i[0][0]
    # return the language name and its position index
    return all_categories[category_i], category_i
"""
Explanation: We also build a helper function that converts the model's output.
It turns the network's output (a 1 x 18 tensor) into the "most likely language category", which means finding which of the 18 columns holds the largest probability.
We can use the Tensor.topk method to obtain the index of the position of the largest value.
End of explanation
"""
class LSTMNetwork(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, n_layers=1):
        super(LSTMNetwork, self).__init__()
        self.n_layers = n_layers
        self.hidden_size = hidden_size
        # The LSTM is built as follows:
        # an embedding layer that maps any input token (a list of indices)
        # to vectors of dimension hidden_size
        self.embedding = nn.Embedding(input_size, hidden_size)
        # then an LSTM hidden layer with hidden_size LSTM units, stacked n_layers deep
        self.lstm = nn.LSTM(hidden_size, hidden_size, n_layers)
        # finally a fully connected layer (output_size units, one per language
        # category) followed by a softmax output
        self.fc = nn.Linear(hidden_size, output_size)
        self.logsoftmax = nn.LogSoftmax()
    def forward(self, input, hidden=None):
        # first embed the input indices into vectors
        embedded = self.embedding(input)
        # Careful here!
        # One awkward aspect of PyTorch's LSTM layer is that the FIRST dimension
        # of the input tensor has to be the time step, and only the SECOND
        # dimension is batch_size, so embedded must be reshaped.
        # No batching is used here, so batch_size is 1 and the reshaped
        # dimensions are (input_list_size, batch_size, hidden_size)
        embedded = embedded.view(input.data.size()[0], 1, self.hidden_size)
        # Call PyTorch's built-in LSTM layer; note it takes two inputs: the input
        # to the layer and the hidden layer's own state.
        # `output` holds the hidden units' outputs for all time steps; `hidden` is
        # the hidden layer's state at the last time step.
        # Note that `hidden` is a tuple containing the hidden units' outputs at
        # the last time step as well as each hidden unit's cell state.
        output, hidden = self.lstm(embedded, hidden)
        # take the hidden units' output at the last time step
        # and feed it to the fully connected layer
        output = output[-1,...]
        # fully connected layer
        out = self.fc(output)
        # softmax
        out = self.logsoftmax(out)
        return out
    def initHidden(self):
        # Initialization of the hidden units.
        # The hidden units' outputs are initialized to all zeros.
        # Note that both hidden and cell have dimensions
        # (n_layers, batch_size, hidden_size)
        hidden = Variable(torch.zeros(self.n_layers, 1, self.hidden_size))
        # the hidden units' internal cell states are also initialized to all zeros
        cell = Variable(torch.zeros(self.n_layers, 1, self.hidden_size))
        return (hidden, cell)
"""
Explanation: Writing the LSTM model
It is now time to build the LSTM model.
I left a few blanks in the model; you need to write the code for the missing parts.
If you run into trouble, you can refer back to the code walkthrough in the course!
End of explanation
"""
import time
import math
# start training the LSTM network
# build an instance of the LSTM network
lstm = LSTMNetwork(n_letters, 10, n_categories, 2)
# define the loss function
cost = torch.nn.NLLLoss()
# define the optimizer
optimizer = torch.optim.Adam(lstm.parameters(), lr = 0.001)
records = []
# helper to compute the elapsed training time
def time_since(since):
now = time.time()
s = now - since
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
start = time.time()
# train for 5 epochs in total; more would easily overfit
losses = []
for epoch in range(5):
    # pick training data at random; each epoch trains for as many steps as there are names
    for i in range(all_line_num):
        category, line, y, x = random_training_pair()
        x = Variable(torch.LongTensor(x))
        y = Variable(torch.LongTensor(np.array([y])))
        optimizer.zero_grad()
        # Step 1: initialize the state of the LSTM hidden units
        hidden = lstm.initHidden()
        # Step 2: run the LSTM. Note that the loop over time steps does not have to be
        # written by hand; PyTorch's LSTM layer computes over however many time steps
        # the dimensions of the data imply.
        output = lstm(x, hidden)
        # Step 3: compute the loss
        loss = cost(output, y)
        losses.append(loss.data.numpy()[0])
        # backpropagation
        loss.backward()
        optimizer.step()
        # every 3000 steps, report progress and print an example prediction
        if i >= 3000 and i % 3000 == 0:
            # check whether the model's prediction is correct
            guess, guess_i = category_from_output(output)
            correct = '✓' if guess == category else '✗ (%s)' % category
            # compute the training progress
            training_process = (all_line_num * epoch + i) / (all_line_num * 5) * 100
            training_process = '%.2f' % training_process
            print('epoch {}, training loss: {:.2f}, progress: {}%, ({}), name: {}, predicted: {}, correct? {}'\
                  .format(epoch, np.mean(losses[-3000:]), float(training_process), time_since(start), line, guess, correct))
            records.append([np.mean(losses[-3000:])])
a = [i[0] for i in records]
plt.plot(a, label = 'Train Loss')
plt.xlabel('Steps')
plt.ylabel('Loss')
plt.legend()
"""
Explanation: Training the network
Every time I get to train a model, I feel a little thrill!
As before, I left some blanks in the training program; you have to fill them in for the training to run properly.
End of explanation
"""
import matplotlib.pyplot as plt
# build an (18 x 18) square tensor
# to store the predictions made by the network
confusion = torch.zeros(n_categories, n_categories)
# number of test runs used to evaluate the model
n_confusion = 10000
# evaluation helper: pass in a name, get the prediction back.
# Its implementation is similar to the first half of the training step --
# it is essentially the training step without backpropagation.
def evaluate(line_list):
    # the model's hidden layer must be initialized before calling the model
    hidden = lstm.initHidden()
    # do not forget to wrap the input list into a torch.Variable
    line_variable = Variable(torch.LongTensor(line_list))
    # call the model
    output = lstm(line_variable, hidden)
    return output
# loop ten thousand times
for i in range(n_confusion):
    # randomly pick test data: a surname and the language it belongs to
    category, line, category_index, line_list = random_training_pair()
    # get the prediction
    output = evaluate(line_list)
    # get the predicted language and its index
    guess, guess_i = category_from_output(output)
    # rows are the actual language of the surname,
    # columns are the language predicted by the model;
    # increment the corresponding cell of the matrix
    confusion[category_index][guess_i] += 1
# normalize the data
for i in range(n_categories):
    confusion[i] = confusion[i] / confusion[i].sum()
# set up a figure
fig = plt.figure()
ax = fig.add_subplot(111)
# pass in the confusion matrix data
cax = ax.matshow(confusion.numpy())
fig.colorbar(cax)
# label both axes with the language names
ax.set_xticklabels([''] + all_categories, rotation=90)
ax.set_yticklabels([''] + all_categories)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
"""
Explanation: Analyzing language similarity through surnames
The exciting moment has come!
Below we evaluate the trained model on 10,000 samples and plot the results; from the plot we can see which languages have similar surnames!
You have to write the body of the evaluate function yourself.
End of explanation
"""
# the predict function
# first argument: the surname to classify
# second argument: how many of the most likely languages to report
def predict(input_line, n_predictions=3):
    # first print out the name the user entered
    print('\n> %s' % input_line)
    # convert the user's input string into a list of letter indices
    input_line = list(map(lambda x: all_letters.find(x), input_line))
    # feed the user's name into the model for prediction
    output = evaluate(input_line)
    # get the n_predictions most probable language categories
    topv, topi = output.data.topk(n_predictions, 1, True)
    # topv holds the probability values
    # topi holds the position indices
    predictions = []
    for i in range(n_predictions):
        value = topv[0][i]
        category_index = topi[0][i]
        # print out the most probable language categories, formatted
        print('(%.2f) %s' % (value, all_categories[category_index]))
        # and store them in predictions
        predictions.append([value, all_categories[category_index]])
predict('Dovesky')
predict('Jackson')
predict('Satoshi')
predict('Han')
"""
Explanation: Read the rows first, then the columns: the row labels are the actual language of a surname, the column labels are the language predicted by the model.
The brighter a cell, the more often that prediction was made. The diagonal is the brightest part of the whole chart, which means the model's predictions are accurate for most of the data.
But! What we want to look at are the prediction errors, i.e. the bright cells off the diagonal!
Look at the English row first: the diagonal cell for English is quite dark, so the model predicts English surnames poorly. At the same time, besides the diagonal cell, the English row shows quite a few light cells: Czech, French, German, Irish and Scottish. These countries are culturally close and their surnames are similar, so the model cannot separate them very well.
As for Eastern countries, look at the Chinese row: Chinese, Korean and Vietnamese surnames show some similarity, which is consistent with the cultural ties between these countries.
Spanish and Portuguese surnames are also somewhat similar to each other.
Wrapping the model for easier use
Let us turn our attention back to the trained model.
Below I write a function that wraps the trained model to make it easier to call.
End of explanation
"""
|
lgautier/mashing-pumpkins | doc/notebooks/MinHash, design and performance.ipynb | mit | # we take a DNA sequence as an example, but this is arbitrary and not necessary.
alphabet = b'ATGC'
# create a lookup structure to go from byte to 4-mer
# (a arbitrary byte is a bitpacked 4-mer)
quad = [None, ]*(len(alphabet)**4)
i = 0
for b1 in alphabet:
for b2 in alphabet:
for b3 in alphabet:
for b4 in alphabet:
quad[i] = bytes((b1, b2, b3, b4))
i += 1
# random bytes for a 2M sequence (the order of magnitude of a bacterial genome)
import ssl
def make_rnd_sequence(size):
sequencebitpacked = ssl.RAND_bytes(int(size/4))
sequence = bytearray(int(size))
for i, b in zip(range(0, len(sequence), 4), sequencebitpacked):
sequence[i:(i+4)] = quad[b]
return bytes(sequence)
size = int(2E6)
sequence = make_rnd_sequence(size)
import time
class timedblock(object):
def __enter__(self):
self.tenter = time.time()
return self
def __exit__(self, type, value, traceback):
self.texit = time.time()
@property
def duration(self):
return self.texit - self.tenter
"""
Explanation: Designing a Python library for building prototypes around MinHash
This is very much work-in-progress. May be the software and or ideas presented with be the subject of a peer-reviewed or self-published write-up. For now the URL for this is: https://github.com/lgautier/mashing-pumpkins
MinHash in the context of biological sequenced was introduced by the Maryland Bioinformatics Lab [add reference here].
Building a MinHash is akin to taking a sample of all k-mers / n-grams found in a sequence and using that sample as a signature or sketch for that sequence.
A look at convenience vs performance
Moving Python code to C leads to performance improvement... sometimes.
Test sequence
First we need a test sequence. Generating a random one quickly can be achieved as follows, for example. If you already have you own way to generate a sequence, or your own benchmark sequence, the following code cell can be changed so as to end up with a variable sequence that is a bytes object containing it.
End of explanation
"""
from sourmash_lib._minhash import MinHash
SKETCH_SIZE = 5000
sequence_str = sequence.decode("utf-8")
with timedblock() as tb:
smh = MinHash(SKETCH_SIZE, 21)
smh.add_sequence(sequence_str)
t_sourmash = tb.duration
print("%.2f seconds / sequence" % t_sourmash)
"""
Explanation: Kicking the tires with sourmash
The executable sourmash is a nice package from the dib-lab, implemented in Python and including a library [add reference here]. Perfect for quickly trying out what MinHash sketches can do.
We will create a MinHash of maximum size 5000 (5000 elements, matching SKETCH_SIZE below) and of k-mer size 21 (all n-grams of length 21 across the input sequences will be considered for inclusion in the MinHash). At the time of writing this MinHash is implemented in C/C++, and we use it as a reference for speed as we measure the time it takes to process our reference sequence.
End of explanation
"""
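For intuition about what such a sketch is: a bottom-k MinHash keeps the k smallest hash values seen across all n-grams of the input. A minimal pure-Python sketch (using Python's built-in hash rather than MurmurHash3, so the values differ from sourmash's):

```python
import heapq

def bottom_k_sketch(sequence, k, nsize):
    # hash every n-gram and keep the k smallest distinct values
    hashes = {hash(sequence[i:i + nsize]) for i in range(len(sequence) - nsize + 1)}
    return heapq.nsmallest(k, hashes)

# b'ATGCATGCATGC' only has 4 distinct 3-mers: ATG, TGC, GCA, CAT
print(len(bottom_k_sketch(b"ATGCATGCATGC", 4, 3)))  # 4
```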
# make a hashing function to match our design
import mmh3
def hashfun(sequence, nsize, hbuffer, w=100):
n = min(len(hbuffer), len(sequence)-nsize+1)
for i in range(n):
ngram = sequence[i:(i+nsize)]
hbuffer[i] = mmh3.hash64(ngram)[0]
return n
from mashingpumpkins.minhashsketch import MinSketch
from array import array
with timedblock() as tb:
mhs = MinSketch(21, SKETCH_SIZE, hashfun, 42)
mhs.add(sequence, hashbuffer=array("q", [0,]*200))
t_basic = tb.duration
print("%.2f seconds / sequence" % (t_basic))
print("Our Python implementation is %.2f times slower." % (t_basic / t_sourmash))
"""
Explanation: This is awesome. The sketch for a bacteria-sized DNA sequence can be computed very quickly (about a second on my laptop).
Redesigning it all for convenience and flexibility
We have redesigned what a class could look like, and implemented that design in Python,
foremost for our own convenience and to match the claim of convenience. Now how bad is the impact on performance?
Our new design allows flexibility with respect to the hash function used, and to initially illustrate our point we use mmh3, an existing Python package wrapping MurmurHash3, the hashing function used in MASH and sourmash.
End of explanation
"""
from mashingpumpkins._murmurhash3 import hasharray
hashfun = hasharray
with timedblock() as tb:
hashbuffer = array('Q', [0, ] * 300)
mhs = MinSketch(21, SKETCH_SIZE, hashfun, 42)
mhs.add(sequence, hashbuffer=hashbuffer)
t_batch = tb.duration
print("%.2f seconds / sequence" % (t_batch))
print("Our Python implementation is %.2f times faster." % (t_sourmash / t_batch))
"""
Explanation: Ah. Our Python implementation only using mmh3 and the standard library is only a bit slower.
There is more to it though. The code in "mashingpumpkins" is doing more by keeping track of the k-mer/n-gram along with the hash value in order to allow the generation of inter-operable sketches [add reference to discussion on GitHub].
Our design computes batches of hash values each time C is reached for MurmurHash3. We have implemented the small C function required to call MurmurHash for several k-mers, and when using it we obtain interesting performance gains.
End of explanation
"""
from mashingpumpkins._murmurhash3 import hasharray
hashfun = hasharray
from array import array
trans_tbl = bytearray(256)
for x,y in zip(b'ATGC', b'TACG'):
trans_tbl[x] = y
def revcomp(sequence):
ba = bytearray(sequence)
ba.reverse()
ba = ba.translate(trans_tbl)
return ba
class MyMash(MinSketch):
    def add(self, seq, hashbuffer=array('Q', [0, ]*300)):
        # use the argument `seq` (not the global `sequence`) for both strands
        ba = revcomp(seq)
        if ba.find(0) >= 0:
            raise ValueError("Input sequence is not DNA")
        super().add(seq, hashbuffer=hashbuffer)
        super().add(ba, hashbuffer=hashbuffer)
with timedblock() as tb:
mhs = MyMash(21, SKETCH_SIZE, hashfun, 42)
mhs.add(sequence)
t_batch = tb.duration
print("%.2f seconds / sequence" % (t_batch))
print("Our Python implementation is %.2f times faster." % (t_sourmash / t_batch))
"""
Explanation: Wow!
At the time of writing this is between 1.5 and 2.5 times faster than C-implemented sourmash. And we are doing more work (we are keeping the ngrams / kmers associated with hash values).
We could modify our class to stop storing the associated k-mer (only keep the hash value) to see if it improves performance:
However, as it was pointed out, sourmash's minhash also checks that the sequence only uses letters from the DNA alphabet and computes the sketch for both the sequence and its reverse complement. We add these 2 operations (check and reverse complement) in a custom child class:
End of explanation
"""
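As an aside, the reverse complement above can also be written with bytes.maketrans, which keeps the whole operation in C (a sketch equivalent to the revcomp defined earlier):

```python
tbl = bytes.maketrans(b'ATGC', b'TACG')

def revcomp_bytes(seq):
    # complement each base, then reverse the sequence
    return seq.translate(tbl)[::-1]

print(revcomp_bytes(b'AACGT'))  # b'ACGTT'
```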
from mashingpumpkins import _murmurhash3_mash
def hashfun(sequence, nsize, buffer=array('Q', [0,]*300), seed=42):
return _murmurhash3_mash.hasharray_withrc(sequence, revcomp(sequence), nsize, buffer, seed)
with timedblock() as tb:
hashbuffer = array('Q', [0, ] * 300)
mhs = MinSketch(21, SKETCH_SIZE, hashfun, 42)
mhs.add(sequence)
t_batch = tb.duration
print("%.2f seconds / sequence" % (t_batch))
print("Our Python implementation is %.2f times faster." % (t_sourmash / t_batch))
"""
Explanation: Still pretty good, the code for the check is not particularly optimal (that's the kind of primitives that would go to C).
MASH quirks
Unfortunately this is not quite what MASH (which sourmash is based on) is doing. Tim highlighted what is happening: for every ngram and its reverse complement, the one with the lowest lexicographic order is picked for inclusion in the sketch.
Essentially, picking segment chunks depending on the lexicographic order of the chunk's direct sequence vs its reverse complement is a sampling/filtering strategy at the local level, applied before the hash value is considered for inclusion in the MinHash. The only possible reason for this could be that the hash value is expensive to compute (but this does not seem to be the case).
Anyway, writing a slightly modified batch C function that does that extra sampling/filtering is easy and lets us conserve our design. We can then implement a MASH-like sampling in literally one line:
End of explanation
"""
len(set(smh.get_mins()) ^ mhs._heapset)
"""
Explanation: So now the claim is that we are just like sourmash/MASH, but mostly in Python and faster.
We check that the sketches are identical, and they are:
End of explanation
"""
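With sketches in hand, the typical use is to estimate the Jaccard similarity between two sequences. A rough sketch of a MASH-style estimator (our own helper, not the library's API): compare the two hash sets within the k smallest values of their union:

```python
def jaccard_from_sketches(mins_a, mins_b):
    a, b = set(mins_a), set(mins_b)
    k = min(len(a), len(b))
    # restrict the comparison to the bottom-k of the union
    bottom_k = set(sorted(a | b)[:k])
    return len(bottom_k & a & b) / k

print(jaccard_from_sketches([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
print(jaccard_from_sketches([1, 2, 3, 4], [1, 2, 5, 6]))  # 0.5
```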
from mashingpumpkins.sequence import chunkpos_iter
import ctypes
import multiprocessing
from functools import reduce
import time
NSIZE = 21
SEED = 42
def build_mhs(args):
sketch_size, nsize, sequence = args
mhs = MinSketch(nsize, sketch_size, hashfun, SEED)
mhs.add(sequence)
return mhs
res_mp = []
for l_seq in (int(x) for x in (1E6, 5E6, 1E7, 5E7)):
sequence = make_rnd_sequence(l_seq)
for sketch_size in (1000, 5000, 10000):
sequence_str = sequence.decode("utf-8")
with timedblock() as tb:
smh = MinHash(sketch_size, 21)
smh.add_sequence(sequence_str)
t_sourmash = tb.duration
with timedblock() as tb:
ncpu = 2
p = multiprocessing.Pool(ncpu)
# map step (parallel in chunks)
result = p.imap_unordered(build_mhs,
((sketch_size, NSIZE, sequence[begin:end])
for begin, end in chunkpos_iter(NSIZE, l_seq, l_seq//ncpu)))
# reduce step (reducing as chunks are getting ready)
mhs_mp = reduce(lambda x, y: x+y, result, next(result))
p.terminate()
t_pbatch = tb.duration
res_mp.append((l_seq, t_pbatch, sketch_size, t_sourmash))
from rpy2.robjects.lib import dplyr, ggplot2 as ggp
from rpy2.robjects.vectors import IntVector, FloatVector, StrVector, BoolVector
from rpy2.robjects import Formula
dataf = dplyr.DataFrame({'l_seq': IntVector([x[0] for x in res_mp]),
'time': FloatVector([x[1] for x in res_mp]),
'sketch_size': IntVector([x[2] for x in res_mp]),
'ref_time': FloatVector([x[3] for x in res_mp])})
p = (ggp.ggplot(dataf) +
ggp.geom_line(ggp.aes_string(x='l_seq',
y='log2(ref_time/time)',
color='factor(sketch_size, ordered=TRUE)'),
size=3) +
ggp.scale_x_sqrt("sequence length") +
ggp.theme_gray(base_size=18) +
ggp.theme(legend_position="top",
axis_text_x = ggp.element_text(angle = 90, hjust = 1))
)
import rpy2.ipython.ggplot
rpy2.ipython.ggplot.image_png(p, width=1000, height=500)
"""
Explanation: Parallel processing
Now what about parallel processing ?
End of explanation
"""
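Splitting the sequence for the map step has one subtlety: chunks must overlap by nsize - 1 positions so that no k-mer spanning a chunk boundary is lost. A minimal stand-in for mashingpumpkins.sequence.chunkpos_iter (whose actual signature may differ) could look like:

```python
def chunk_positions(nsize, seq_len, chunk_len):
    # yield (begin, end) windows overlapping by nsize - 1
    begin = 0
    while begin < seq_len - (nsize - 1):
        end = min(begin + chunk_len, seq_len)
        yield begin, end
        begin = end - (nsize - 1)

print(list(chunk_positions(3, 10, 6)))  # [(0, 6), (4, 10)]
```

Every 3-mer of a length-10 sequence starts at positions 0-7; the first window covers starts 0-3 and the second covers 4-7, so nothing is dropped at the boundary.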
SEED = 42
def run_sourmash(sketchsize, sequence, nsize):
sequence_str = sequence.decode("utf-8")
with timedblock() as tb:
smh = MinHash(sketchsize, nsize)
smh.add_sequence(sequence_str)
return {'t': tb.duration,
'what': 'sourmash',
'keepngrams': False,
'l_sequence': len(sequence),
'bufsize': 0,
'nsize': nsize,
'sketchsize': sketchsize}
def run_mashingp(cls, bufsize, sketchsize, sequence, hashfun, nsize):
hashbuffer = array('Q', [0, ] * bufsize)
with timedblock() as tb:
mhs = cls(nsize, sketchsize, hashfun, SEED)
mhs.add(sequence, hashbuffer=hashbuffer)
keepngrams = True
return {'t': tb.duration,
'what': 'mashingpumpkins',
'keepngrams': keepngrams,
'l_sequence': len(sequence),
'bufsize': bufsize,
'nsize': nsize,
'sketchsize': sketchsize}
import gc
def run_mashingmp(cls, bufsize, sketchsize, sequence, hashfun, nsize):
with timedblock() as tb:
ncpu = 2
p = multiprocessing.Pool(ncpu)
l_seq = len(sequence)
result = p.imap_unordered(build_mhs,
((sketchsize, NSIZE, sequence[begin:end])
for begin, end in chunkpos_iter(nsize, l_seq, l_seq//ncpu))
)
# reduce step (reducing as chunks are getting ready)
mhs_mp = reduce(lambda x, y: x+y, result, next(result))
p.terminate()
return {'t': tb.duration,
'what': 'mashingpumpinks-2p',
'keepngrams': True,
'l_sequence': len(sequence),
'bufsize': bufsize,
'nsize': nsize,
'sketchsize': sketchsize}
from ipywidgets import FloatProgress
from IPython.display import display
res = list()
bufsize = 300
seqsizes = (5E5, 1E6, 5E6, 1E7)
sketchsizes = [int(x) for x in (5E3, 1E4, 5E4, 1E5)]
f = FloatProgress(min=0, max=len(seqsizes)*len(sketchsizes)*2)
display(f)
for seqsize in (int(s) for s in seqsizes):
env = dict()
sequencebitpacked = ssl.RAND_bytes(int(seqsize/4))
sequencen = bytearray(int(seqsize))
for i, b in zip(range(0, len(sequencen), 4), sequencebitpacked):
sequencen[i:(i+4)] = quad[b]
sequencen = bytes(sequencen)
for sketchsize in sketchsizes:
for nsize in (21, 31):
tmp = run_sourmash(sketchsize, sequencen, nsize)
tmp.update([('hashfun', 'murmurhash3')])
res.append(tmp)
for funname, hashfun in (('murmurhash3', hasharray),):
tmp = run_mashingp(MinSketch, bufsize, sketchsize, sequencen, hashfun, nsize)
tmp.update([('hashfun', funname)])
res.append(tmp)
tmp = run_mashingmp(MinSketch, bufsize, sketchsize, sequencen, hashfun, nsize)
tmp.update([('hashfun', funname)])
res.append(tmp)
f.value += 1
from rpy2.robjects.lib import dplyr, ggplot2 as ggp
from rpy2.robjects.vectors import IntVector, FloatVector, StrVector, BoolVector
from rpy2.robjects import Formula
d = dict((n, FloatVector([x[n] for x in res])) for n in ('t',))
d.update((n, StrVector([x[n] for x in res])) for n in ('what', 'hashfun'))
d.update((n, BoolVector([x[n] for x in res])) for n in ('keepngrams', ))
d.update((n, IntVector([x[n] for x in res])) for n in ('l_sequence', 'bufsize', 'sketchsize', 'nsize'))
dataf = dplyr.DataFrame(d)
p = (ggp.ggplot((dataf
.filter("hashfun != 'xxhash'")
.mutate(nsize='paste0("k=", nsize)',
implementation='paste0(what, ifelse(keepngrams, "(w/ kmers)", ""))'))) +
ggp.geom_line(ggp.aes_string(x='l_sequence',
y='l_sequence/t/1E6',
color='implementation',
group='paste(implementation, bufsize, nsize, keepngrams)'),
alpha=1) +
ggp.facet_grid(Formula('nsize~sketchsize')) +
ggp.scale_x_log10('sequence length') +
ggp.scale_y_continuous('MB/s') +
ggp.scale_color_brewer('Implementation', palette="Set1") +
ggp.theme_gray(base_size=18) +
ggp.theme(legend_position="top",
axis_text_x = ggp.element_text(angle = 90, hjust = 1))
)
import rpy2.ipython.ggplot
rpy2.ipython.ggplot.image_png(p, width=1000, height=500)
"""
Explanation: We have just made sourmash/MASH about 2 times faster... some of the time. Parallelization does not always bring a speedup (it depends on the size of the sketch and on the length of the sequence for which the sketch is built).
Scaling up
Now, how much time should it take to compute signatures for various references?
First we check quickly that the time is roughly proportional to the size of the reference:
End of explanation
"""
dataf_plot = (
dataf
.filter("hashfun != 'xxhash'")
.mutate(nsize='paste0("k=", nsize)',
implementation='paste0(what, ifelse(keepngrams, "(w/ kmers)", ""))')
)
dataf_plot2 = (dataf_plot.filter('implementation!="sourmash"')
.inner_join(
dataf_plot.filter('implementation=="sourmash"')
.select('t', 'nsize', 'sketchsize', 'l_sequence'),
by=StrVector(('nsize', 'sketchsize', 'l_sequence'))))
p = (ggp.ggplot(dataf_plot2) +
ggp.geom_line(ggp.aes_string(x='l_sequence',
y='log2(t.y/t.x)',
color='implementation',
group='paste(implementation, bufsize, nsize, keepngrams)'),
alpha=1) +
ggp.facet_grid(Formula('nsize~sketchsize')) +
ggp.scale_x_log10('sequence length') +
ggp.scale_y_continuous('log2(time ratio)') +
ggp.scale_color_brewer('Implementation', palette="Set1") +
ggp.theme_gray(base_size=18) +
ggp.theme(legend_position="top",
axis_text_x = ggp.element_text(angle = 90, hjust = 1))
)
import rpy2.ipython.ggplot
rpy2.ipython.ggplot.image_png(p, width=1000, height=500)
"""
Explanation: The rate (MB/s) at which a sequence is processed seems to depend strongly on the size of the input sequence for the mashingpumpkins implementation (suggesting a significant setup cost that is amortized as the sequence gets longer), and parallelization achieves a small boost in performance (with the size of the sketch apparently counteracting that small boost). Our implementation also appears to scale better with increasing sequence size (becoming relatively faster as the sequence grows).
Keeping the k-mers comes with a slight cost for the larger max_size values (not shown). Our Python implementation is otherwise holding up quite well. XXHash appears to give slightly faster processing rates in the best case, and makes no difference compared with MurmurHash3 in the other cases (not shown).
End of explanation
"""
seqsize = int(1E8)
print("generating sequence:")
f = FloatProgress(min=0, max=seqsize)
display(f)
sequencebitpacked = ssl.RAND_bytes(int(seqsize/4))
sequencen = bytearray(int(seqsize))
for i, b in zip(range(0, len(sequencen), 4), sequencebitpacked):
sequencen[i:(i+4)] = quad[b]
if i % int(1E4) == 0:
f.value += int(1E4)
f.value = i+4
sequencen = bytes(sequencen)
sketchsize = 20000
bufsize = 1000
nsize = 21
funname, hashfun = ('murmurhash3', hasharray)
tmp = run_mashingmp(MinSketch, bufsize, sketchsize, sequencen, hashfun, nsize)
print("%.2f seconds" % tmp['t'])
print("%.2f MB / second" % (tmp['l_sequence']/tmp['t']/1E6))
"""
Explanation: One can also observe that the performance dip for the largest max_size value recovers as the input sequence gets longer. We verify this with a 0.1 GB reference and max_size equal to 20,000.
End of explanation
"""
tmp_sm = run_sourmash(sketchsize, sequencen, nsize)
print("%.2f seconds" % tmp_sm['t'])
print("%.2f MB / second" % (tmp_sm['l_sequence']/tmp_sm['t']/1E6))
"""
Explanation: In comparison, this is what sourmash manages to achieve:
End of explanation
"""
|
tkurfurst/deep-learning | first-neural-network/dlnd-your-first-neural-network (revised).ipynb | mit | %matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
"""
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
"""
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
"""
rides[:24*10].plot(x='dteday', y='cnt')
"""
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership, and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
"""
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
"""
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
"""
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
"""
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
"""
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
"""
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
"""
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
"""
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
"""
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes)) #2x56
"""
print("Weights - Input->Hidden: ", self.weights_input_to_hidden.shape)
"""
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes)) #1x2
"""print("Weights - Hidden->Output: ", self.weights_hidden_to_output.shape)
"""
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
# TODO: Activation Function
# ADDED
#self.activation_function = sigmoid if defined as a new function
self.activation_function = lambda x: 1 / (1 + np.exp(-x))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T # 56x1
targets = np.array(targets_list, ndmin=2).T # 1x1
"""
print("Inputs: ", inputs.shape)
print("Targets: ", targets.shape)
"""
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
# ADDED
"""
hidden_inputs = inputs # signals into hidden layer
hidden_outputs = self.activation_function(np.dot(self.weights_input_to_hidden, inputs)) # signals from hidden layer
"""
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
"""
print(hidden_inputs.shape) #56x1
print(hidden_outputs.shape) #2x1
"""
# TODO: Output layer
# ADDED
# signals into final output layer
# ORIG
# final_inputs = hidden_outputs #2x1
# REVISED
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
# signals from final output layer
# NOTE: NO SIGMOID !!!!!!!!!!!!!!
# ORIG
# final_outputs = np.dot(self.weights_hidden_to_output, final_inputs) #1x1
# REVISED
final_outputs = final_inputs
"""
print(final_inputs.shape)
print(final_outputs.shape)
"""
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
# ADDED
output_errors = targets - final_outputs # 1x1 - Output layer error is the difference between desired target and actual output.
# error gradient for output layer
# NOTE: NO SIGMOID !!!!!!!!!!!!!!
# del_error_outputs = output_errors * final_outputs * (1 - final_outputs)
del_error_outputs = output_errors
#### CONTINUE ####
#### CONTINUE ####
#### CONTINUE ####
# TODO: Backpropagated error
# hidden layer gradients
# hidden_grad = output_errors * final_outputs * (1 - final_outputs) #1x1 * 1x1 = 1x1
# REVISED original never used
hidden_grad = hidden_outputs * (1.0 - hidden_outputs)
# errors propagated to the hidden layer
#ORIG
#hidden_errors = del_error_outputs * final_inputs * (1 - final_inputs) * self.weights_hidden_to_output.T #1x1 * 1x2 * 2x1 = 1
# REVISED
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)
"""
print(hidden_grad.shape)
print(hidden_errors.shape)
"""
# TODO: Update the weights
# ADDED
# update hidden-to-output weights with gradient descent step
# ORIG
#self.weights_hidden_to_output += self.lr * del_error_outputs * hidden_outputs.T #1x1 * 2x1 = 1x1
# REVISED
self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T)
# update input-to-hidden weights with gradient descent step
# ORIG
# self.weights_input_to_hidden += self.lr * hidden_errors * inputs.T
#REVISED
self.weights_input_to_hidden += self.lr * np.dot(hidden_errors * hidden_grad, inputs.T)
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
# ADDED (as above)
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer
# ADDED (as above)
# signals into final output layer
# ORIG
# final_inputs = hidden_outputs #2x1
# REVISED
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
# signals from final output layer
# NOTE: NO SIGMOID !!!!!!!!!!!!!!
# ORIGINAL
# final_outputs = np.dot(self.weights_hidden_to_output, final_inputs)
# REVISED
final_outputs = final_inputs
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
"""
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
"""
import sys
### TODO: Set the hyperparameters here ###
epochs = 4000
learning_rate = 0.1
hidden_nodes = 20
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
"""
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
"""
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
"""
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
print(network.run(inputs))
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
"""
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
The model predicts the data quite well based on the training and validation losses, with performance varying as a function of the number of training epochs, the learning rate, and the number of hidden units.
That said, the model predicts the last ten of the 21 test days poorly, probably due to the seasonal effect of the Christmas holidays, which is not accounted for in the features or the training data.
Increasing the number of hidden units has the most dramatic effect on improving the learning results, up to a point.
After trying a number of different combinations, I settled on the following to jointly optimize result quality and speed:
epochs = 400
learning rate = 0.1
hidden units = 20
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation
"""
|
dinrker/PredictiveModeling | Session 6 - Features_III_RandomProjections .ipynb | mit | from IPython.display import Image
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import time
%matplotlib inline
"""
Explanation: Goals of this Lesson
Random Projections for Dimensionality Reduction
References
Random Projections in Dimensionality Reduction
Dropout: A Simple Way to Prevent Neural Networks from Overfitting
SciKit-Learn's documentation on dimensionality reduction
0. Preliminaries
First we need to import Numpy, Pandas, MatPlotLib...
End of explanation
"""
### function for shuffling the data and labels
def shuffle_in_unison(features, labels):
rng_state = np.random.get_state()
np.random.shuffle(features)
np.random.set_state(rng_state)
np.random.shuffle(labels)
### calculate classification errors
# return a percentage: (number misclassified)/(total number of datapoints)
def calc_classification_error(predictions, class_labels):
n = predictions.size
num_of_errors = 0.
for idx in xrange(n):
if (predictions[idx] >= 0.5 and class_labels[idx]==0) or (predictions[idx] < 0.5 and class_labels[idx]==1):
num_of_errors += 1
return num_of_errors/n
"""
Explanation: Again we need functions for shuffling the data and calculating classification errrors.
End of explanation
"""
# load the 70,000 x 784 matrix
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
idxs_to_keep = []
for idx in xrange(mnist.data.shape[0]):
if mnist.target[idx] == 0 or mnist.target[idx] == 1: idxs_to_keep.append(idx)
mnist_x, mnist_y = (mnist.data[idxs_to_keep,:]/255., mnist.target[idxs_to_keep])
shuffle_in_unison(mnist_x, mnist_y)
print "Dataset size: %d x %d"%(mnist_x.shape)
# make a train / test split
x_train, x_test = (mnist_x[:10000,:], mnist_x[10000:,:])
y_train, y_test = (mnist_y[:10000], mnist_y[10000:])
# subplot containing first image
ax1 = plt.subplot(1,2,1)
digit = mnist_x[1,:]
ax1.imshow(np.reshape(digit, (28, 28)), cmap='Greys_r')
plt.show()
"""
Explanation: 0.1 Load the dataset of handwritten digits
We are going to use the MNIST dataset throughout this session. Let's load the data...
End of explanation
"""
from sklearn.random_projection import johnson_lindenstrauss_min_dim
johnson_lindenstrauss_min_dim(n_samples=x_train.shape[0], eps=0.9)
"""
Explanation: 1 Random Projections
We saw in the previous session that simply adding noise to the input of an Autoencoder improves its performance. Let's see how far we can stretch this idea. Can we simply multiply our data by a random matrix and reduce its dimensionality while still preserving its structure? Yes! The answer is provided in a famous result called the Johnson-Lindenstrauss Lemma, for $\epsilon < 1$:
$$ (1-\epsilon)\, \| \mathbf{x}_{i} - \mathbf{x}_{j} \|^{2} \le \| \mathbf{x}_{i}\mathbf{W} - \mathbf{x}_{j}\mathbf{W} \|^{2} \le (1+\epsilon)\, \| \mathbf{x}_{i} - \mathbf{x}_{j} \|^{2}, \text{ where } \mathbf{W} \text{ is a random matrix.} $$ In fact, Scikit-Learn has a built-in function that can tell you the minimum target dimensionality for a given $\epsilon$ and dataset.
End of explanation
"""
# set the random number generator for reproducability
np.random.seed(49)
# define the dimensionality of the hidden rep.
n_components = 200
# Randomly initialize the Weight matrix
W = np.random.normal(size=(x_train.shape[1], n_components), scale=1./x_train.shape[1])
train_red = np.dot(x_train, W)
test_red = np.dot(x_test, W)
print "Dataset is now of size: %d x %d"%(train_red.shape)
"""
Explanation: This is a nice function if we truly care about theoretical guarantees and about preserving distances, but in practice we can just see what works empirically. Let's next generate a random matrix...
End of explanation
"""
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(x_train, y_train)
preds = knn.predict(x_test)
knn_error_orig = calc_classification_error(preds, y_test) * 100
lr = LogisticRegression()
lr.fit(x_train, y_train)
preds = lr.predict(x_test)
lr_error_orig = calc_classification_error(preds, y_test) * 100
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(train_red, y_train)
preds = knn.predict(test_red)
knn_error_red = calc_classification_error(preds, y_test) * 100
lr = LogisticRegression()
lr.fit(train_red, y_train)
preds = lr.predict(test_red)
lr_error_red = calc_classification_error(preds, y_test) * 100
plt.bar([0,1,2,3], [knn_error_orig, lr_error_orig, knn_error_red, lr_error_red], color=['r','r','b','b'], align='center')
plt.xticks([0,1,2,3], ['kNN - OS', 'Log. Reg - OS', 'kNN - RP', 'Log. Reg. - RP'])
plt.ylim([0,5.])
plt.xlabel("Classifers and Features")
plt.ylabel("Classification Error")
plt.show()
"""
Explanation: Let's run kNN and logistic regression classifiers on both the original data and the projections...
End of explanation
"""
### TO DO
"""
Explanation: <span style="color:red">STUDENT ACTIVITY (until end of session)</span>
<span style="color:red">Subtask 1: Train an Autoencoder and PCA; Compare kNN Classifer on Compressed Representation</span>
End of explanation
"""
### TO DO
### Should see the graph trend downward, with classification error decreasing as dimensionality increases.
"""
Explanation: <span style="color:red">Subtask 2: For each model, plot classification error (y-axis) vs dimensionality of compression (x-axis).</span>
End of explanation
"""
|
fabriziocosta/GraphFinder | Functions_Fasta_Input_to_Structure_and_Graph_modifing...-submit.ipynb | gpl-2.0 | %matplotlib inline
import os, sys
import subprocess as sp
from itertools import cycle
import networkx as nx
import re
from eden.util import display
# read a fasta file separate the head and the sequence
def _readFastaFile(file_path=None):
head_start = '>'
head = []
seq = []
seq_temps = []
string_seq = ''
#for file in os.listdir(path): #open file
read_file = open(file_path,'r')
for line in read_file:
lines = list(line)
# the read line is the head of the sequence write it in head list
if lines[0] == head_start:
line = line.strip('\n')
line = line.strip(head_start)
head.append(line)
seq.append(string_seq)
seq_temps = []
# the read line is a sequence write it in a sequence list
# remove the unwanted characters, whitespace, and tabs
if lines[0] != head_start:
line = line.strip()
line = re.sub(r'\ .*?\ ', '', line)
seq_temps.append(line)
string_seq= ''.join(seq_temps)
print ('string_seq', string_seq)
string_seq = re.sub(r' ', '',string_seq)
seq.append(string_seq)
#to remove empty head or seq
seq = filter(None, seq)
head_seq_zip = zip(head, seq)
print ('Sequences with comments', head_seq_zip)
return head_seq_zip
file_path = "/home/alsheikm/GitDir/EeDN_work/fasta/test2"
def _sequeceWrapper(file_path=None):
#path = "/home/alsheikm/Work/EDeN_examples/fastaFiles/"
zip_head_seqs = _readFastaFile(file_path)
print file_path
return zip_head_seqs
def _fold(seq):
head, seq, struc = _get_sequence_structure(seq)
#G = self._make_graph(seq, struc)
return head, seq, struc
"""
Explanation: New tasks:
make a function/object that reads a fasta file from disk and yields (header, seq) pairs
ex from:
AB003409.1/96-167
GGGCCCAUAGCUCAGUGGUAGAGUGCCUCCUUUGCAAGGAGGAUGCCCUGGGUUCGAAUC comment
CCAGUGGGUCCA
AB009835.1/1-71
CAUUAGAUGACUGAAAGCAAGUACUGGUCUCUUAAACCAUUUAAUAGUAAAUacagugcCUU
CAUUAGAUGACUGAAAGCAAGUACUGGUCUCUUAAACCAUUUAAUAGUAAAUacagugcCUU
CAUUAGAUGACUGAAAGCAAGUACUGGUCUCUUAAACCAUUUAAUAGUAAAUacagugcCUU
CAUUAGAUGACUGAAAGCAAGUACUGGUCUCUUAAACCAUUUAAUAGUAAAUacagugcCUU
AJGHDJHASGDJAS khsk skdjfhskdj slkshd skhksjdf
CACGUAGCAUGCUAGCAUGCUAGCAUGCUAGCUAGCUGAC 276512764523765423764527365427365427542735427
CAUCGUAGCUAGCUAGCUAGCUACG
AUCGUAGUAGCUAGCUAGCUAGCUAGC
yield:
(AB003409.1/96-167, GGGCCCAUAGCUCAGUGGUAGAGUGCCUCCUUUGCAAGGAGGAUGCCCUGGGUUCGAAUCCCAGUGGGUCCA)
(AB009835.1/1-71,CAUUAGAUGACUGAAAGCAAGUACUGGUCUCUUAAACCAUUUAAUAGUAAAUacagugcCUUCAUUAGAUGACUGAAAGCAAGUACUGGUCUCUUAAACCAUUUAAUAGUAAAUacagugcCUUCAUUAGAUGACUGAAAGCAAGUACUGGUCUCUUAAACCAUUUAAUAGUAAAUacagugcCUUCAUUAGAUGACUGAAAGCAAGUACUGGUCUCUUAAACCAUUUAAUAGUAAAUacagugcCUU)
(AJGHDJHASGDJAS khsk skdjfhskdj slkshd skhksjdf, CACGUAGCAUGCUAGCAUGCUAGCAUGCUAGCUAGCUGACCAUCGUAGCUAGCUAGCUAGCUACGAUCGUAGUAGCUAGCUAGCUAGCUAGC)
separately:
make a function that receives in input the list of sequences, and yields structure graphs (use RNAfold)
End of explanation
"""
#call RNAfold to get the sequence structure
def _get_sequence_structure(seqs):
if mode == 'RNAfold':
return _rnafold_wrapper(seqs)
else:
raise Exception('Not known: %s' % mode)
def _rnafold_wrapper(sequence):
head = sequence[0]
seq = sequence[1].split()[0]
flags='--noPS'
cmd = 'echo "%s" | RNAfold %s' % (seq, flags)
out = sp.check_output(cmd, shell=True)
#print out
text = out.strip().split('\n')
print ('text:', text)
seq = text[0]
struc = text[1].split()[0]
return head, seq, struc
"""
Explanation: Get the sequence structure
End of explanation
"""
#Recognize basepairs and add them to the generated graph
def _make_graph(head, seq, struc):
print ("Graph title", head)
open_pran = "("
close_pran = ")"
stack_o = []
stack_c = []
G = nx.Graph()
seq_struc_zip = zip(seq, struc)
#print seq_struc_zip
for i, k in enumerate(struc):
G.add_node(i, label = seq[i])
# connect with the next node
if i > 0:
G.add_edge(i-1, i, label= 'x')
# find basepair and connect them
if struc[i] == open_pran:
j = i
stack_o.append(struc[j])
open_len = len(stack_o)
if struc[i] == close_pran:
stack_c.append(struc[i])
stack_o.pop()
G.add_edge(i, j, label = 'b')
j = j-1
return G
"""
Explanation: Build the Graph
End of explanation
"""
#generating the graph
#seq,seqs are Not correct they do Not take the zipped output
zip_head_seqs= _sequeceWrapper(file_path)
print ('zip_head_seqs here', zip_head_seqs)
for i, seq in enumerate(zip_head_seqs):
heads = seq[0]
seq1 = seq[1]
mode = 'RNAfold'
head, seq, struc =_fold(seq)
G = _make_graph(head, seq, struc)
display.draw_graph(G, node_size=180, font_size=9, node_border=True, prog='neato')
"""
Explanation: Experiment
End of explanation
"""
|
jgarciab/wwd2017 | class2/hw_2.ipynb | gpl-3.0 | sns.jointplot?
##Some code to run at the beginning of the file, to be able to show images in the notebook
##Don't worry about this cell but run it
#Print the plots in this screen
%matplotlib inline
#Be able to plot images saved in the hard drive
from IPython.display import Image,display
#Make the notebook wider
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
"""
Explanation: Homework 2
Due date: Thursday 16th 23:59
Write your own code in the blanks. It is okay to collaborate with other students, but both students must write their own code and write the name of the other student in this cell. In case you adapt code from other sources you also must give that user credit (a comment with the link to the source suffices)
Complete the blanks, adding comments to explain what you are doing
Each plot must have labels
Collaborated with:
End of explanation
"""
#Create a dictionary of names and sizes using the following two lists (tip: use dict() and zip())
names = ["Alice","Bob","Chris","Dylan","Esther","Fred","Greg"]
heights = [1.74,1.9,1.6,1.8,1.6,1.8,1.7]
d_names2heights =
#Get the heights of "Alice" and "Bob"
#Add the height of "Holly" (1.98)
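A possible completion (one solution among many, shown only as a sketch — the homework asks you to write your own):

```python
names = ["Alice", "Bob", "Chris", "Dylan", "Esther", "Fred", "Greg"]
heights = [1.74, 1.9, 1.6, 1.8, 1.6, 1.8, 1.7]

# zip pairs each name with its height; dict turns the pairs into a mapping
d_names2heights = dict(zip(names, heights))

# look up individual heights by name
print(d_names2heights["Alice"], d_names2heights["Bob"])

# adding a new entry is a plain assignment
d_names2heights["Holly"] = 1.98
```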
"""
Explanation: Assignment 1 (ungraded but important). Read some tutorials
Please play with the these ones (you should have them in the class1 folder if you downloaded them during the hw1):
- Chapter 4 - Find out on which weekday people bike the most with groupby and aggregate.ipynb
- Chapter 5 - Combining dataframes and scraping Canadian weather data.ipynb
- Chapter 6 - String Operations- Which month was the snowiest.ipynb
- Chapter 7 - Cleaning up messy data.ipynb
Read about outliers:
http://www.theanalysisfactor.com/outliers-to-drop-or-not-to-drop/
Assignment 2
Create a dictionary of names and sizes using two lists
Get the heights of "Alice" and "Bob"
Add the height of "Holly"
End of explanation
"""
## Our own functions
def mean_ours(list_numbers): #list_numbers is the arguments
"""
This is called the docstring, it is a comment describing the function. In this case the function calculates the mean of a list of numbers.
input
list_numbers: a list of numbers
output: the mean of the input
"""
#what gives back
m_list = sum(list_numbers)/len(list_numbers)
return m_list
m_list = mean_ours([1,3,2])
print(m_list)
#define the function, with the arguments filename and column
def mean_column_stata(filename,column):
"""
input:
filename: the filename of a stata file to read
column: the column name to calculate the mean
output: the mean of the values in that column
"""
#read stata file
df =
#keep the values of the column
values_column =
#calculate the mean of that column using mean_ours
mean_column =
#return that mean
return
mean = mean_column_stata(filename="data/alcohol.dta",column="income")
print(mean)
mean = mean_column_stata(filename="data/alcohol2.dta",column="income")
print(mean)
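The pattern the blanks are asking for looks roughly like this (a sketch, not the official solution — it assumes pandas is imported as `pd`; a small in-memory DataFrame stands in for the Stata file, which you would load with `pd.read_stata(filename)`):

```python
import pandas as pd

def mean_ours(list_numbers):
    return sum(list_numbers) / len(list_numbers)

def mean_column_df(df, column):
    """Same idea as mean_column_stata, but taking an already-loaded DataFrame."""
    values_column = list(df[column])   # keep the values of the column
    return mean_ours(values_column)    # reuse our own mean function

# stand-in for pd.read_stata("data/alcohol.dta")
df = pd.DataFrame({"income": [600.0, 650.0, 700.0]})
print(mean_column_df(df, "income"))  # 650.0
```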
"""
Explanation: Assignment 3
Create your own function called "mean_column_stata" of with the following inputs and outputs
```
input:
filename: the filename of a stata file to read
column: the column name to calculate the mean
output: the mean of the values in that column```
Use as the name of the function mean_column_stata()
This function must use our own function (mean_ours())
Use this function to calculate the mean of the "income" column in the dataset "data/alcohol.dta"
i.e. mean_column_stata(filename="data/alcohol.dta",column="income") must return 649.528
End of explanation
"""
#read the two dataframes
#print their heads
#merge the two files using as an argument how="outer" (this will keep missing values)
#Create a new variable "fraction_green" measuring the density of green area (m^2 green)/(m^2 city)
#Sort the dataset by "fraction_green" (.sort_values(by= ,ascending=)) to find the city with the largest fraction of green area.
#Sort the dataset by "fraction_green" (.sort_values(by= ,ascending=)) to find the city with the lowest fraction of green area. Use the argument "na_position" to get rid of the na values.
#Make a column with the country code
#Keep the metropolitan areas in the Netherlands and Italy
#Make a plot (choose the appropriate type) of year vs "fraction_green", colouring by country
#Make a jointplot of "GREEN_AREA_PC" vs "fraction_green" in Italy, of the type hex
#Plot the distribution of "fraction_green" for both countries
"""
Explanation: Assignment 4
Read the two dataframes: "data/green_area_pc.csv" and "data/pop_dens.csv"
Check their heads
Merge them using as an argument how="outer" (this will keep missing values)
Create a new variable measuring the density of green area (m^2 green)/(m^2 city).
Sort the dataset by this new variable (.sort_values(by= ,ascending=)) to find the city with the largest fraction of green area.
Sort the dataset by "fraction_green" (.sort_values(by= ,ascending=)) to find the city with the lowest fraction of green area. Use the argument "na_position". To see its possible values run: df.sort_values?
Make a column with the country code
Keep the metropolitan areas in the Netherlands and Italy
Make a plot (choose the appropriate type) of "GREEN_AREA_PC" vs "fraction_green", colouring by country
Make a jointplot of "GREEN_AREA_PC" vs "fraction_green" in Italy, of the type hex
Plot the distribution of "fraction_green" for both countries
End of explanation
"""
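The `na_position` argument mentioned above controls where missing values end up after sorting; a toy illustration (made-up numbers, not the OECD data):

```python
import pandas as pd
import numpy as np

toy = pd.DataFrame({"city": ["A", "B", "C"],
                    "fraction_green": [0.2, np.nan, 0.05]})

# ascending sort, with NaNs pushed to the front instead of the default end
print(toy.sort_values(by="fraction_green", ascending=True, na_position="first"))
```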
def read_our_csv():
#reading the raw data from oecd
df = pd.read_csv("data/CITIES_19122016195113034.csv",sep="\t")
#fixing the columns (the first one is ""METRO_ID"" instead of "METRO_ID")
cols = list(df.columns)
cols[0] = "METRO_ID"
df.columns = cols
#pivot the table
column_with_values = "Value"
column_to_split = ["VAR"]
variables_already_present = ["METRO_ID","Metropolitan areas","Year"]
df_fixed = df.pivot_table(column_with_values,
variables_already_present,
column_to_split).reset_index()
return df_fixed
#read the dataset on cities using read_our_csv()
df = read_our_csv()
#Make a column with the country code
#Keep the metropolitan areas in the following countries
keep_countries = ['UK', 'AU', 'IT', 'NL', 'FR', 'PL', 'GR', 'HU', 'PO', 'PT', 'AT', 'DK', 'NO', 'EE', 'CZ', 'IE', 'DE', 'FI', 'ES', 'SW', 'CH', 'SK', 'US', 'GB', 'BE']
d_country2region = {"UK" : "Anglosphere",
"AU": "Anglosphere",
...}
#Keep the years 2000 and 2010
#Create a dictionary matching those countries to a region: "Anglosphere", "South_EU", "Eastern_EU", "Scandinavia","Rest_EU"
#Make a column with the region of the country
#Make a plot visualizing the distribution of "GDP_PC" for each region. The plot must show the variability within region
#Make a factorplot visualizing "UNEMP_R" vs "Year" for each region. Use ci=10 for the confidence intervals.
df.drop_duplicates("Metropolitan areas").loc[:,["Metropolitan areas","CO2_PC","GREEN_AREA_PC"]].head(10)
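Mapping country codes to regions (the step sketched by `d_country2region` above) can be done with a plain dict and pandas' `.map`; an illustrative toy version with made-up rows:

```python
import pandas as pd

toy = pd.DataFrame({"country": ["UK", "IT", "NO", "IT"]})
d_country2region = {"UK": "Anglosphere", "IT": "South_EU", "NO": "Scandinavia"}

# .map looks every value up in the dict (unmatched values become NaN)
toy["region"] = toy["country"].map(d_country2region)
print(toy)
```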
"""
Explanation: What are the plots telling us?
answer here
Assignment 5
Read the dataset on cities using read_our_csv() (don't forget to run it first)
Make a column with the country code
Keep the metropolitan areas in the following countries
keep_countries = ['UK', 'AU', 'IT', 'NL', 'FR', 'PL', 'GR', 'HU', 'PO', 'PT', 'AT', 'DK', 'NO','CZ', 'IE', 'DE', 'FI', 'ES', 'SW', 'CH', 'SK', 'US', 'GB', 'BE']
Keep the years 2000 and 2010
Create a dictionary matching those countries to a region: "Anglosphere", "South_EU", "Eastern_EU", "Scandinavia","Rest_EU"
Make a column with the region of the country
Make a plot visualizing "GDP_PC" vs "Year" for each region. The plot must show the variability within region
Make a factorplot visualizing "UNEMP_R" vs "Year" for each region. Use ci=10 for the confidence intervals.
End of explanation
"""
def plot_group(cities,data):
"""
plots the labour productivity vs unemployment as a function of the year
input
cities: list of cities
data: dataframe with the columns "LABOUR_PRODUCTIVITY","UNEMP_R","Year"
"""
cmaps = ["Blues","Greens","Reds","Oranges","Purples"]
for i,city in enumerate(cities):
#keep the rows where the metropolitan areas == city
gr = data.loc[data["Metropolitan areas"]==city]
#make the plot
plt.plot(gr["LABOUR_PRODUCTIVITY"],gr["UNEMP_R"],color="gray")
plt.scatter(gr["LABOUR_PRODUCTIVITY"],gr["UNEMP_R"],c=gr["Year"],edgecolor="none",cmap=cmaps[i])
#Add labels
plt.colorbar()
#show the plot
plt.show()
#use read_our_csv() to read the csv
df = read_our_csv()
#keep the years greater than 2004 and save it with the name data
data = df.loc[df["Year"]>2004]
plot_group(["Valencia","Madrid","Barcelona","Zaragoza","Las Palmas"],data)
plot_group(["Rome","Bari","Genova","Turin"],data)
"""
Explanation: What are the plots telling us?
answer here
Assignment 6
We want to plot labour productivity vs unemployment as a function of the year. For this we are using a custom scatter plot where the color indicates the year.
- Complete the cell starting with #use read_our_csv() to read the csv.
- Complete the cell starting with def plot_group(cities,data):
Then answer the following questions
- Is there any relationship between unemployment and productivity?
- Can you think of any circumstances where high unemployment causes high productivity next year?
- Can you think of any circumstances where low unemployment causes high productivity next year?
- Can you think of any circumstances where high productivity causes high unemployment next year?
- Can you think of any circumstances where low productivity causes high unemployment next year?
- Can you think of any way to distinguish the four alternatives? (don't dwell too much on this question)
- Can you think of any reason for the differences between the Spanish and Italian cities?
End of explanation
"""
display(Image(url="http://www.datavis.ca/gallery/images/nasa01.gif"))
"""
Explanation: Assignment 7: Data visualization
Explain what you think is wrong with the following figure. Would you use a different type of visualization? There are many correct ways to answer and I'm not looking for a perfect critique since we haven't covered data visualization theory yet.
Edit the cell below with the answer
This figure is interesting: based on it, NASA decided that there was no relationship between temperature and the failure of one of the components of the Challenger. There were also 17 launches with no incidents between 65 and 75 degrees. Because of this "no relationship", they decided to launch even when the temperature was 31F (0C); the component failed and the shuttle exploded: https://en.wikipedia.org/wiki/Space_Shuttle_Challenger
- What do you think this plot shows?
answer here
- What is wrong with the message it gives? (tip: this is all the data https://tamino.files.wordpress.com/2011/12/rogersfig7.jpg)
answer here
- What is wrong (or can be improved) with the type of plot?
answer here
End of explanation
"""
|
pycam/python-basic | live/python_basic_1_2_live.ipynb | unlicense | # how to print?
# but first things first. This is a comment in my code, using # (hash symbol) at the beginning of the line!
# don't forget to comment your code, it is important!
print('hello! my name is Anne.')
# how to use variable?
# if I want to print something multiple times, for example
my_name = 'Anne'
print('hi', my_name)
?print
print('hi', my_name, sep=',')
# Everyone's happy? Questions?
"""
Explanation: Quick recap
Printing things and using variables in python 3
End of explanation
"""
# four simple data types: integers, floats, booleans, and string of characters
# strings
my_name = 'APajon' # is a string
print(my_name)
# you can also check its type
type(my_name)
# you can use different quotation ' " """
my_name = 'Anne'
my_family_name = "Pajon"
my_address = """11 Dream Street
Blue planet
"""
print(my_name, my_family_name)
print(my_address)
#some_text = 'my name's Anne'  # SyntaxError: the apostrophe ends the string early
some_text = "my name's Anne"  # fix: use double quotes (or escape with \')
# concatenate strings together
print(my_name+my_family_name)
print(my_name+10)
print('23'+'5')
# integers
i = 2
j = 5
i+j
type(i)
my_age = '23'
print(type(my_age))
my_age_in_10_years = int(my_age) + 10
print(my_age_in_10_years)
# floats
x = 3.2
x*i
x/5
y = 2.4e3
print(y)
type(y)
i = 2
print(i)
i = float(i)
print(i)
# booleans
print(True)
print(False)
type(True)
?type
?print
print(i, j, y, sep=',')
print(i, j, y, sep='\t')
# undefined
empty = None
print(empty)
# basic arithmetic
x = 3.2
y = 2
x+y
x-y
x*y
x/y
# be careful with division if you are using python 2!
2/3
2.0/3
float(2)/3
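Related to the division pitfall above: Python 3 also keeps a separate floor-division operator, which is worth knowing when porting Python 2 code (a short illustrative aside):

```python
# "/" is always true division in Python 3; "//" floors the result
print(7 / 2)    # 3.5
print(7 // 2)   # 3
print(-7 // 2)  # -4  (floors toward negative infinity, not toward zero)
```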
# counting things
i = 1
print(i)
# do something first time
i = i + 1
print(i)
# do something else second time
i = i + 1
print(i)
# third time
# there is a shortcut for this notation
i += 1
print(i)
print(i)
i *= 2
print(i)
i -= 4
print(i)
"""
Explanation: Session 1.2
Simple data types, basic arithmetic and saving code in files
End of explanation
"""
# do this exercise in a new notebook
# 1. download the python file and run it on the command line
# 2. python --version
# 3. add comment and change print statement
# 4. rerun
cristian_age = 56
anne_age = 45
presenters_avg_age = (cristian_age + anne_age) / 2
print(presenters_avg_age)
"""
Explanation: Saving code in files
show how to download python file from jupyter notebook and run it on command line
show how to modify it in gedit or any other text editor
solve these two exercises in separate notebooks to practice downloading them and running them on the command line
Exercises 1.2.1
calculate the mean of these two variables
End of explanation
"""
|
tensorflow/workshops | extras/amld/notebooks/exercises/2_keras.ipynb | apache-2.0 | # In Jupyter, you would need to install TF 2.0 via !pip.
%tensorflow_version 2.x
import tensorflow as tf
import json, os
# Tested with TensorFlow 2.1.0
print('version={}, CUDA={}, GPU={}, TPU={}'.format(
tf.__version__, tf.test.is_built_with_cuda(),
# GPU attached?
len(tf.config.list_physical_devices('GPU')) > 0,
# TPU accessible? (only works on Colab)
'COLAB_TPU_ADDR' in os.environ))
"""
Explanation: Using tf.keras
This Colab is about how to use Keras to define and train simple models on the data generated in the last Colab 1_data.ipynb
End of explanation
"""
# Load data from Drive (Colab only).
data_path = '/content/gdrive/My Drive/amld_data/zoo_img'
# Or, you can load data from different sources, such as:
# From your local machine:
# data_path = './amld_data'
# Or use a prepared dataset from Cloud (Colab only).
# - 50k training examples, including pickled DataFrame.
# data_path = 'gs://amld-datasets/zoo_img_small'
# - 1M training examples, without pickled DataFrame.
# data_path = 'gs://amld-datasets/zoo_img'
# - 4.1M training examples, without pickled DataFrame.
# data_path = 'gs://amld-datasets/animals_img'
# - 29M training examples, without pickled DataFrame.
# data_path = 'gs://amld-datasets/all_img'
# Store models on Drive (Colab only).
models_path = '/content/gdrive/My Drive/amld_data/models'
# Or, store models to local machine.
# models_path = './amld_models'
if data_path.startswith('/content/gdrive/'):
from google.colab import drive
drive.mount('/content/gdrive')
if data_path.startswith('gs://'):
from google.colab import auth
auth.authenticate_user()
!gsutil ls -lh "$data_path"
else:
!sleep 1 # wait a bit for the mount to become ready
!ls -lh "$data_path"
labels = [label.strip() for label
in tf.io.gfile.GFile('{}/labels.txt'.format(data_path))]
print('All labels in the dataset:', ' '.join(labels))
counts = json.load(tf.io.gfile.GFile('{}/counts.json'.format(data_path)))
print('Splits sizes:', counts)
# This dictionary specifies what "features" we want to extract from the
# tf.train.Example protos (i.e. what they look like on disk). We only
# need the image data "img_64" and the "label". Both features are tensors
# with a fixed length.
# You need to specify the correct "shape" and "dtype" parameters for
# these features.
feature_spec = {
# Single label per example => shape=[1] (we could also use shape=() and
# then do a transformation in the input_fn).
'label': tf.io.FixedLenFeature(shape=[1], dtype=tf.int64),
# The bytes_list data is parsed into tf.string.
'img_64': tf.io.FixedLenFeature(shape=[64, 64], dtype=tf.int64),
}
def parse_example(serialized_example):
# Convert string to tf.train.Example and then extract features/label.
features = tf.io.parse_single_example(serialized_example, feature_spec)
label = features['label']
label = tf.one_hot(tf.squeeze(label), len(labels))
features['img_64'] = tf.cast(features['img_64'], tf.float32) / 255.
return features['img_64'], label
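`tf.one_hot` above turns the integer class id into a vector with a single 1; the same idea in plain NumPy, for illustration (hypothetical helper, not used by the pipeline):

```python
import numpy as np

def one_hot(index, depth):
    """Return a float vector of length `depth` with a 1.0 at `index`."""
    vec = np.zeros(depth, dtype=np.float32)
    vec[index] = 1.0
    return vec

print(one_hot(2, 5))  # [0. 0. 1. 0. 0.]
```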
batch_size = 100
steps_per_epoch = counts['train'] // batch_size
eval_steps_per_epoch = counts['eval'] // batch_size
# Create datasets from TFRecord files.
train_ds = tf.data.TFRecordDataset(tf.io.gfile.glob(
'{}/train-*'.format(data_path)))
train_ds = train_ds.map(parse_example)
train_ds = train_ds.batch(batch_size).repeat()
eval_ds = tf.data.TFRecordDataset(tf.io.gfile.glob(
'{}/eval-*'.format(data_path)))
eval_ds = eval_ds.map(parse_example)
eval_ds = eval_ds.batch(batch_size)
# Read a single batch of examples from the training set and display shapes.
for img_feature, label in train_ds:
break
print('img_feature.shape (batch_size, image_height, image_width) =',
img_feature.shape)
print('label.shape (batch_size, number_of_labels) =', label.shape)
# Visualize some examples from the training set.
from matplotlib import pyplot as plt
def show_img(img_64, title='', ax=None):
  """Displays an image.

  Args:
    img_64: Array (or Tensor) with monochrome image data.
    title: Optional title.
    ax: Optional Matplotlib axes to show the image in.
  """
  # Convert Tensors to NumPy *before* reshaping/plotting.
  if isinstance(img_64, tf.Tensor):
    img_64 = img_64.numpy()
  (ax if ax else plt).matshow(img_64.reshape((64, -1)), cmap='gray')
  ax = ax if ax else plt.gca()
  ax.set_xticks([])
  ax.set_yticks([])
  ax.set_title(title)
rows, cols = 3, 5
for img_feature, label in train_ds:
break
_, axs = plt.subplots(rows, cols, figsize=(2*cols, 2*rows))
for i in range(rows):
  for j in range(cols):
    # index the batch row-by-row: `cols` entries per row, so use i*cols+j
    show_img(img_feature[i*cols+j].numpy(),
             title=labels[label[i*cols+j].numpy().argmax()], ax=axs[i][j])
"""
Explanation: Attention: Please avoid using the TPU runtime (TPU=True) for now. The notebook contains an optional part on TPU usage at the end if you're interested. You can change the runtime via: "Runtime > Change runtime type > Hardware Accelerator" in Colab.
Data from Protobufs
End of explanation
"""
# Sample linear model.
linear_model = tf.keras.Sequential()
linear_model.add(tf.keras.layers.Flatten(input_shape=(64, 64,)))
linear_model.add(tf.keras.layers.Dense(len(labels), activation='softmax'))
# "adam, categorical_crossentropy, accuracy" and other string constants can be
# found at https://keras.io.
linear_model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy', tf.keras.metrics.categorical_accuracy])
linear_model.summary()
linear_model.fit(train_ds,
validation_data=eval_ds,
steps_per_epoch=steps_per_epoch,
validation_steps=eval_steps_per_epoch,
epochs=1,
verbose=True)
"""
Explanation: Linear model
End of explanation
"""
# Let's define a convolutional model:
conv_model = tf.keras.Sequential([
tf.keras.layers.Reshape(target_shape=(64, 64, 1), input_shape=(64, 64)),
tf.keras.layers.Conv2D(filters=32,
kernel_size=(10, 10),
padding='same',
activation='relu'),
tf.keras.layers.Conv2D(filters=32,
kernel_size=(10, 10),
padding='same',
activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(4, 4), strides=(4,4)),
tf.keras.layers.Conv2D(filters=64,
kernel_size=(5, 5),
padding='same',
activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(4, 4), strides=(4,4)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(256, activation='relu'),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Dense(len(labels), activation='softmax'),
])
# YOUR ACTION REQUIRED:
# Compile + print summary of the model (analogous to the linear model above).
# YOUR ACTION REQUIRED:
# Train the model (analogous to linear model above).
# Note: You might want to reduce the number of steps if it takes too long.
# Pro tip: Change the runtime type ("Runtime" menu) to GPU! After the change you
# will need to rerun the cells above because the Python kernel's state is reset.
"""
Explanation: Convolutional model
End of explanation
"""
tf.io.gfile.makedirs(models_path)
# Save model as Keras model.
keras_path = os.path.join(models_path, 'linear.h5')
linear_model.save(keras_path)
# Keras model is a single file.
!ls -hl "$keras_path"
# Load Keras model.
loaded_keras_model = tf.keras.models.load_model(keras_path)
loaded_keras_model.summary()
# Save model as Tensorflow Saved Model.
saved_model_path = os.path.join(models_path, 'saved_model/linear')
linear_model.save(saved_model_path, save_format='tf')
# Inspect saved model directory structure.
!find "$saved_model_path"
saved_model = tf.keras.models.load_model(saved_model_path)
saved_model.summary()
# YOUR ACTION REQUIRED:
# Store the convolutional model and any additional models that you trained
# in the previous sections in Keras format so we can use them in later
# notebooks for prediction.
"""
Explanation: Store model
End of explanation
"""
import collections
Mistake = collections.namedtuple('Mistake', 'label pred img_64')
mistakes = []
eval_ds_iter = iter(eval_ds)
for img_64_batch, label_onehot_batch in eval_ds_iter:
break
img_64_batch.shape, label_onehot_batch.shape
# YOUR ACTION REQUIRED:
# Use model.predict() to get a batch of predictions.
preds =
# Iterate through the batch:
for label_onehot, pred, img_64 in zip(label_onehot_batch, preds, img_64_batch):
# YOUR ACTION REQUIRED:
# Both `label_onehot` and pred are vectors with length=len(labels), with every
# element corresponding to a probability of the corresponding class in
# `labels`. Get the value with the highest value to get the index within
# `labels`.
label_i =
pred_i =
if label_i != pred_i:
mistakes.append(Mistake(label_i, pred_i, img_64.numpy()))
# You can run this and above 2 cells multiple times to get more mistakes.
len(mistakes)
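A possible way to fill the blanks above (a sketch only — the exercise intends you to write it yourself): `model.predict()` supplies `preds`, and `argmax` picks the index of the largest probability in each vector. The argmax idea itself can be checked without TensorFlow:

```python
import numpy as np

# In the notebook, preds = conv_model.predict(img_64_batch) would fill `preds`;
# here two made-up vectors stand in for one label/prediction pair.
label_onehot = np.array([0., 0., 1., 0.])   # true class: index 2
pred = np.array([0.1, 0.2, 0.25, 0.45])     # predicted class: index 3

label_i = label_onehot.argmax()
pred_i = pred.argmax()
print(label_i, pred_i, label_i != pred_i)  # 2 3 True -> a classification mistake
```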
# Let's examine the cases when our model gets it wrong. Would you recognize
# these images correctly?
# YOUR ACTION REQUIRED:
# Run above cell but using a different model to get a different set of
# classification mistakes. Then copy over this cell to plot the mistakes for
# comparison purposes. Can you spot a pattern?
rows, cols = 5, 5
plt.figure(figsize=(cols*2.5, rows*2.5))
for i, mistake in enumerate(mistakes[:rows*cols]):
ax = plt.subplot(rows, cols, i + 1)
title = '{}? {}!'.format(labels[mistake.pred], labels[mistake.label])
show_img(mistake.img_64, title, ax)
"""
Explanation: ----- Optional part -----
Learn from errors
Looking at classification mistakes is a great way to better understand how a model is performing. This section walks you through the necessary steps to load some examples from the dataset, make predictions, and plot the mistakes.
End of explanation
"""
# Note: used memory BEFORE loading the DataFrame.
!free -h
# Loading all the data in memory takes a while (~40s).
import pickle
df = pickle.load(tf.io.gfile.GFile('%s/dataframe.pkl' % data_path, mode='rb'))
print(len(df))
print(df.columns)
df_train = df[df.split == b'train']
len(df_train)
# Note: used memory AFTER loading the DataFrame.
!free -h
# Show some images from the dataset.
from matplotlib import pyplot as plt
def show_img(img_64, title='', ax=None):
(ax if ax else plt).matshow(img_64.reshape((64, -1)), cmap='gray')
ax = ax if ax else plt.gca()
ax.set_xticks([])
ax.set_yticks([])
ax.set_title(title)
rows, cols = 3, 3
_, axs = plt.subplots(rows, cols, figsize=(2*cols, 2*rows))
for i in range(rows):
for j in range(cols):
d = df.sample(1).iloc[0]
show_img(d.img_64, title=labels[d.label], ax=axs[i][j])
df_x = tf.convert_to_tensor(df_train.img_64, dtype=tf.float32)
df_y = tf.one_hot(df_train.label, depth=len(labels), dtype=tf.float32)
# Note: used memory AFTER defining the Tenors based on the DataFrame.
!free -h
# Checkout the shape of these rather large tensors.
df_x.shape, df_x.dtype, df_y.shape, df_y.dtype
# Copied code from section "Linear model" above.
linear_model = tf.keras.Sequential()
linear_model.add(tf.keras.layers.Flatten(input_shape=(64 * 64,)))
linear_model.add(tf.keras.layers.Dense(len(labels), activation='softmax'))
# "adam, categorical_crossentropy, accuracy" and other string constants can be
# found at https://keras.io.
linear_model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy', tf.keras.metrics.categorical_accuracy])
linear_model.summary()
# How much of a speedup do you see because the data is already in memory?
# How would this compare to the convolutional model?
linear_model.fit(df_x, df_y, epochs=1, batch_size=100)
"""
Explanation: Data from DataFrame
For comparison, this section shows how you would load data from a pandas.DataFrame and then use Keras for training. Note that this approach does not scale well and can only be used for quite small datasets.
End of explanation
"""
%tensorflow_version 2.x
import json, os
import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf
# Disable duplicate logging output in TF.
logger = tf.get_logger()
logger.propagate = False
# This will fail if no TPU is connected...
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
# Set up distribution strategy.
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu);
strategy = tf.distribute.experimental.TPUStrategy(tpu)
# Tested with TensorFlow 2.1.0
print('\n\nTF version={} TPUs={} accelerators={}'.format(
tf.__version__, tpu.cluster_spec().as_dict()['worker'],
strategy.num_replicas_in_sync))
"""
Explanation: TPU Support
For using TF with a TPU we'll need to make some adjustments. Generally, please note that several TF TPU features are experimental and might not work as smooth as it does with a CPU or GPU.
Attention: Please make sure to switch the runtime to TPU for this part. You can do so via: "Runtime > Change runtime type > Hardware Accelerator" in Colab. As this might create a new environment this section can be executed isolated from anything above.
End of explanation
"""
from google.colab import auth
auth.authenticate_user()
# Browse datasets:
# https://console.cloud.google.com/storage/browser/amld-datasets
# - 50k training examples, including pickled DataFrame.
data_path = 'gs://amld-datasets/zoo_img_small'
# - 1M training examples, without pickled DataFrame.
# data_path = 'gs://amld-datasets/zoo_img'
# - 4.1M training examples, without pickled DataFrame.
# data_path = 'gs://amld-datasets/animals_img'
# - 29M training examples, without pickled DataFrame.
# data_path = 'gs://amld-datasets/all_img'
#@markdown **Copied and adjusted data definition code from above**
#@markdown
#@markdown Note: You can double-click this cell to see its code.
#@markdown
#@markdown The changes have been highlighted with `!` in the contained code
#@markdown (things like the `batch_size` and added `drop_remainder=True`).
#@markdown
#@markdown Feel free to just **click "execute"** and ignore the details for now.
labels = [label.strip() for label
in tf.io.gfile.GFile('{}/labels.txt'.format(data_path))]
print('All labels in the dataset:', ' '.join(labels))
counts = json.load(tf.io.gfile.GFile('{}/counts.json'.format(data_path)))
print('Splits sizes:', counts)
# This dictionary specifies what "features" we want to extract from the
# tf.train.Example protos (i.e. what they look like on disk). We only
# need the image data "img_64" and the "label". Both features are tensors
# with a fixed length.
# You need to specify the correct "shape" and "dtype" parameters for
# these features.
feature_spec = {
# Single label per example => shape=[1] (we could also use shape=() and
# then do a transformation in the input_fn).
'label': tf.io.FixedLenFeature(shape=[1], dtype=tf.int64),
# The bytes_list data is parsed into tf.string.
'img_64': tf.io.FixedLenFeature(shape=[64, 64], dtype=tf.int64),
}
def parse_example(serialized_example):
# Convert string to tf.train.Example and then extract features/label.
features = tf.io.parse_single_example(serialized_example, feature_spec)
# Important step: remove "label" from features!
# Otherwise our classifier would simply learn to predict
# label=features['label'].
label = features['label']
label = tf.one_hot(tf.squeeze(label), len(labels))
features['img_64'] = tf.cast(features['img_64'], tf.float32)
return features['img_64'], label
# Adjust the batch size to the given hardware (#accelerators).
batch_size = 64 * strategy.num_replicas_in_sync
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
steps_per_epoch = counts['train'] // batch_size
eval_steps_per_epoch = counts['eval'] // batch_size
# Create datasets from TFRecord files.
train_ds = tf.data.TFRecordDataset(tf.io.gfile.glob(
'{}/train-*'.format(data_path)))
train_ds = train_ds.map(parse_example)
train_ds = train_ds.batch(batch_size, drop_remainder=True).repeat()
# !!!!!!!!!!!!!!!!!!!
eval_ds = tf.data.TFRecordDataset(tf.io.gfile.glob(
'{}/eval-*'.format(data_path)))
eval_ds = eval_ds.map(parse_example)
eval_ds = eval_ds.batch(batch_size, drop_remainder=True)
# !!!!!!!!!!!!!!!!!!!
# Read a single example and display shapes.
for img_feature, label in train_ds:
break
print('img_feature.shape (batch_size, image_height, image_width) =',
img_feature.shape)
print('label.shape (batch_size, number_of_labels) =', label.shape)
# Model definition code needs to be wrapped in scope.
with strategy.scope():
linear_model = tf.keras.Sequential()
linear_model.add(tf.keras.layers.Flatten(input_shape=(64, 64,)))
linear_model.add(tf.keras.layers.Dense(len(labels), activation='softmax'))
linear_model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy', tf.keras.metrics.categorical_accuracy])
linear_model.summary()
linear_model.fit(train_ds,
validation_data=eval_ds,
steps_per_epoch=steps_per_epoch,
validation_steps=eval_steps_per_epoch,
epochs=1,
verbose=True)
# Model definition code needs to be wrapped in scope.
with strategy.scope():
conv_model = tf.keras.Sequential([
tf.keras.layers.Reshape(target_shape=(64, 64, 1), input_shape=(64, 64)),
tf.keras.layers.Conv2D(filters=32,
kernel_size=(10, 10),
padding='same',
activation='relu'),
tf.keras.layers.ZeroPadding2D((1,1)),
tf.keras.layers.Conv2D(filters=32,
kernel_size=(10, 10),
padding='same',
activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(4, 4), strides=(4,4)),
tf.keras.layers.Conv2D(filters=64,
kernel_size=(5, 5),
padding='same',
activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(4, 4), strides=(4,4)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(256, activation='relu'),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Dense(len(labels), activation='softmax'),
])
conv_model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
conv_model.summary()
conv_model.fit(train_ds,
validation_data=eval_ds,
steps_per_epoch=steps_per_epoch,
validation_steps=eval_steps_per_epoch,
epochs=3,
verbose=True)
conv_model.evaluate(eval_ds, steps=eval_steps_per_epoch)
!nvidia-smi
"""
Explanation: Attention: TPUs require all files (input and models) to be stored in cloud storage buckets (gs://bucket-name/...). If you plan to use TPUs please choose the data_path below accordingly. Otherwise, you might run into File system scheme '[local]' not implemented errors.
End of explanation
"""
|
fadeetch/Mastering-ML-Python | Chapters/Two/Simple Linear Regression.ipynb | mit | import matplotlib.pyplot as plt
%matplotlib inline
X = [[6], [8], [10], [14], [18]]
Y = [[7], [9], [13], [17.5], [18]]
plt.figure()
plt.title("Pizza price plotted against diameter")
plt.xlabel("Diameter in inches")
plt.ylabel("Price in dollars")
plt.plot(X,Y,"k.")
plt.axis([0,25,0,25])
plt.grid(True)
plt.show()
#Make simple linear regression
from sklearn.linear_model import LinearRegression
#Create and fit the model
model = LinearRegression()
model.fit(X,Y)
#Make prediction (scikit-learn expects a 2D array of samples)
print('A 12" pizza should cost: %0.2f' % model.predict([[12]])[0][0])
"""
Explanation: Simple linear regression
Predict price of pizza based on size
The sklearn.linear_model.LinearRegression class is an estimator. Estimators predict a value based on the observed data. In scikit-learn, all estimators implement the fit() and predict() methods. The former method is used to learn the parameters of a model, and the latter method is used to predict the value of a response variable
for an explanatory variable using the learned parameters. It is easy to experiment
with different models using scikit-learn because all estimators implement the fit and predict methods.
End of explanation
"""
import numpy as np
print("Residual sum of squares: %.2f" %np.mean((model.predict(X)-Y)**2))
"""
Explanation: Evaluate fitness of model using a cost (loss) function
The cost function for simple linear regression is the residual sum of squares (RSS): the sum of the squared differences between the observed and predicted values
End of explanation
"""
from __future__ import division
xbar = np.mean(X)
ybar = np.mean(Y)
print("Mean of X is:", xbar)
print("Mean of Y is:", ybar)
#Make own function for variance and covariance to better understand how it works
def variance(X):
return np.sum((X - np.mean(X))**2 / (len(X)-1))
def covariance(X,Y):
return np.sum((X - np.mean(X)) * (Y - np.mean(Y)) / (len(X)-1))
print("Variance of X: ", variance(X))
print("Covariance of X, Y is: ", covariance(X,Y))
#For simple linear regression, beta is cov/var.
#Following calculation of beta, I can also get alpha a = y - bx
beta = covariance(X,Y) / variance(X)
beta
"""
Explanation: Solving OLS for simple regression
Goal is to calculate vector of coefficients beta that minimizes cost function.
End of explanation
"""
#Load another set
X_test = np.array([8,9,11,16,12])
Y_test = np.array([11,8.5,15,18,11])
#Use the model fitted on the training data above; refitting on the test set would defeat the evaluation
model.predict(X_test.reshape(-1,1))
def total_sum_squares(Y):
    return np.sum((Y - np.mean(Y))**2)
#Residual sum of squares
def residual_sum_squares(Y):
    return np.sum((Y - model.predict(X_test.reshape(-1,1)))**2)
#Get R square
1 - residual_sum_squares(Y_test)/total_sum_squares(Y_test)
#From sklearn
model.score(X_test.reshape(-1,1),Y_test)
"""
Explanation: Evaluate fit using r square
End of explanation
"""
from numpy.linalg import inv
from numpy import dot, transpose
X = [[1, 6, 2], [1, 8, 1], [1, 10, 0], [1, 14, 2], [1, 18, 0]]
X
y = [[7], [9], [13], [17.5], [18]]
#Solve using linear algebra
dot(inv(dot(transpose(X),X)), dot(transpose(X),y))
#Solve using numpy least squares procedure
from numpy.linalg import lstsq
lstsq(X,y)[0]
#Compare simple vs multinomial
X = [[6, 2], [8, 1], [10, 0], [14, 2], [18, 0]]
Y = [[7], [9], [13], [17.5], [18]]
model = LinearRegression()
model.fit(X,Y)
X_test = [[8, 2], [9, 0], [11, 2], [16, 2], [12, 0]]
Y_test = [[11], [8.5], [15], [18], [11]]
predictions = model.predict(X_test)
for i, prediction in enumerate(predictions):
print("Predicted: %s, Target: %s" %(prediction, Y_test[i]))
print("R square:", model.score(X_test, Y_test))
"""
Explanation: Multiple linear regression
Add topings to our model of pizza price prediction
End of explanation
"""
from sklearn.preprocessing import PolynomialFeatures
X_train = [[6], [8], [10], [14], [18]]
y_train = [[7], [9], [13], [17.5], [18]]
X_test = [[6], [8], [11], [16]]
y_test = [[8], [12], [15], [18]]
regressor = LinearRegression()
regressor.fit(X_train, y_train)
xx = np.linspace(0, 26, 100)
xx
yy = regressor.predict(xx.reshape(xx.shape[0],1))
plt.plot(xx,yy)
quadratic_featurizer = PolynomialFeatures(degree=2)
X_train_quadratic = quadratic_featurizer.fit_transform(X_train)
X_train_quadratic
X_test_quadratic = quadratic_featurizer.transform(X_test)
regressor_quadratic = LinearRegression()
regressor_quadratic.fit(X_train_quadratic,y_train)
xx_quadratic = quadratic_featurizer.transform(xx.reshape(xx.shape[0],1))
plt.plot(xx, regressor_quadratic.predict(xx_quadratic), c='r',linestyle = '--')
plt.title("Pizza price regressed on diameter")
plt.xlabel("Diameter in inches")
plt.ylabel("Price in dollars")
plt.axis([0,25,0,25])
plt.grid(True)
plt.scatter(X_train, y_train)
plt.show()
"""
Explanation: Polynomial regression
Use PolynomialFeatures to transform the data
End of explanation
"""
import pandas as pd
target_url = ("http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv")
df = pd.read_csv(target_url,header=0, sep=";")
df.head()
df.describe()
plt.scatter(df['alcohol'], df['quality'])
plt.xlabel("Alcohol")
plt.ylabel("Quality")
plt.title("Alcohol against Quality")
plt.show()
"""
Explanation: Apply linear regression on Wine dataset from UCI
End of explanation
"""
from sklearn.model_selection import train_test_split  # formerly sklearn.cross_validation, removed in scikit-learn 0.20
#Split into feature and target, train and test
X = df[list(df.columns)[:-1]]
y = df['quality']
X.head()
y.tail()
X_train, X_test, y_train, y_test = train_test_split(X, y)
regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_predictions = regressor.predict(X_test)
#Check R squared
print("R squared is: ", regressor.score(X_test, y_test))
"""
Explanation: Fit and evaluate the model
End of explanation
"""
#Make cross validation
from sklearn.model_selection import cross_val_score  # formerly sklearn.cross_validation, removed in scikit-learn 0.20
scores = cross_val_score(regressor, X, y, cv = 5)
print(scores.mean(), scores)
"""
Explanation: Cross validation
End of explanation
"""
from sklearn.datasets import load_boston
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
data = load_boston()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target)
X_scaler = StandardScaler()
y_scaler = StandardScaler()
X_train = X_scaler.fit_transform(X_train)
y_train = y_scaler.fit_transform(y_train.reshape(-1, 1)).ravel()  # StandardScaler needs a 2D array
X_test = X_scaler.transform(X_test)
y_test = y_scaler.transform(y_test.reshape(-1, 1)).ravel()
regressor = SGDRegressor(loss='squared_loss')
scores = cross_val_score(regressor, X_train, y_train, cv=5)
print('Cross validation r-squared scores:', scores)
print('Average cross validation r-squared score:', np.mean(scores))
regressor.fit(X_train, y_train)  # SGDRegressor has fit(), not fit_transform()
print('Test set r-squared score', regressor.score(X_test, y_test))
"""
Explanation: Fitting using gradient descent
End of explanation
"""
|
jacobdein/alpine-soundscapes | utilities/Pull data from OpenStreetMap.ipynb | mit | bounding_box_file = ""
result_shapefile_filepath = ""
"""
Explanation: Pull highway data from OpenStreetMap as shapefile
This notebook pulls highway line data from the OpenStreetMap database and creates a shapefile containing the query results.
Required packages
<a href="https://github.com/DinoTools/python-overpy">overpy</a> <br />
<a href="https://github.com/Toblerity/Fiona">Fiona</a>
Variable settings
bounding_box_file — path to a shapefile that defines the desired bounding box used to query the OpenStreetMap database <br />
result_shapefile_filepath — path for the exported shapefile containing the query results
End of explanation
"""
import overpy
import fiona
"""
Explanation: Import statements
End of explanation
"""
def print_results(results):
    for way in results.ways:
print("Name: %s" % way.tags.get("name", "n/a"))
print(" Highway: %s" % way.tags.get("highway", "n/a"))
print(" Nodes:")
for node in way.nodes:
print(" Lat: %f, Lon: %f" % (node.lat, node.lon))
"""
Explanation: Utility functions
function to see what results were returned from the Overpass API query
End of explanation
"""
api = overpy.Overpass()
"""
Explanation: Query OpenStreetMap using OverpassAPI via overpy python package
setup Overpass api
End of explanation
"""
with fiona.open(bounding_box_file, mode='r') as bounding_box:
bounds = bounding_box.bounds
print(bounds)
"""
Explanation: define bounding box from a 1km-buffered envelope around the study area boundary
End of explanation
"""
query = """way({bottom},{left},{top},{right}) ["highway"]; (._;>;); out body;""".format(bottom=bounds[1],
left=bounds[0],
top=bounds[3],
right=bounds[2])
"""
Explanation: define query
End of explanation
"""
result = api.query(query)
"""
Explanation: execute query
End of explanation
"""
from fiona.crs import from_epsg
schema = {'geometry': 'LineString', 'properties': {'Name':'str:80', 'Type':'str:80'}}
with fiona.open(result_shapefile_filepath, 'w', crs=from_epsg(4326), driver='ESRI Shapefile', schema=schema) as output:
for way in result.ways:
        # shapefile geometries use (lon, lat) coordinate order
line = {'type': 'LineString', 'coordinates':[(node.lon, node.lat) for node in way.nodes]}
prop = {'Name': way.tags.get("name", "n/a"), 'Type': way.tags.get("highway", "n/a")}
output.write({'geometry': line, 'properties':prop})
"""
Explanation: Write OpenStreetMap data to a shapefile
End of explanation
"""
|
NORCatUofC/rain | n-year/notebooks/Examining the 100-year event.ipynb | mit | from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
from datetime import datetime, timedelta
import pandas as pd
import matplotlib.pyplot as plt
import operator
import seaborn as sns
%matplotlib inline
n_year_storms = pd.read_csv('data/n_year_storms_ohare_noaa.csv')
n_year_storms['start_time'] = pd.to_datetime(n_year_storms['start_time'])
n_year_storms['end_time'] = pd.to_datetime(n_year_storms['end_time'])
n_year_storms.head()
year_event_100 = n_year_storms[n_year_storms['n'] == 100]
year_event_100
rain_df = pd.read_csv('data/ohare_hourly_20160929.csv')
rain_df['datetime'] = pd.to_datetime(rain_df['datetime'])
rain_df = rain_df.set_index(pd.DatetimeIndex(rain_df['datetime']))
rain_df = rain_df['19700101':]
chi_rain_series = rain_df['HOURLYPrecip'].resample('1H', label='right').max().fillna(0)
chi_rain_series.head()
# N-Year Storm variables
# These define the thresholds laid out by bulletin 70, and transfer mins and days to hours
n_year_threshes = pd.read_csv('../../n-year/notebooks/data/n_year_definitions.csv')
n_year_threshes = n_year_threshes.set_index('Duration')
dur_str_to_hours = {
'5-min':5/60.0,
'10-min':10/60.0,
'15-min':15/60.0,
'30-min':0.5,
'1-hr':1.0,
'2-hr':2.0,
'3-hr':3.0,
'6-hr':6.0,
'12-hr':12.0,
'18-hr':18.0,
'24-hr':24.0,
'48-hr':48.0,
'72-hr':72.0,
'5-day':5*24.0,
'10-day':10*24.0
}
n_s = [int(x.replace('-year','')) for x in reversed(list(n_year_threshes.columns.values))]
duration_strs = sorted(dur_str_to_hours.items(), key=operator.itemgetter(1), reverse=False)
n_year_threshes
# Find n-year storms and store them in a data frame.
def find_n_year_storms(start_time_str, end_time_str, n):
start_time = pd.to_datetime(start_time_str)
end_time = pd.to_datetime(end_time_str)
n_index = n_s.index(n)
next_n = n_s[n_index-1] if n_index != 0 else None
storms = []
for duration_tuple in duration_strs:
duration_str = duration_tuple[0]
low_thresh = n_year_threshes.loc[duration_str, str(n) + '-year']
high_thresh = n_year_threshes.loc[duration_str, str(next_n) + '-year'] if next_n is not None else None
duration = int(dur_str_to_hours[duration_str])
sub_series = chi_rain_series[start_time: end_time]
rolling = sub_series.rolling(window=int(duration), min_periods=0).sum()
if high_thresh is not None:
event_endtimes = rolling[(rolling >= low_thresh) & (rolling < high_thresh)].sort_values(ascending=False)
else:
event_endtimes = rolling[(rolling >= low_thresh)].sort_values(ascending=False)
        for index, event_endtime in event_endtimes.items():  # Series.iteritems() was removed in pandas 2.0
this_start_time = index - timedelta(hours=duration)
if this_start_time < start_time:
continue
storms.append({'n': n, 'end_time': index, 'inches': event_endtime, 'duration_hrs': duration,
'start_time': this_start_time})
return pd.DataFrame(storms)
"""
Explanation: Examining the 100-year storm
From past analysis, we've seen that there have been 3 100-year storms in the last 46 years. This notebook takes a look at these 3 storms.
End of explanation
"""
storm1 = chi_rain_series['1987-08-11 23:00:00':'1987-08-21 23:00:00']
storm1.cumsum().plot(title="Cumulative rainfall over 1987 100-year storm")
# The rainfall starts at...
storm1[storm1 > 0].index[0]
storm1 = storm1['1987-08-13 22:00:00':]
storm1.head()
# There are two periods of drastic rise in rain. Print out the percent of the storm that has fallen hourly to see that the
# first burst ends at 8/14 10AM
storm1.cumsum()/storm1.sum()
# Looking for an n-year storm in the small period of drastic increase #1
find_n_year_storms('1987-08-13 22:00:00', '1987-08-14 10:00:00', 100)
# Let's look for the second jump in precip
storm1['1987-08-16 12:00:00':].cumsum()/storm1.sum()
# Looking for an n-year storm in the small period of drastic increase #2
find_n_year_storms('1987-08-16 20:00:00', '1987-08-17 00:00:00', 10)
"""
Explanation: Storm 1 -> August, 1987
End of explanation
"""
storm2 = chi_rain_series['2008-09-04 13:00:00':'2008-09-14 13:00:00']
storm2.cumsum().plot(title="Cumulative rainfall over 2008 100-year storm")
"""
Explanation: Looking at Storm 1 we see that within the 100-year storm, the real bulk of the rainfall falls overnight Aug 13-14 from 10PM until 10AM. This in itself was a 100-year storm. A few days later, we had an additional equivalent of a 10-year storm -- all this within the same 10 day period
Storm 2 -> September 2008
End of explanation
"""
total_rainfall = storm2.sum()
total_rainfall
storm2.cumsum()/total_rainfall
# First downpour is a 1-year storm
find_n_year_storms('2008-09-04 13:00:00', '2008-09-04 21:00:00', 1)
storm2['2008-09-08 00:00:00':'2008-09-09 00:00:00'].cumsum()/total_rainfall
find_n_year_storms('2008-09-08 10:00:00', '2008-09-08 20:00:00', 1)
chi_rain_series['2008-09-08 10:00:00':'2008-09-08 20:00:00'].sum()
# No n-year events for second downpour
# Downpour 3
storm2['2008-09-12 12:00:00':'2008-09-13 15:00:00'].cumsum()/total_rainfall
find_n_year_storms('2008-09-12 12:00:00','2008-09-13 15:00:00',50)
"""
Explanation: This event has 3 big downpours. Let's split this up
End of explanation
"""
storm3 = chi_rain_series['2011-07-22 08:00:00':'2011-07-23 08:00:00']
storm3.cumsum().plot(title="Cumulative rainfall over 2011 100-year storm")
storm3['2011-07-22 22:00:00':'2011-07-23 05:00:00'].cumsum()/storm3.sum()
find_n_year_storms('2011-07-22 22:00:00', '2011-07-23 05:00:00', 100)
chi_rain_series['2011-07-22 08:00:00':'2011-07-23 08:00:00'].cumsum().plot(title="Cumulative rainfall over 2011 100-year storm")
"""
Explanation: Storm 3 - July 2011
End of explanation
"""
chi_rain_series['2010-07-23 16:00:00':'2010-07-24 16:00:00'].cumsum().plot(title="Cumulative rainfall over 2010 50-year storm")
# The following code is copied verbatim from @pjsier Rolling Rain N-Year Threshold.pynb
# Loading in hourly rain data from CSV, parsing the timestamp, and adding it as an index so it's more useful
rain_df = pd.read_csv('data/ohare_hourly_observations.csv')
rain_df['datetime'] = pd.to_datetime(rain_df['datetime'])
rain_df = rain_df.set_index(pd.DatetimeIndex(rain_df['datetime']))
print(rain_df.dtypes)
rain_df.head()
chi_rain_series = rain_df['hourly_precip'].resample('1H').max()
# This is where I break with @pjsier
# I am assuming here that a single hour cannot be part of more than one storm in the event_endtimes list.
# Therefore, I am looping through the list and throwing out any storms that include hours from heavier storms in the
# same block of time.
def get_storms_without_overlap(event_endtimes, hours):
times_taken = []
ret_val = []
for i in range(len(event_endtimes)):
timestamp = event_endtimes.iloc[i].name
times_here = []
for h in range(hours):
times_here.append(timestamp - pd.DateOffset(hours=h))
if not bool(set(times_here) & set(times_taken)):
times_taken.extend(times_here)
ret_val.append({'start': timestamp - pd.DateOffset(hours=hours), 'end': timestamp, 'inches': event_endtimes.iloc[i]['hourly_precip']})
return ret_val
# Find the 100 year event. First, define the storm as based in Illinois Bulletin 70 as the number of inches
# of precipition that falls over a given span of straight hours.
_100_year_storm_milestones = [{'hours': 240, 'inches': 11.14}, {'hours':120, 'inches': 9.96},
{'hours': 72, 'inches': 8.78}, {'hours': 48, 'inches': 8.16}, {'hours': 24, 'inches': 7.58},
{'hours': 18, 'inches': 6.97}, {'hours': 12, 'inches': 6.59}, {'hours': 6, 'inches': 5.68},
{'hours': 3, 'inches': 4.9}, {'hours': 2, 'inches': 4.47}, {'hours': 1, 'inches': 3.51}]
all_storms = []
print("\tSTART\t\t\tEND\t\t\tINCHES")
for storm_hours in _100_year_storm_milestones:
rolling = pd.DataFrame(chi_rain_series.rolling(window=storm_hours['hours']).sum())
event_endtimes = rolling[(rolling['hourly_precip'] >= storm_hours['inches'])]
event_endtimes = event_endtimes.sort_values(by='hourly_precip', ascending=False)
storms = get_storms_without_overlap(event_endtimes, storm_hours['hours'])
if len(storms) > 0:
print("Across %s hours" % storm_hours['hours'])
for storm in storms:
print('\t%s\t%s\t%s inches' % (storm['start'], storm['end'], storm['inches']))
all_storms.extend(storms)
# Analysis Questions
# 1/25/2015 - 2/4/2015 - Worst storm by far in quantity, but Jan-Feb -- is it snow?
# 9/4/2008 - 9/14/2008 - This only appeared on the 10-day event, so it must've been well distributed across the days?
# 7/21/2011 - 7/23/2011 - Very heavy summer storm!
# Examining the storm from 7/21-2011 - 7/23/2011
import datetime
july_2011_storm = chi_rain_series.loc[(chi_rain_series.index >= datetime.datetime(2011,7,20)) & (chi_rain_series.index <= datetime.datetime(2011,7,24))]
july_2011_storm.head()
july_2011_storm.plot()
# Let's take a look at the cumulative buildup of the storm over time
cumulative_rainj11 = pd.DataFrame(july_2011_storm).hourly_precip.cumsum()
cumulative_rainj11.head()
cumulative_rainj11.plot()
cumulative_rainj11.loc[(cumulative_rainj11.index >= datetime.datetime(2011,7,22,21,0,0)) & (cumulative_rainj11.index <= datetime.datetime(2011,7,23,5,0,0))]
# We got a crazy, crazy downpour from about 11:00PM until 2:00AM. That alone was a 100-year storm, where we got 6.79 inches
# in 3 hours. That would've been a 100-year storm if we'd have gotten that in 12 hours!
"""
Explanation: Interestingly, all of the 100-year storms are marked with a drastic period of a few hours which really makes it the big one.
Let's examine the 50-years to look for the same trend
End of explanation
"""
|
NORCatUofC/rain | flooding/Basement vs Street Flooding.ipynb | mit | wib_comm_df = pd.read_csv('311_data/wib_calls_311_comm.csv')
wos_comm_df = pd.read_csv('311_data/wos_calls_311_comm.csv')
wib_comm_df.head()
wib_comm_stack = wib_comm_df[wib_comm_df.columns.values[1:]].stack().reset_index()
wos_comm_stack = wos_comm_df[wos_comm_df.columns.values[1:]].stack().reset_index()
wib_comm_stack.head()
wib_comm_grp = pd.DataFrame(wib_comm_stack.groupby(['level_1'])[0].sum()).reset_index()
wib_comm_grp = wib_comm_grp.rename(columns={'level_1':'Community Area', 0: 'Basement Calls'})
wos_comm_grp = pd.DataFrame(wos_comm_stack.groupby(['level_1'])[0].sum()).reset_index()
wos_comm_grp = wos_comm_grp.rename(columns={'level_1':'Community Area', 0: 'Street Calls'})
comm_grp_merge = wib_comm_grp.merge(wos_comm_grp, on='Community Area')
comm_grp_merge.head()
## Making basement to street call ratio column
comm_grp_merge['Basement-Street Ratio'] = comm_grp_merge['Basement Calls'] / comm_grp_merge['Street Calls']
comm_grp_merge.head()
comm_more_basement = comm_grp_merge.sort_values(by='Basement-Street Ratio', ascending=False)[:15]
comm_more_street = comm_grp_merge.sort_values(by='Basement-Street Ratio')[:15]
comm_more_basement.head()
"""
Explanation: Rates of Basement Flooding Calls vs. Street Flooding
Seems like basement and street flooding calls have different patterns across the city. Street flooding calls seem to be more distributed throughout the city, and occur more regularly than basement calls. Looking at whether or not specific areas have much higher rates than others of basement flooding calls vs. street flooding calls.
End of explanation
"""
fig, axs = plt.subplots(1,2)
plt.rcParams["figure.figsize"] = [15, 5]
comm_more_basement.plot(title='More Basement Flooding Areas', ax=axs[0], kind='bar',x='Community Area',y='Basement-Street Ratio')
comm_more_street.plot(title='More Street Flooding Areas', ax=axs[1], kind='bar',x='Community Area',y='Basement-Street Ratio')
"""
Explanation: Community Areas by Basement to Street Flood Call Ratio
From looking at the community areas with much more basement flooding calls than street flooding calls and vice versa, it seems like lower-income neighborhoods on the south side generally have more basement flooding calls, while higher-income neighborhoods on the north side have more for street flooding.
Anecdotally, this makes sense from the area names, and overall it suggests that street flooding is more of a typical 311 call that's subject to change over neighborhoods. Basement flooding matches up with FEMA data though, so it seems like street flooding should be ignored for most analyses in favor of basement flooding to look at the actual distribution of flooding events across the city.
Note for WBEZ Data
This also means that the WBEZ data is more off than previously expected because it combined both basement and street flooding. Only looking at basement flooding actualy changes the top zip codes substantially.
End of explanation
"""
|
enchantner/python-zero | lesson_6/Slides.ipynb | mit | import yaml
import random
with open("answers.yaml", "r") as conf:
    config = yaml.safe_load(conf)  # yaml.load() without an explicit Loader is deprecated and unsafe
def get_answer(message):
lower_msg = message.lower()
for key in config['answers']:
if key in lower_msg:
return random.choice(config['answers'][key])
"""
Explanation: Questions from the previous lesson
Which magic method of object A makes the operation A * B possible?
How do you define class-level fields and methods versus instance-level fields and methods? What is the difference between these concepts?
What is super() for?
What is @property?
Which Python module lets you create context managers without using classes?
What is the difference between CPU-bound tasks and I/O-bound tasks?
What are the main drawbacks of using processes instead of threads?
Briefly returning to the bot
End of explanation
"""
import random
import threading
import time
class SleepThread(threading.Thread):
def __init__(self, num):
super().__init__()
self.num = num
def run(self):
time.sleep(self.num)
print(self.num)
a = [random.randint(0, 10) for _ in range(10)]
threads = [SleepThread(i) for i in a]
for t in threads:
t.start()
def sleep_print(num):
time.sleep(num)
print(num)
a = [random.randint(0, 10) for _ in range(10)]
threads = [
threading.Thread(target=sleep_print, args=(i,))
for i in a
]
for t in threads:
t.start()
import concurrent.futures as cf
def hold_my_beer(num):
time.sleep(num)
return num
a = [random.randint(0, 10) for _ in range(10)]
with cf.ThreadPoolExecutor(max_workers=len(a)) as pool:
for future in cf.as_completed([
pool.submit(hold_my_beer, i) for i in a
]):
print(future.result())
"""
Explanation: Solving the sleepsort problem
End of explanation
"""
import asyncio
asyncio.Queue() # asynchronous queue
asyncio.sleep(10) # asynchronous "sleep"
asyncio.create_subprocess_exec() # asynchronous subprocess
asyncio.Lock() # asynchronous mutex
asyncio.ensure_future() # manually schedule a coroutine on the event loop
asyncio.gather() # wait for a list of coroutines to finish
"""
Explanation: Asynchrony and parallelism
Parallelism is executing two pieces of code at the same time.
Asynchrony is executing code NOT sequentially.
Asynchrony can be implemented with parallelism, or with manual context switching inside the code itself, preserving the last state. Sound familiar?
When pieces of code decide themselves when to hand control to one another, and do not depend on an external system scheduler, this is called "cooperative multitasking", and those pieces of code are called coroutines.
The drawback: a long-running procedure NOT under the event loop's control hangs EVERYTHING
Event-driven programming
The two main components of asynchronous code are the event loop and coroutines
While a coroutine is waiting for an external event, the context switches to another one
Besides switching context, coroutines can send messages to each other
Unfortunately, in the modern implementation of asynchrony in Python, regular and asynchronous functions are not interchangeable
Alternative implementations for older versions are Gevent, Eventlet and Tornado. And a few others.
Asyncio
End of explanation
"""
import asyncio
async def hello(name):
return "Hello, {}!".format(name)
hello("Vasya")
await hello("Vasya")
import asyncio
async def hello(name):
return "Hello, {}!".format(name)
async def call_vasya():
greeting = await hello("Vasya")
return greeting
loop = asyncio.get_event_loop()
print(loop.run_until_complete(call_vasya()))
"""
Explanation: The async and await keywords
End of explanation
"""
import asyncio
import random
async def hold(num):
await asyncio.sleep(num)
return num
a = [random.randint(0, 10) for _ in range(10)]
"""
Explanation: Exercise
Write an asynchronous implementation of sleepsort
End of explanation
"""
# hello/views.py
from django.http import HttpResponse
def index(request):
return HttpResponse("Hello!")
# hello/urls.py
from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^$', views.index, name='index'),
]
# urls.py
from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
url(r'^hello/', include('hello.urls')),
url(r'^admin/', admin.site.urls),
]
"""
Explanation: Django
http://www.djangoproject.com/
http://djbook.ru # somewhat outdated!
~$ pip install django
~$ django-admin startproject mysite
~$ python manage.py runserver
Core terms for a Python web developer
HTTP (https://ru.wikipedia.org/wiki/HTTP), a.k.a. HyperText Transfer Protocol
network port (http://bit.ly/1Mxp4Ks) and socket (http://bit.ly/1Oxntiq)
WSGI (https://ru.wikipedia.org/wiki/WSGI), a.k.a. Web Server Gateway Interface
MVC (https://ru.wikipedia.org/wiki/Model-View-Controller), a.k.a. Model-View-Controller
~$ python manage.py startapp hello
End of explanation
"""
# Add these lines to settings.py
STATIC_URL = '/static/'
STATICFILES_DIRS = [
os.path.join(BASE_DIR, "static")
]
# and add a similar line to TEMPLATES["DIRS"]:
os.path.join(BASE_DIR, "templates")
# and in urls.py do this:
from django.conf import settings
from django.conf.urls import include, url
from django.conf.urls.static import static
from django.contrib import admin
urlpatterns = [
url(r'^hello/', include('hello.urls')),
url(r'^admin/', admin.site.urls),
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
# hello/views.py
from django.http import HttpResponse
from django.shortcuts import render
def index(request):
return render(request, 'index.html', {})
"""
Explanation: And where is the HTML?
Create the templates and static folders
I took the template from http://html5up.net/photon , you can find and download any other one
Move the assets and images folders into the static folder
End of explanation
"""
# my_application.py
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello, World!"
"""
Explanation: Changing the paths in the template
And the last step - changing the template itself
At the top of index.html add this: {% load static %}
And change the static asset paths into template tags:
"assets/css/main.css" -> {% static 'assets/css/main.css' %}
On a production server this will be handled not by Django itself but by a static file server, for example Nginx. This is important!
I also recommend reading about uWSGI - https://uwsgi-docs.readthedocs.io/en/latest/
Flask
~$ pip install flask
End of explanation
"""
|
deculler/DataScienceTableDemos | HealthSample.ipynb | bsd-2-clause | health_map = Table(["raw label", "label", "encoding", "Description"]).with_rows(
[["hhidpn", "id", None, "identifier"],
["r8agey_m", "age", None, "age in years in wave 8"],
["ragender", "gender", ['male','female'], "1 = male, 2 = female)"],
["raracem", "race", ['white','black','other'], "(1 = white, 2 = black, 3 = other)"],
["rahispan", "hispanic", None, "(1 = yes)"],
["raedyrs", "education", None, "education in years"],
["h8cpl", "couple", None, "in a couple household (1 = yes)"],
["r8bpavgs", "blood pressure", None,"average systolic BP"],
["r8bpavgp", "pulse", None, "average pulse"],
["r8smoken", "smoker",None, "currently smokes cigarettes"],
["r8mdactx", "exercise", None, "frequency of moderate exercise (1=everyday, 2=>1perweek, 3=1perweek, 4=1-3permonth\
, 5=never)"],
["r8weightbio", "weight", None, "objective weight in kg"],
["r8heightbio","height", None, "objective height in m"]])
health_map
def table_lookup(table,key_col,key,map_col):
row = np.where(table[key_col]==key)
if len(row[0]) == 1:
return table[map_col][row[0]][0]
else:
return -1
def map_raw_table(raw_table,map_table):
mapped = Table()
for raw_label in raw_table :
if raw_label in map_table["raw label"] :
new_label = table_lookup(map_table,'raw label',raw_label,'label')
encoding = table_lookup(map_table,'raw label',raw_label,'encoding')
if encoding is None :
mapped[new_label] = raw_table[raw_label]
else:
mapped[new_label] = raw_table.apply(lambda x: encoding[x-1], raw_label)
return mapped
# create a more usable table by mapping the raw to finished
health = map_raw_table(hrec06,health_map)
health
"""
Explanation: Indirection
They say "all problems in computer science can be solved with an extra level of indirection."
It certainly provides some real leverage in data wrangling. Rather than write a bunch of spaghetti
code, we will build a table that defines the transformation we would like to perform on the
raw data in order to have something cleaner to work with. In this we can map the indecipherable identifiers
into something more understandable; we can establish formatters; we can translate field encodings into
clear mnemonics, and so on.
We need a tool for finding elements in the translation table; that's table_lookup. Then we can
build our mapping tool, map_raw_table.
End of explanation
"""
def firstQtile(x) : return np.percentile(x,25)
def thirdQtile(x) : return np.percentile(x,75)
summary_ops = (min, firstQtile, np.median, np.mean, thirdQtile, max, sum)
# Let's try what is the effect of smoking
smokers = health.where('smoker',1)
nosmokers = health.where('smoker',0)
print(smokers.num_rows, ' smokers')
print(nosmokers.num_rows, ' non-smokers')
smokers.stats(summary_ops)
nosmokers.stats(summary_ops)
help(smokers.hist)
"""
Explanation: Descriptive statistics - smoking
End of explanation
"""
smokers.hist('weight', bins=20)
nosmokers.hist('weight', bins=20)
np.mean(nosmokers['weight'])-np.mean(smokers['weight'])
"""
Explanation: What is the effect of smoking on weight?
End of explanation
"""
# Lets draw two samples of equal size
n_sample = 200
smoker_sample = smokers.sample(n_sample)
nosmoker_sample = nosmokers.sample(n_sample)
weight = Table().with_columns([('NoSmoke', nosmoker_sample['weight']),('Smoke', smoker_sample['weight'])])
weight.hist(overlay=True,bins=30,normed=True)
weight.stats(summary_ops)
"""
Explanation: Permutation tests
End of explanation
"""
combined = Table().with_column('all', np.append(nosmoker_sample['weight'],smoker_sample['weight']))
combined.num_rows
# permutation test, split the combined into two random groups, do the comparison of those
def getdiff():
A,B = combined.split(n_sample)
return (np.mean(A['all'])-np.mean(B['all']))
# Do the permutation many times and form the distribution of results
num_samples = 300
diff_samples = Table().with_column('diffs', [getdiff() for i in range(num_samples)])
diff_samples.hist(bins=np.arange(-5,5,0.5), normed=True)
"""
Explanation: Is the difference observed between these samples representative of the larger population?
End of explanation
"""
# A sense of the overall population represented - older
health.select(['age','education']).hist(bins=20)
# How does education correlate with age?
health.select(['age','education']).scatter('age', fit_line=True)
health.pivot_hist('race','education',normed=True)
# How are races represented in the dataset and how does hispanic overlay the three?
race = health.select(['race', 'hispanic'])
race['count']=1
by_race = race.group('race',sum)
by_race['race frac'] = by_race['count sum']/np.sum(by_race['count sum'])
by_race['hisp frac'] = by_race['hispanic sum'] / by_race['count sum']
by_race
health.select(['height','weight']).scatter('height','weight',fit_line=True)
"""
Explanation: The 4.5 kg difference is certainly not an artifact of the sample we started with. The smokers definitely weigh less. At the same time, these are not light people in this study. Better go back and understand what was the purpose of the study that led to the selection of these six thousand individuals.
Other Factors
End of explanation
"""
|
CalPolyPat/phys202-2015-work | assignments/assignment04/MatplotlibEx02.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Matplotlib Exercise 2
Imports
End of explanation
"""
!head -n 30 open_exoplanet_catalogue.txt
"""
Explanation: Exoplanet properties
Over the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets.
http://iopscience.iop.org/1402-4896/2008/T130/014001
Your job is to reproduce Figures 2 and 4 from this paper using an up-to-date dataset of extrasolar planets found on this GitHub repo:
https://github.com/OpenExoplanetCatalogue/open_exoplanet_catalogue
A text version of the dataset has already been put into this directory. The top of the file has documentation about each column of data:
End of explanation
"""
data = np.genfromtxt('open_exoplanet_catalogue.txt', delimiter = ',')
assert data.shape==(1993,24)
"""
Explanation: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data:
End of explanation
"""
f = plt.figure(figsize = (9,6))
hist, bin_edges = np.histogram(data[:,2], 20, (0, 20))
print(len(hist))
m_grid = np.linspace(1, 20, 200)
plt.plot(m_grid, 550*np.exp(m_grid**-1.30)-560, "k--")
plt.bar(np.linspace(0, 20, 20), hist, width = 1, color = "k", ec = "w")
plt.ylim(top = 600)
plt.ylabel("Number of Planets")
plt.xlabel("M sin i ($M_{JUP}$)")
plt.minorticks_on()
plt.tick_params(width = 2)
plt.text(20,550, "%d Planets" % len(data[:,2]))
plt.arrow(8, 200, -3, -100, width=0.1, head_width=1, head_length=20, fc='k', ec='k')
plt.text(9,250, r"$\frac{dN}{dM} \propto M^{-1.30}$", fontsize = 20)
plt.title("Mass Distribution for Exoplanets")
assert True # leave for grading
"""
Explanation: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
Pick the number of bins for the histogram appropriately.
End of explanation
"""
f = plt.figure(figsize = (15,10))
ax = plt.subplot(111)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.scatter(data[:,5], data[:,6], color = "k")
plt.ylim(0,1)
plt.xscale('log')
plt.xlim(.01,1000)
plt.ylabel("Eccentricity")
plt.xlabel("Semimajor Axis(AU)")
plt.minorticks_on()
plt.tick_params(width = 2)
plt.title("Orbital Eccentricities vs. Semimajor Axis(AU)")
plt.tick_params(width = 2, length = 10)
plt.text(350,.97, "%d Planets" % len(data[:,2]))
assert True # leave for grading
"""
Explanation: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation
"""
|
visualfabriq/bquery | bquery/benchmarks/taxi/Taxi Set.ipynb | bsd-3-clause | import os
import urllib
import glob
import pandas as pd
from bquery import ctable
import bquery
import bcolz
from multiprocessing import Pool, cpu_count
from collections import OrderedDict
import contextlib
import time
# do not forget to install numexpr
# os.environ["BLOSC_NOLOCK"] = "1"
bcolz.set_nthreads(1)
workdir = '/home/carst/Documents/taxi/'
elapsed_times = OrderedDict()
@contextlib.contextmanager
def ctime(message=None):
"Counts the time spent in some context"
assert message is not None
global elapsed_times
t_elapsed = 0.0
print('\n')
t = time.time()
yield
if message:
print (message + ": ")
t_elapsed = time.time() - t
print (round(t_elapsed, 4), "sec")
elapsed_times[message] = t_elapsed
def sub_query(input_args):
rootdir = input_args['rootdir']
group_cols = input_args['group_cols']
measure_cols = input_args['measure_cols']
ct = ctable(rootdir=rootdir, mode='a')
result = ct.groupby(group_cols, measure_cols)
result_df = result.todataframe()
return result_df.to_msgpack()
def execute_query(ct_list, group_cols, measure_cols):
p = Pool(cpu_count())
query_list = [{
'rootdir': rootdir,
'group_cols': group_cols,
'measure_cols': measure_cols} for rootdir in ct_list]
result_list = p.map(sub_query, query_list)
p.close()
result_list = [pd.read_msgpack(x) for x in result_list]
result_df = pd.concat(result_list, ignore_index=True)
result_df = result_df.groupby(group_cols)[measure_cols].sum()
return result_df
# create workfiles if not available
ct_list = glob.glob(workdir + 'taxi_*')
# import bquery.benchmarks.taxi.load as taxi_load
# taxi_load.download_data(workdir)
# taxi_load.create_bcolz(workdir)
# taxi_load.create_bcolz_chunks(workdir)
ct_list = glob.glob(workdir + 'taxi_*')
ct = ctable(rootdir=workdir + 'taxi', mode='a')
measure_list = ['extra',
'fare_amount',
'improvement_surcharge',
'mta_tax',
'nr_rides',
'passenger_count',
'tip_amount',
'tolls_amount',
'total_amount',
'trip_distance']
"""
Explanation: Bquery/Bcolz Taxi Set Performance
Based on the great work by Matthew Rocklin, see http://matthewrocklin.com/blog/work/2016/02/22/dask-distributed-part-2
NB: The auto-caching features will make the second (and subsequent) runs faster for multi-column groupings, which is reflected in the scores below.
End of explanation
"""
with ctime(message='CT payment_type nr_rides sum, single process'):
ct.groupby(['payment_type'], ['nr_rides'])
with ctime(message='CT yearmonth nr_rides sum, single process'):
ct.groupby(['pickup_yearmonth'], ['nr_rides'])
with ctime(message='CT yearmonth + payment_type nr_rides sum, single process'):
ct.groupby(['pickup_yearmonth', 'payment_type'], ['nr_rides'])
"""
Explanation: Single Process
End of explanation
"""
with ctime(message='CT payment_type nr_rides sum, ' + str(cpu_count()) + ' processors'):
execute_query(ct_list, ['payment_type'], ['nr_rides'])
with ctime(message='CT yearmonth nr_rides sum, ' + str(cpu_count()) + ' processors'):
execute_query(ct_list, ['pickup_yearmonth'], ['nr_rides'])
with ctime(message='CT yearmonth + payment_type nr_rides sum, ' + str(cpu_count()) + ' processors'):
execute_query(ct_list, ['pickup_yearmonth', 'payment_type'], ['nr_rides'])
"""
Explanation: Multi Process
End of explanation
"""
with ctime(message='CT payment_type all measure sum, single process'):
ct.groupby(['payment_type'], measure_list)
with ctime(message='CT yearmonth all measure sum, single process'):
ct.groupby(['pickup_yearmonth'], measure_list)
with ctime(message='CT yearmonth + payment_type all measure sum, single process'):
ct.groupby(['pickup_yearmonth', 'payment_type'], measure_list)
"""
Explanation: Single Process, All Measures
End of explanation
"""
with ctime(message='CT payment_type all measure sum, ' + str(cpu_count()) + ' processors'):
execute_query(ct_list, ['payment_type'], measure_list)
with ctime(message='CT yearmonth all measure sum, ' + str(cpu_count()) + ' processors'):
execute_query(ct_list, ['pickup_yearmonth'], measure_list)
with ctime(message='CT yearmonth + payment_type all measure sum, ' + str(cpu_count()) + ' processors'):
execute_query(ct_list, ['pickup_yearmonth', 'payment_type'], measure_list)
"""
Explanation: Multi Process, All Measures
End of explanation
"""
|
brookehus/msmbuilder | examples/Coarse-graining-with-MVCA.ipynb | lgpl-2.1 | from msmbuilder.example_datasets import QuadWell
from msmbuilder.msm import MarkovStateModel
from msmbuilder.lumping import MVCA
import numpy as np
import scipy.cluster.hierarchy
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Minimum Variance Cluster Analysis
We are going to use a minimum variance criterion with the
Jensen-Shannon divergence to coarse-grain the quad well
dataset.
End of explanation
"""
q = QuadWell(random_state=998).get()
ds = q['trajectories']
"""
Explanation: Get the dataset
End of explanation
"""
def regular_spatial_clustering(ds, n_bins=20, halfwidth=np.pi):
new_ds = []
for t in ds:
new_t = []
for i, f in enumerate(t):
width = 2*halfwidth
temp = f + halfwidth
reg = np.floor(n_bins*temp/width)
new_t.append(int(reg))
new_ds.append(np.array(new_t))
return new_ds
halfwidth = max(np.abs([max(np.abs(f)) for f in ds]))[0]
assignments = regular_spatial_clustering(ds, halfwidth=halfwidth)
msm_mdl = MarkovStateModel()
msm_mdl.fit(assignments)
"""
Explanation: Define a regular spatial clusterer
End of explanation
"""
def get_centers(n_bins=20, halfwidth=np.pi):
centers = []
tot = 2*halfwidth
interval = tot/n_bins
for i in range(n_bins):
c = (i+1)*interval - interval/2. - halfwidth
centers.append(c)
return(centers)
ccs = get_centers(halfwidth=halfwidth)
nrgs = [-0.6*np.log(p) for p in msm_mdl.populations_]
m,s,b = plt.stem(ccs, nrgs, 'deepskyblue', bottom=-1)
for i in s:
i.set_linewidth(8)
potential = lambda x: 4 * (x ** 8 + 0.8 * np.exp(-80 * x ** 2) + 0.2 * np.exp(
-80 * (x - 0.5) ** 2) +
0.5 * np.exp(-40 * (x + 0.5) ** 2))
exes = np.linspace(-np.pi,np.pi,1000)
whys = potential(exes)
plt.plot(exes, whys, linewidth=2, color='k')
plt.xlim([-halfwidth, halfwidth])
plt.ylim([0,4])
"""
Explanation: Plot our MSM energies
End of explanation
"""
mvca = MVCA.from_msm(msm_mdl, n_macrostates=None, get_linkage=True)
"""
Explanation: Make a model with out macrostating to get linkage information
End of explanation
"""
scipy.cluster.hierarchy.dendrogram(mvca.linkage,
color_threshold=1.1,
no_labels=True)
plt.show()
"""
Explanation: Use mvca.linkage to get a scipy linkage object
End of explanation
"""
for i in range(19):
    label = str(i+1)
    plt.scatter([i+1], mvca.elbow_data[i], color='k', marker=r'$%s$' % label,
s=60*(np.floor((i+1)/10)+1)) # so numbers are approximately the same size
plt.xlabel('Number of macrostates')
plt.xticks([])
plt.show()
"""
Explanation: Use mvca.elbow_data to get the objective function change with agglomeration
End of explanation
"""
color_list = ['deepskyblue', 'hotpink', 'turquoise', 'indigo', 'gold',
'olivedrab', 'orangered', 'whitesmoke']
def plot_macrostates(n_macrostates=4):
mvca_mdl = MVCA.from_msm(msm_mdl, n_macrostates=n_macrostates)
for i, _ in enumerate(mvca_mdl.microstate_mapping_):
m,s,b = plt.stem([ccs[i]], [nrgs[i]],
color_list[mvca_mdl.microstate_mapping_[i]],
markerfmt=' ', bottom=-1)
for i in s:
i.set_linewidth(5)
plt.plot(exes, whys, color='black', linewidth=1.5)
plt.ylim([0,4])
plt.xlim([-halfwidth,halfwidth])
plt.subplots(nrows=2, ncols=3, figsize=(12,6))
for i in range(6):
plt.subplot(2,3,i+1)
plot_macrostates(n_macrostates=i+2)
plt.title('%i macrostates'%(i+2))
plt.tight_layout()
"""
Explanation: Plot some macrostate models
End of explanation
"""
|
tuanavu/python-cookbook-3rd | notebooks/ch01/10_removing_duplicates_from_a_sequence_while_maintaining_order.ipynb | mit | def dedupe(items):
seen = set()
for item in items:
if item not in seen:
yield item
seen.add(item)
a = [1, 5, 2, 1, 9, 1, 5, 10]
list(dedupe(a))
"""
Explanation: Removing Duplicates from a Sequence while Maintaining Order
Problem
You want to eliminate the duplicate values in a sequence, but preserve the order of the remaining items.
Solution
If the values in the sequence are hashable, the problem can be easily solved using a set and a generator.
Dedup list
End of explanation
"""
def dedupe(items, key=None):
seen = set()
for item in items:
val = item if key is None else key(item)
if val not in seen:
yield item
seen.add(val)
a = [ {'x':1, 'y':2}, {'x':1, 'y':3}, {'x':1, 'y':2}, {'x':2, 'y':4}]
print(a)
print(list(dedupe(a, key=lambda a: (a['x'],a['y']))))
print(list(dedupe(a, key=lambda a: a['x'])))
"""
Explanation: Dedup dict with key
This only works if the items in the sequence are hashable. If you are trying to eliminate duplicates in a sequence of unhashable types (such as dicts), you can make a slight change to this recipe
End of explanation
"""
|
eds-uga/csci1360-fa16 | assignments/A7/A7_Q3.ipynb | mit | try:
count_datasets
except:
assert False
else:
assert True
c = count_datasets("submission_partial.json")
assert c == 4
c = count_datasets("submission_full.json")
assert c == 9
try:
c = count_datasets("submission_nonexistent.json")
except:
assert False
else:
assert c == -1
"""
Explanation: Q3
CodeNeuro is a project run out of HHMI Janelia Farms which looks at designing algorithms to automatically identify neurons in time-lapse microscope data. The competition is called "NeuroFinder":
http://neurofinder.codeneuro.org/
The goal of the project is to use data that look like this
and automatically segment out all the neurons in the image, like so
As you can probably imagine, storing this information is tricky, requiring a great deal of specifics. On the website, JSON format is used to submit predictions. The format is as follows:
The first layer is a list, where each item in the list is a dictionary. Each item (dictionary) corresponds to a single dataset.
One of the dictionaries will contain two keys: dataset, which gives the name of the dataset as the value (a string), and regions, which contains a list of all the regions found in that dataset.
A single item in the list of regions is a dictionary, with one key: coordinates.
The value for coordinates is, again, a list, where each element of the list is an (x, y) pair that specifies a pixel in the region.
That's a lot, for sure. Here's an example of a JSON structure representing two different datasets, where one dataset has only 1 region and the other dataset has 2 regions:
```
'[
{"dataset": "d1", "regions":
[
{"coordinates": [[1, 2], [3, 4], [4, 5]]}
]
},
{"dataset": "d2", "regions":
[
{"coordinates": [[2, 3], [4, 10]]},
{"coordinates": [[20, 20], [20, 21], [22, 23]]}
]
}
]'
```
You have two datasets, d1 and d2, represented as two elements in the outermost list. Those two dictionaries have two keys, dataset (the name of the dataset) and regions (the list of regions outlining neurons present in that dataset). The regions field is a list of dictionaries, and the length of the list is how many distinct regions/neurons there are in that dataset. For example, in d1 above, there is only 1 neuron/region, but in d2, there are 2 neurons/regions. Each region is just a list of (x, y) tuple integers that specify a pixel in the image dataset that is part of the region.
WHEW. That's a lot. We'll try to start things off slowly.
A
Write a function, count_datasets, which returns the number of datasets in the provided JSON file.
The function will accept one argument: json_file, which is the name of a JSON file on the hard disk that represents a submission file for CodeNeuro.
Your function should return an integer: the number of datasets present in the JSON input file.
This function should read the file off the hard disk, count the number of datasets in the file, and return that number. It should also be able to handle file exceptions gracefully; if an error is encountered, return -1 to represent this. Otherwise, the return value should always be 0 or greater.
You can use the json Python library; otherwise, no other imports are allowed.
End of explanation
"""
try:
get_dataset_by_index
except:
assert False
else:
assert True
import json
d = json.loads(open("partial_1.json", "r").read())
assert d == get_dataset_by_index("submission_partial.json", 1)
d = json.loads(open("full_8.json", "r").read())
assert d == get_dataset_by_index("submission_full.json", 8)
try:
c = get_dataset_by_index("submission_partial.json", 5)
except:
assert False
else:
assert c is None
try:
c = get_dataset_by_index("submission_nonexistent.json", 4983)
except:
assert False
else:
assert c is None
"""
Explanation: B
Write a function, get_dataset_by_index, which returns a certain dataset from the file.
This function should take two arguments: the name of the JSON file on the filesystem, and the integer index of the dataset to return from that JSON file.
This function should return the dictionary corresponding to the dataset in the JSON file, or None if an invalid index is supplied (e.g. specified 10 when there are only 4 datasets, or a negative number, or a float/string/list/non-integer type). It should also be able to handle file-related errors.
You can use the json Python library; otherwise, no other imports are allowed.
End of explanation
"""
try:
get_dataset_by_name
except:
assert False
else:
assert True
import json
d = json.loads(open("partial_1.json", "r").read())
assert d == get_dataset_by_name("submission_partial.json", "01.01.test")
d = json.loads(open("full_8.json", "r").read())
assert d == get_dataset_by_name("submission_full.json", "04.01.test")
try:
c = get_dataset_by_name("submission_partial.json", "nonexistent")
except:
assert False
else:
assert c is None
try:
c = get_dataset_by_name("submission_nonexistent.json", "02.00.test")
except:
assert False
else:
assert c is None
"""
Explanation: C
Write a function, get_dataset_by_name, which is functionally identical to get_dataset_by_index except rather than retrieving a dataset by the integer index, you instead return a dataset by its string name.
This function should take two arguments: the name of the JSON file on the filesystem, and the string name of the dataset to return from that JSON file.
This function should return the dictionary corresponding to the dataset in the JSON file, or None if an invalid name is supplied. It should also be able to handle file-related errors.
You can use the json Python library; otherwise, no other imports are allowed.
End of explanation
"""
try:
count_pixels_in_dataset
except:
assert False
else:
assert True
assert 29476 == count_pixels_in_dataset("submission_full.json", "01.01.test")
assert 30231 == count_pixels_in_dataset("submission_full.json", "04.01.test")
try:
c = count_pixels_in_dataset("submission_partial.json", "02.00.test")
except:
assert False
else:
assert c == -1
"""
Explanation: D
Write a function, count_pixels_in_dataset, which returns the number of pixels found in all regions of a particular dataset.
This function should take two arguments:
- the string name of the JSON file containing all the datasets on the filesystem
- the string name of the dataset to search
This function should return one integer: the number of pixels identified in regions in that dataset. This should be returned from the function. Each individual pixel is a single pair of (x, y) numbers (that counts as 1).
If any file-related errors are encountered, or an incorrect dataset name specified, the function should return -1.
You can use the json Python library, or other functions you've already written in this question; otherwise, no other imports are allowed.
End of explanation
"""
|
josdaza/deep-toolbox | TensorFlow/02_Linear_Regression.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
# Return 101 evenly spaced numbers over the interval [-1, 1]
x_train = np.linspace(-1, 1, 101)
# Generate pseudo-random targets by multiplying the x_train array by 2 and
# adding noise to each element (an array of the same shape filled with random numbers)
y_train = 2 * x_train + np.random.randn(*x_train.shape) * 0.33
# Peek at a noise array of the same shape (a fresh random draw, not the one added above)
print(np.random.randn(*x_train.shape))
plt.scatter(x_train, y_train)
plt.show()
"""
Explanation: Visualizing the input data
End of explanation
"""
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
learning_rate = 0.01
training_epochs = 100
x_train = np.linspace(-1,1,101)
y_train = 2 * x_train + np.random.randn(*x_train.shape) * 0.33
X = tf.placeholder("float")
Y = tf.placeholder("float")
def model(X,w):
return tf.multiply(X,w)
w = tf.Variable(0.0, name="weights")
y_model = model(X,w)
cost = tf.square(Y-y_model)
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
for epoch in range(training_epochs):
for (x,y) in zip(x_train, y_train):
sess.run(train_op, feed_dict={X:x, Y:y})
w_val = sess.run(w)
sess.close()
plt.scatter(x_train, y_train)
y_learned = x_train*w_val
plt.plot(x_train, y_learned, 'r')
plt.show()
"""
Explanation: Linear Regression Algorithm in TensorFlow
End of explanation
"""
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
learning_rate = 0.01
training_epochs = 40
trX = np.linspace(-1, 1, 101)
num_coeffs = 6
trY_coeffs = [1, 2, 3, 4, 5, 6]
trY = 0
# Build pseudo-random polynomial data to test the algorithm
for i in range(num_coeffs):
trY += trY_coeffs[i] * np.power(trX, i)
trY += np.random.randn(*trX.shape) * 1.5
plt.scatter(trX, trY)
plt.show()
# Build the TensorFlow graph
X = tf.placeholder("float")
Y = tf.placeholder("float")
def model(X, w):
terms = []
for i in range(num_coeffs):
term = tf.multiply(w[i], tf.pow(X, i))
terms.append(term)
return tf.add_n(terms)
w = tf.Variable([0.] * num_coeffs, name="parameters")
y_model = model(X, w)
cost = (tf.pow(Y-y_model, 2))
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Run the algorithm in TensorFlow
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
for epoch in range(training_epochs):
for (x, y) in zip(trX, trY):
sess.run(train_op, feed_dict={X: x, Y: y})
w_val = sess.run(w)
print(w_val)
sess.close()
# Display the fitted model
plt.scatter(trX, trY)
trY2 = 0
for i in range(num_coeffs):
trY2 += w_val[i] * np.power(trX, i)
plt.plot(trX, trY2, 'r')
plt.show()
"""
Explanation: Linear Regression on Polynomials of Degree N
End of explanation
"""
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
def split_dataset(x_dataset, y_dataset, ratio):
arr = np.arange(x_dataset.size)
np.random.shuffle(arr)
num_train = int(ratio* x_dataset.size)
x_train = x_dataset[arr[0:num_train]]
y_train = y_dataset[arr[0:num_train]]
x_test = x_dataset[arr[num_train:x_dataset.size]]
y_test = y_dataset[arr[num_train:x_dataset.size]]
return x_train, x_test, y_train, y_test
learning_rate = 0.001
training_epochs = 1000
reg_lambda = 0.
x_dataset = np.linspace(-1, 1, 100)
num_coeffs = 9
y_dataset_params = [0.] * num_coeffs
y_dataset_params[2] = 1
y_dataset = 0
for i in range(num_coeffs):
y_dataset += y_dataset_params[i] * np.power(x_dataset, i)
y_dataset += np.random.randn(*x_dataset.shape) * 0.3
(x_train, x_test, y_train, y_test) = split_dataset(x_dataset, y_dataset, 0.7)
X = tf.placeholder("float")
Y = tf.placeholder("float")
def model(X, w):
terms = []
for i in range(num_coeffs):
term = tf.multiply(w[i], tf.pow(X,i))
terms.append(term)
return tf.add_n(terms)
w = tf.Variable([0.] * num_coeffs, name="parameters")
y_model = model(X, w)
cost = tf.div(tf.add(tf.reduce_sum(tf.square(Y-y_model)),
tf.multiply(reg_lambda, tf.reduce_sum(tf.square(w)))),
2*x_train.size)
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
i,stop_iters = 0,15
for reg_lambda in np.linspace(0,1,100):
i += 1
for epoch in range(training_epochs):
sess.run(train_op, feed_dict={X: x_train, Y: y_train})
final_cost = sess.run(cost, feed_dict={X: x_test, Y:y_test})
print('reg lambda', reg_lambda)
print('final cost', final_cost)
if i > stop_iters: break
sess.close()
"""
Explanation: Regularization
To better control the impact that outliers have on our model (and thus keep the model from producing overly complicated curves and from overfitting), we add a Regularization term, defined as:
$$ Cost(X,Y) = Loss(X,Y) + \lambda \lVert w \rVert $$
where $\lVert w \rVert$ is the norm of the weight vector (its distance from the origin; see, for example, the L1 and L2 norms) used as a penalty, and lambda is a parameter that controls how strongly the penalty is applied. The larger lambda is, the more heavily large weights are penalized; if lambda is 0 we recover the initial model, which applies no regularization.
To find a good value of lambda, the dataset has to be split, and the cost evaluated on the held-out portion for each candidate lambda, as the loop below does.
End of explanation
"""
|
hvillanua/deep-learning | reinforcement/Q-learning-cart.ipynb | mit | import gym
import tensorflow as tf
import numpy as np
"""
Explanation: Deep Q-learning
In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called Cart-Pole. In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible.
We can simulate this game using OpenAI Gym. First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game.
End of explanation
"""
# Create the Cart-Pole game environment
env = gym.make('CartPole-v0')
"""
Explanation: Note: Make sure you have OpenAI Gym cloned into the same directory with this notebook. I've included gym as a submodule, so you can run git submodule --init --recursive to pull the contents into the gym repo.
End of explanation
"""
env.reset()
rewards = []
for _ in range(100):
env.render()
state, reward, done, info = env.step(env.action_space.sample()) # take a random action
rewards.append(reward)
if done:
rewards = []
env.reset()
"""
Explanation: We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1.
Run the code below to watch the simulation run.
End of explanation
"""
env.close()
"""
Explanation: To shut the window showing the simulation, use env.close().
End of explanation
"""
print(rewards[-20:])
"""
Explanation: If you ran the simulation above, we can look at the rewards:
End of explanation
"""
class QNetwork:
def __init__(self, learning_rate=0.01, state_size=4,
action_size=2, hidden_size=10,
name='QNetwork'):
# state inputs to the Q-network
with tf.variable_scope(name):
self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inputs')
# One hot encode the actions to later choose the Q-value for the action
self.actions_ = tf.placeholder(tf.int32, [None], name='actions')
one_hot_actions = tf.one_hot(self.actions_, action_size)
# Target Q values for training
self.targetQs_ = tf.placeholder(tf.float32, [None], name='target')
# ReLU hidden layers
self.fc1 = tf.contrib.layers.fully_connected(self.inputs_, hidden_size)
self.fc2 = tf.contrib.layers.fully_connected(self.fc1, hidden_size)
# Linear output layer
self.output = tf.contrib.layers.fully_connected(self.fc2, action_size,
activation_fn=None)
### Train with loss (targetQ - Q)^2
# output has length 2, for two actions. This next line chooses
# one value from output (per row) according to the one-hot encoded actions.
self.Q = tf.reduce_sum(tf.multiply(self.output, one_hot_actions), axis=1)
self.loss = tf.reduce_mean(tf.square(self.targetQs_ - self.Q))
self.opt = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)
"""
Explanation: The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.
Q-Network
We train our Q-learning agent using the Bellman Equation:
$$
Q(s, a) = r + \gamma \max_{a'}{Q(s', a')}
$$
where $s$ is a state, $a$ is an action, and $s'$ is the next state from state $s$ and action $a$.
Before we used this equation to learn values for a Q-table. However, for this game there are a huge number of states available. The state has four values: the position and velocity of the cart, and the position and velocity of the pole. These are all real-valued numbers, so ignoring floating point precisions, you practically have infinite states. Instead of using a table then, we'll replace it with a neural network that will approximate the Q-table lookup function.
<img src="assets/deep-q-learning.png" width=450px>
Now, our Q value, $Q(s, a)$ is calculated by passing in a state to the network. The output will be Q-values for each available action, with fully connected hidden layers.
<img src="assets/q-network.png" width=550px>
As I showed before, we can define our targets for training as $\hat{Q}(s,a) = r + \gamma \max_{a'}{Q(s', a')}$. Then we update the weights by minimizing $(\hat{Q}(s,a) - Q(s,a))^2$.
For this Cart-Pole game, we have four inputs, one for each value in the state, and two outputs, one for each action. To get $\hat{Q}$, we'll first choose an action, then simulate the game using that action. This will get us the next state, $s'$, and the reward. With that, we can calculate $\hat{Q}$ then pass it back into the $Q$ network to run the optimizer and update the weights.
Below is my implementation of the Q-network. I used two fully connected layers with ReLU activations. Two seems to be good enough, three might be better. Feel free to try it out.
End of explanation
"""
from collections import deque
class Memory():
def __init__(self, max_size = 1000):
self.buffer = deque(maxlen=max_size)
def add(self, experience):
self.buffer.append(experience)
def sample(self, batch_size):
idx = np.random.choice(np.arange(len(self.buffer)),
size=batch_size,
replace=False)
return [self.buffer[ii] for ii in idx]
"""
Explanation: Experience replay
Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on.
Here, we'll create a Memory object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maxmium capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those.
Below, I've implemented a Memory object. If you're unfamiliar with deque, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer.
End of explanation
"""
train_episodes = 1000 # max number of episodes to learn from
max_steps = 200 # max steps in an episode
gamma = 0.99 # future reward discount
# Exploration parameters
explore_start = 1.0 # exploration probability at start
explore_stop = 0.01 # minimum exploration probability
decay_rate = 0.0001 # exponential decay rate for exploration prob
# Network parameters
hidden_size = 64 # number of units in each Q-network hidden layer
learning_rate = 0.0001 # Q-network learning rate
# Memory parameters
memory_size = 10000 # memory capacity
batch_size = 20 # experience mini-batch size
pretrain_length = batch_size # number experiences to pretrain the memory
tf.reset_default_graph()
mainQN = QNetwork(name='main', hidden_size=hidden_size, learning_rate=learning_rate)
"""
Explanation: Exploration - Exploitation
To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability $1 - \epsilon$, the agent will choose an action from $Q(s,a)$. This is called an $\epsilon$-greedy policy.
At first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called exploitation. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training.
Q-Learning training algorithm
Putting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in episodes. One episode is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once that goal is met. The game ends if the pole tilts over too far, or if the cart moves too far to the left or right. When a game ends, we'll start a new episode. Now, to train the agent:
Initialize the memory $D$
Initialize the action-value network $Q$ with random weights
For episode = 1, $M$ do
For $t = 1$, $T$ do
With probability $\epsilon$ select a random action $a_t$, otherwise select $a_t = \mathrm{argmax}_a Q(s,a)$
Execute action $a_t$ in simulator and observe reward $r_{t+1}$ and new state $s_{t+1}$
Store transition $<s_t, a_t, r_{t+1}, s_{t+1}>$ in memory $D$
Sample random mini-batch from $D$: $<s_j, a_j, r_j, s'_j>$
Set $\hat{Q}_j = r_j$ if the episode ends at $j+1$, otherwise set $\hat{Q}_j = r_j + \gamma \max_{a'}{Q(s'_j, a')}$
Make a gradient descent step with loss $(\hat{Q}_j - Q(s_j, a_j))^2$
endfor
endfor
Hyperparameters
One of the more difficult aspects of reinforcement learning is the large number of hyperparameters. Not only are we tuning the network, but we're also tuning the simulation.
End of explanation
"""
# Initialize the simulation
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
memory = Memory(max_size=memory_size)
# Make a bunch of random actions and store the experiences
for ii in range(pretrain_length):
# Uncomment the line below to watch the simulation
# env.render()
# Make a random action
action = env.action_space.sample()
next_state, reward, done, _ = env.step(action)
if done:
# The simulation fails so no next state
next_state = np.zeros(state.shape)
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
"""
Explanation: Populate the experience memory
Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game.
End of explanation
"""
# Now train with experiences
saver = tf.train.Saver()
rewards_list = []
loss = 0  # define before the loop in case an episode ends before the first training step
with tf.Session() as sess:
# Initialize variables
sess.run(tf.global_variables_initializer())
step = 0
for ep in range(1, train_episodes):
total_reward = 0
t = 0
while t < max_steps:
step += 1
# Uncomment this next line to watch the training
# env.render()
# Explore or Exploit
explore_p = explore_stop + (explore_start - explore_stop)*np.exp(-decay_rate*step)
if explore_p > np.random.rand():
# Make a random action
action = env.action_space.sample()
else:
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
total_reward += reward
if done:
# the episode ends so no next state
next_state = np.zeros(state.shape)
t = max_steps
print('Episode: {}'.format(ep),
'Total reward: {}'.format(total_reward),
'Training loss: {:.4f}'.format(loss),
'Explore P: {:.4f}'.format(explore_p))
rewards_list.append((ep, total_reward))
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
t += 1
# Sample mini-batch from memory
batch = memory.sample(batch_size)
states = np.array([each[0] for each in batch])
actions = np.array([each[1] for each in batch])
rewards = np.array([each[2] for each in batch])
next_states = np.array([each[3] for each in batch])
# Train network
target_Qs = sess.run(mainQN.output, feed_dict={mainQN.inputs_: next_states})
# Set target_Qs to 0 for states where episode ends
episode_ends = (next_states == np.zeros(states[0].shape)).all(axis=1)
target_Qs[episode_ends] = (0, 0)
targets = rewards + gamma * np.max(target_Qs, axis=1)
loss, _ = sess.run([mainQN.loss, mainQN.opt],
feed_dict={mainQN.inputs_: states,
mainQN.targetQs_: targets,
mainQN.actions_: actions})
saver.save(sess, "checkpoints/cartpole.ckpt")
"""
Explanation: Training
Below we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
def running_mean(x, N):
cumsum = np.cumsum(np.insert(x, 0, 0))
return (cumsum[N:] - cumsum[:-N]) / N
eps, rews = np.array(rewards_list).T
smoothed_rews = running_mean(rews, 10)
plt.plot(eps[-len(smoothed_rews):], smoothed_rews)
plt.plot(eps, rews, color='grey', alpha=0.3)
plt.xlabel('Episode')
plt.ylabel('Total Reward')
"""
Explanation: Visualizing training
Below I'll plot the total rewards for each episode. I'm plotting the rolling average too, in blue.
End of explanation
"""
test_episodes = 10
test_max_steps = 400
env.reset()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
for ep in range(1, test_episodes):
t = 0
while t < test_max_steps:
env.render()
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
if done:
t = test_max_steps
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
state = next_state
t += 1
env.close()
"""
Explanation: Testing
Let's checkout how our trained agent plays the game.
End of explanation
"""
martinjrobins/hobo | examples/toy/distribution-annulus.ipynb | bsd-3-clause

import pints
import pints.toy
import numpy as np
import matplotlib.pyplot as plt
# Create log pdf (default is 2-dimensional with r0=10 and sigma=1)
log_pdf = pints.toy.AnnulusLogPDF()
# Contour plot of pdf
num_points = 100
x = np.linspace(-15, 15, num_points)
y = np.linspace(-15, 15, num_points)
X, Y = np.meshgrid(x, y)
Z = np.exp([[log_pdf([i, j]) for i in x] for j in y])
plt.contour(X, Y, Z)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
"""
Explanation: Annulus distribution
In this notebook we create an annulus distribution, which has the following density,
$$p(x|r_0, \sigma) \propto \exp\left(-\frac{(|x|-r_0)^2}{2\sigma^2}\right),$$
where $\sigma > 0$ and $|x|$ is the Euclidean norm of the $d$-dimensional $x$. This distribution was created for use in testing MCMC algorithms, because its geometry resembles that of a high-dimensional normal distribution (albeit in lower dimensions).
Plotting a two-dimensional version of this function with $r_0=10$ and $\sigma=1$.
End of explanation
"""
samples = log_pdf.sample(100)
num_points = 100
x = np.linspace(-15, 15, num_points)
y = np.linspace(-15, 15, num_points)
X, Y = np.meshgrid(x, y)
Z = np.exp([[log_pdf([i, j]) for i in x] for j in y])
plt.contour(X, Y, Z)
plt.scatter(samples[:, 0], samples[:, 1])
plt.xlabel('x')
plt.ylabel('y')
plt.show()
"""
Explanation: Generate independent samples from this distribution and plot them
End of explanation
"""
# Create an adaptive covariance MCMC routine
x0 = np.random.uniform([2, 2], [8, 8], size=(4, 2))
mcmc = pints.MCMCController(log_pdf, 4, x0, method=pints.HaarioBardenetACMC)
# Set maximum number of iterations
mcmc.set_max_iterations(4000)
# Disable logging
mcmc.set_log_to_screen(False)
# Number of chains
num_chains = 4
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard warm-up
chains = [chain[1000:] for chain in chains]
"""
Explanation: Use adaptive covariance MCMC to sample from this (un-normalised) pdf.
End of explanation
"""
stacked = np.vstack(chains)
plt.contour(X, Y, Z, colors='k', alpha=0.5)
plt.scatter(stacked[:,0], stacked[:,1], marker='.', alpha=0.2)
plt.xlim(-15, 15)
plt.ylim(-15, 15)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
"""
Explanation: Scatter plot of the samples. Adaptive covariance MCMC seems to do ok at sampling from this distribution.
End of explanation
"""
# Create an adaptive covariance MCMC routine
x0 = np.random.uniform([2, 2], [8, 8], size=(4, 2))
sigma0 = [2, 2]
mcmc = pints.MCMCController(log_pdf, 4, x0, method=pints.HamiltonianMCMC, sigma0=sigma0)
# Set maximum number of iterations
mcmc.set_max_iterations(500)
# Disable logging
# mcmc.set_log_to_screen(False)
# Number of chains
num_chains = 4
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard warm-up
chains = [chain[200:] for chain in chains]
"""
Explanation: Try Hamiltonian Monte Carlo on same problem.
End of explanation
"""
plt.contour(X, Y, Z, colors='k', alpha=0.5)
plt.plot(chains[0][:, 0], chains[0][:, 1])
plt.xlim(-15, 15)
plt.ylim(-15, 15)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
"""
Explanation: A single chain of HMC moves much more naturally around the annulus.
End of explanation
"""
log_pdf = pints.toy.AnnulusLogPDF(dimensions=3, r0=20, sigma=0.5)
# Create an adaptive covariance MCMC routine
x0 = np.zeros(log_pdf.n_parameters()) + np.random.normal(0, 1, size=(4, log_pdf.n_parameters()))
mcmc = pints.MCMCController(log_pdf, 4, x0, method=pints.HaarioBardenetACMC)
# Set maximum number of iterations
mcmc.set_max_iterations(4000)
# Disable logging
mcmc.set_log_to_screen(False)
# Number of chains
num_chains = 4
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard warm-up
chains = [chain[1000:] for chain in chains]
stacked = np.vstack(chains)
"""
Explanation: 3-dimensional annulus
Now creating a 3-dimensional annulus with $r_0=20$ and $\sigma=0.5$, then using adaptive covariance MCMC to sample from it.
End of explanation
"""
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(stacked[:, 0], stacked[:, 1], stacked[:, 2], '.', alpha=0.1)
ax.legend()
plt.show()
"""
Explanation: The samples are near to the surface of a sphere of radius 20.
End of explanation
"""
a_mean = np.mean(stacked, axis=0)
print("True mean = " + str(log_pdf.mean()))
print("Sample mean = " + str(a_mean))
"""
Explanation: We can see that the mean of the samples is a long way from the true value (0, 0, 0)
End of explanation
"""
log_pdf = pints.toy.AnnulusLogPDF(dimensions=10, r0=15, sigma=2)
# Create an adaptive covariance MCMC routine
x0 = np.zeros(log_pdf.n_parameters()) + np.random.normal(0, 1, size=(4, log_pdf.n_parameters()))
mcmc = pints.MCMCController(log_pdf, 4, x0, method=pints.HaarioBardenetACMC)
# Set maximum number of iterations
mcmc.set_max_iterations(8000)
# Disable logging
mcmc.set_log_to_screen(False)
# Number of chains
num_chains = 4
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard warm-up
chains = [chain[1000:] for chain in chains]
"""
Explanation: 10-dimensional annulus
Now creating a 10-dimensional annulus with $r_0=15$ and $\sigma=2$, then using adaptive covariance MCMC to sample from it.
End of explanation
"""
chain = np.vstack(chains)
d = list(map(lambda x: np.linalg.norm(x), chain))
a_mean = np.mean(d)
a_var = np.var(d)
print("True normed mean = " + str(log_pdf.mean_normed()))
print("Sample normed mean = " + str(a_mean))
print("True normed var = " + str(log_pdf.var_normed()))
print("Sample normed var = " + str(a_var))
"""
Explanation: Compare the theoretical mean and variance of the normed distance from the origin with the sample-based estimates. Does ok!
End of explanation
"""
a_mean = np.mean(chain, axis=0)
print("True mean = " + str(log_pdf.mean()))
print("Sample mean = " + str(a_mean))
"""
Explanation: Less good at recapitulating the actual mean.
End of explanation
"""
adityaka/misc_scripts | python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/02_01/Final/Object Creation.ipynb | bsd-3-clause

import pandas as pd
import numpy as np
"""
Explanation: Rapid Overview
build intuition about pandas
details later
documentation: http://pandas.pydata.org/pandas-docs/stable/10min.html
End of explanation
"""
my_series = pd.Series([1,3,5,np.nan,6,8])
my_series
"""
Explanation: Basic series; default integer index
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html
End of explanation
"""
my_dates_index = pd.date_range('20160101', periods=6)
my_dates_index
"""
Explanation: datetime index
documentation: http://pandas.pydata.org/pandas-docs/stable/timeseries.html
End of explanation
"""
sample_numpy_data = np.array(np.arange(24)).reshape((6,4))
sample_numpy_data
"""
Explanation: sample NumPy data
End of explanation
"""
sample_df = pd.DataFrame(sample_numpy_data, index=my_dates_index, columns=list('ABCD'))
sample_df
"""
Explanation: sample data frame, with column headers; uses our dates_index
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html
End of explanation
"""
df_from_dictionary = pd.DataFrame({
'float' : 1.,
'time' : pd.Timestamp('20160825'),
'series' : pd.Series(1,index=list(range(4)),dtype='float32'),
'array' : np.array([3] * 4,dtype='int32'),
'categories' : pd.Categorical(["test","train","taxes","tools"]),
'dull' : 'boring data'
})
df_from_dictionary
"""
Explanation: data frame from a Python dictionary
End of explanation
"""
df_from_dictionary.dtypes
"""
Explanation: pandas retains data type for each column
End of explanation
"""
sample_df.head()
sample_df.tail(2)
"""
Explanation: head and tail; default is 5 rows
End of explanation
"""
sample_df.values
sample_df.index
sample_df.columns
"""
Explanation: underlying data: values, index and columns
End of explanation
"""
sample_df.describe()
"""
Explanation: describe(): a quick statistical summary
notice: integer data summarized with floating point numbers
End of explanation
"""
pd.set_option('display.precision', 2)
sample_df.describe()
"""
Explanation: control precision of floating point numbers
for options and settings, please see: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.set_option.html
End of explanation
"""
sample_df.T
"""
Explanation: transpose rows and columns
End of explanation
"""
sample_df.sort_index(axis=1, ascending=False)
"""
Explanation: sort by axis
End of explanation
"""
sample_df.sort_values(by='B', ascending=False)
"""
Explanation: sort by data within a column (our data was already sorted)
End of explanation
"""
tensorflow/tensorrt | tftrt/examples/presentations/GTC-April2021-Dynamic-shape-ResNetV2.ipynb | apache-2.0

# Verbose output
# import os
# os.environ["TF_CPP_VMODULE"]="trt_engine_utils=2,trt_engine_op=2,convert_nodes=2,convert_graph=2,segment=2,trt_shape_optimization_profiles=2,trt_engine_resource_ops=2"
!pip install pillow matplotlib
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt
import numpy as np
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.framework import convert_to_constants
from tensorflow.keras.applications.resnet_v2 import ResNet50V2
from timeit import default_timer as timer
"""
Explanation: TF-TRT Dynamic shapes example: ResNetV2
This notebook demonstrates how to use the TF-TRT dynamic shape feature to generate a single engine with multiple optimization profiles, each optimized for a different batch size.
Requirements
This notebook requires at least TF 2.5 and TRT 7.1.3.
End of explanation
"""
model = ResNet50V2(weights='imagenet')
tf.saved_model.save(model, 'resnet_v2_50_saved_model')
"""
Explanation: Load and save the TF model
End of explanation
"""
def get_func_from_saved_model(saved_model_dir):
saved_model_loaded = tf.saved_model.load(
saved_model_dir, tags=[tag_constants.SERVING])
graph_func = saved_model_loaded.signatures[
signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
return graph_func, saved_model_loaded
def predict_and_benchmark_throughput(batched_input, model, N_warmup_run=50, N_run=500,
result_key='predictions', batch_size=None):
elapsed_time = []
all_preds = []
if batch_size is None or batch_size > batched_input.shape[0]:
batch_size = batched_input.shape[0]
print('Benchmarking with batch size', batch_size)
elapsed_time = np.zeros(N_run)
for i in range(N_warmup_run):
preds = model(batched_input)
# Force device synchronization
tmp = preds[result_key][0,0].numpy()
for i in range(N_run):
start_time = timer()
preds = model(batched_input)
# Synchronize
tmp += preds[result_key][0,0].numpy()
end_time = timer()
elapsed_time[i] = end_time - start_time
all_preds.append(preds)
if i>=50 and i % 50 == 0:
print('Steps {}-{} average: {:4.1f}ms'.format(i-50, i, (elapsed_time[i-50:i].mean()) * 1000))
print('Latency: {:5.2f}+/-{:4.2f}ms'.format(elapsed_time.mean()* 1000, elapsed_time.std()* 1000))
print('Throughput: {:.0f} images/s'.format(N_run * batch_size / elapsed_time.sum()))
return all_preds
def get_dummy_images(batch_size = 32, img_shape = [224, 224, 3]):
img=tf.random.uniform(shape=[batch_size] + img_shape, dtype=tf.float32)
print("Generated input random images with shape (N, H, W, C) =", img.shape)
return img
"""
Explanation: Helper functions
End of explanation
"""
!mkdir data
!wget -O ./data/img0.JPG "https://d17fnq9dkz9hgj.cloudfront.net/breed-uploads/2018/08/siberian-husky-detail.jpg?bust=1535566590&width=630"
!wget -O ./data/img1.JPG "https://www.hakaimagazine.com/wp-content/uploads/header-gulf-birds.jpg"
!wget -O ./data/img2.JPG "https://www.artis.nl/media/filer_public_thumbnails/filer_public/00/f1/00f1b6db-fbed-4fef-9ab0-84e944ff11f8/chimpansee_amber_r_1920x1080.jpg__1920x1080_q85_subject_location-923%2C365_subsampling-2.jpg"
!wget -O ./data/img3.JPG "https://www.familyhandyman.com/wp-content/uploads/2018/09/How-to-Avoid-Snakes-Slithering-Up-Your-Toilet-shutterstock_780480850.jpg"
"""
Explanation: Input data
Get some images from the web that will be used for testing the model
End of explanation
"""
input = get_dummy_images(128)
"""
Explanation: We will also use a batch of random pixels
End of explanation
"""
func, _ = get_func_from_saved_model('resnet_v2_50_saved_model')
output = func(input)
print('output shape',output['predictions'].shape)
res = predict_and_benchmark_throughput(input, func, N_run=100)
"""
Explanation: Run inference with TF
End of explanation
"""
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet_v2 import preprocess_input, decode_predictions
def infer_real_images(func, img_shape=(224,224)):
for i in range(4):
img_path = './data/img%d.JPG'%i
img = image.load_img(img_path, target_size=img_shape)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = func(tf.convert_to_tensor(x))['predictions'].numpy()
# decode the results into a list of tuples (class, description, probability)
# (one such list for each sample in the batch)
print('{} - Predicted: {}'.format(img_path, decode_predictions(preds, top=3)[0]))
plt.subplot(2,2,i+1)
plt.imshow(img);
plt.axis('off');
plt.title(decode_predictions(preds, top=3)[0][0][1])
infer_real_images(func)
"""
Explanation: Test the model on real data
End of explanation
"""
def trt_convert(input_path, output_path, input_shapes, explicit_batch=False):
conv_params=trt.TrtConversionParams(
precision_mode='FP16', minimum_segment_size=3,
max_workspace_size_bytes=1<<30, maximum_cached_engines=1)
converter = trt.TrtGraphConverterV2(
input_saved_model_dir=input_path, conversion_params=conv_params,
use_dynamic_shape=explicit_batch, dynamic_shape_profile_strategy='Optimal')
converter.convert()
def input_fn():
for shapes in input_shapes:
# return a list of input tensors
yield [np.ones(shape=x).astype(np.float32) for x in shapes]
converter.build(input_fn)
converter.save(output_path)
"""
Explanation: Convert the model with TF-TRT
End of explanation
"""
trt_convert(input_path="resnet_v2_50_saved_model", output_path="resnet_v2_50_trt",
input_shapes=[[input.shape]],
explicit_batch=False)
"""
Explanation: Convert using implicit batch mode
End of explanation
"""
trt_func, _ = get_func_from_saved_model('resnet_v2_50_trt')
trt_output = trt_func(input)
diff = output['predictions'] - trt_output['predictions']
np.max(np.abs(diff.numpy()))
"""
Explanation: Load converted model and check difference
End of explanation
"""
infer_real_images(trt_func)
"""
Explanation: Test the the converted model on real images
End of explanation
"""
res = predict_and_benchmark_throughput(input, trt_func)
"""
Explanation: Benchmark converted model
End of explanation
"""
input1 = get_dummy_images(1)
res = predict_and_benchmark_throughput(input1, trt_func)
input8 = get_dummy_images(8)
res = predict_and_benchmark_throughput(input8, trt_func)
input32 = get_dummy_images(32)
res = predict_and_benchmark_throughput(input32, trt_func)
input64 = get_dummy_images(64)
res = predict_and_benchmark_throughput(input64, trt_func)
"""
Explanation: Check with different batch sizes
End of explanation
"""
trt_convert(input_path="resnet_v2_50_saved_model", output_path="resnet_v2_50_trt_ds",
input_shapes=[[input.shape]],
explicit_batch=True)
trt_func_ds, _ = get_func_from_saved_model('resnet_v2_50_trt_ds')
trt_output = trt_func_ds(input)
diff = output['predictions'] - trt_output['predictions']
np.max(np.abs(diff.numpy()))
infer_real_images(trt_func_ds)
res = predict_and_benchmark_throughput(input, trt_func_ds)
"""
Explanation: Convert in dynamic mode
We convert for a single input shape first, just to test whether we get the same perf as in implicit batch mode.
End of explanation
"""
img_shape = [224, 224, 3]
trt_convert(input_path="resnet_v2_50_saved_model", output_path="resnet_v2_50_trt_ds2",
input_shapes=[[[1] + img_shape,],
[[8] + img_shape,],
[[32] + img_shape,],
[[64] + img_shape,],
[[128] + img_shape,]],
explicit_batch=True)
trt_func_ds2, _ = get_func_from_saved_model('resnet_v2_50_trt_ds2')
"""
Explanation: Convert with multiple profiles and benchmark
Here we show that we can have a single engine that is optimized to infer with various batch sizes: 1, 8, 32, 64, and 128
End of explanation
"""
res = predict_and_benchmark_throughput(input, trt_func_ds2)
"""
Explanation: Batch size 128
Same results as before.
End of explanation
"""
res = predict_and_benchmark_throughput(input1, trt_func_ds2)
"""
Explanation: Batch size 1
We compare the implicit batch engine to the dynamic shape engine. Note that the implicit batch engine handles batch sizes 1..32, and it was optimized for N=32. The dynamic shape engine has a profile optimized for N=1, therefore it is faster (1.5 ms vs 2.2 ms in implicit batch mode, measured on a V100 16GB card).
End of explanation
"""
res = predict_and_benchmark_throughput(input8, trt_func_ds2)
res = predict_and_benchmark_throughput(input32, trt_func_ds2)
res = predict_and_benchmark_throughput(input64, trt_func_ds2)
"""
Explanation: Batch size 8
This is also faster with the dynamic shape engine: 2.7 ms vs 3.1 ms in implicit batch mode.
End of explanation
"""
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
nvgreen = np.array((118, 185, 0)) / 255
medium_gray = np.array([140, 140, 140])/255
labels = ['1', '8', '32', '64', '128']
# Latency times for the dynamic shape and implicit batch benchmarks
ds_time = [1.6, 2.9, 7.5, 13.3, 24.0]
impl_time = [2.99, 3.9, 7.96, 13.5, 23.98]
x = np.arange(len(labels)) # the label locations
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(x - width/2, ds_time, width, color=nvgreen, label='Dynamic shape')
rects2 = ax.bar(x + width/2, impl_time, width, color=medium_gray, label='implicit batch')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('time (ms)')
ax.set_title('Inference latency (single TRT engine)')
ax.set_xticks(x)
ax.set_xlabel('batch size')
ax.set_xticklabels(labels)
ax.legend()
"""
Explanation: Plots
End of explanation
"""
jegibbs/phys202-2015-work | assignments/assignment06/InteractEx05.ipynb | mit

%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from IPython.html.widgets import interact, interactive, fixed
from IPython.html import widgets
from IPython.display import SVG
from IPython.display import display
"""
Explanation: Interact Exercise 5
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
"""
s ="""
<svg width="100" height="100">
<circle cx="50" cy="50" r="20" fill="aquamarine" />
</svg>"""
SVG(s)
#why am I getting an error?
"""
Explanation: Interact with SVG display
SVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:
End of explanation
"""
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
"""Draw an SVG circle.
Parameters
----------
width : int
The width of the svg drawing area in px.
height : int
The height of the svg drawing area in px.
cx : int
The x position of the center of the circle in px.
cy : int
The y position of the center of the circle in px.
r : int
The radius of the circle in px.
fill : str
The fill color of the circle.
"""
_svg = """
<svg width="%s" height="%s">
<circle cx="%s" cy="%s" r="%s" fill="%s" />
</svg>"""
s = _svg % (width, height, cx, cy, r, fill)
display(SVG(s))
draw_circle(cx=10, cy=10, r=10, fill='blue')
assert True # leave this to grade the draw_circle function
"""
Explanation: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
End of explanation
"""
w=interactive(draw_circle, width=fixed(300), height=fixed(300), cx=widgets.IntSlider(min=0,max=300,value=150), cy=widgets.IntSlider(min=0,max=300,value=150), r=widgets.IntSlider(min=0,max=50,value=25), fill=widgets.Textarea('red'))
c = w.children
assert c[0].min==0 and c[0].max==300
assert c[1].min==0 and c[1].max==300
assert c[2].min==0 and c[2].max==50
assert c[3].value=='red'
"""
Explanation: Use interactive to build a user interface for exploring the draw_circle function:
width: a fixed value of 300px
height: a fixed value of 300px
cx/cy: a slider in the range [0,300]
r: a slider in the range [0,50]
fill: a text area in which you can type a color's name
Save the return value of interactive to a variable named w.
End of explanation
"""
display(w)
assert True # leave this to grade the display of the widget
"""
Explanation: Use the display function to show the widgets created by interactive:
End of explanation
"""
mne-tools/mne-tools.github.io | 0.22/_downloads/6684371ec2bc8e72513b3bdbec0d3a9f/plot_20_events_from_raw.ipynb | bsd-3-clause

import os
import numpy as np
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(tmax=60).load_data()
"""
Explanation: Parsing events from raw data
This tutorial describes how to read experimental events from raw recordings,
and how to convert between the two different representations of events within
MNE-Python (Events arrays and Annotations objects).
In the introductory tutorial <overview-tut-events-section> we saw an
example of reading experimental events from a :term:"STIM" channel <stim
channel>; here we'll discuss :term:events and :term:annotations more
broadly, give more detailed information about reading from STIM channels, and
give an example of reading events that are in a marker file or included in
the data file as an embedded array. The tutorials tut-event-arrays and
tut-annotate-raw discuss how to plot, combine, load, save, and
export :term:events and :class:~mne.Annotations (respectively), and the
latter tutorial also covers interactive annotation of :class:~mne.io.Raw
objects.
We'll begin by loading the Python modules we need, and loading the same
example data <sample-dataset> we used in the introductory tutorial
<tut-overview>, but to save memory we'll crop the :class:~mne.io.Raw object
to just 60 seconds before loading it into RAM:
End of explanation
"""
raw.copy().pick_types(meg=False, stim=True).plot(start=3, duration=6)
"""
Explanation: The Events and Annotations data structures
Generally speaking, both the Events and :class:~mne.Annotations data
structures serve the same purpose: they provide a mapping between times
during an EEG/MEG recording and a description of what happened at those
times. In other words, they associate a when with a what. The main
differences are:
Units: the Events data structure represents the when in terms of
samples, whereas the :class:~mne.Annotations data structure represents
the when in seconds.
Limits on the description: the Events data structure represents the
what as an integer "Event ID" code, whereas the
:class:~mne.Annotations data structure represents the what as a
string.
How duration is encoded: Events in an Event array do not have a
duration (though it is possible to represent duration with pairs of
onset/offset events within an Events array), whereas each element of an
:class:~mne.Annotations object necessarily includes a duration (though
the duration can be zero if an instantaneous event is desired).
Internal representation: Events are stored as an ordinary
:class:NumPy array <numpy.ndarray>, whereas :class:~mne.Annotations is
a :class:list-like class defined in MNE-Python.
What is a STIM channel?
A :term:stim channel (short for "stimulus channel") is a channel that does
not receive signals from an EEG, MEG, or other sensor. Instead, STIM channels
record voltages (usually short, rectangular DC pulses of fixed magnitudes
sent from the experiment-controlling computer) that are time-locked to
experimental events, such as the onset of a stimulus or a button-press
response by the subject (those pulses are sometimes called TTL_ pulses,
event pulses, trigger signals, or just "triggers"). In other cases, these
pulses may not be strictly time-locked to an experimental event, but instead
may occur in between trials to indicate the type of stimulus (or experimental
condition) that is about to occur on the upcoming trial.
The DC pulses may be all on one STIM channel (in which case different
experimental events or trial types are encoded as different voltage
magnitudes), or they may be spread across several channels, in which case the
channel(s) on which the pulse(s) occur can be used to encode different events
or conditions. Even on systems with multiple STIM channels, there is often
one channel that records a weighted sum of the other STIM channels, in such a
way that voltage levels on that channel can be unambiguously decoded as
particular event types. On older Neuromag systems (such as that used to
record the sample data) this "summation channel" was typically STI 014;
on newer systems it is more commonly STI101. You can see the STIM
channels in the raw data file here:
End of explanation
"""
events = mne.find_events(raw, stim_channel='STI 014')
print(events[:5]) # show the first 5
"""
Explanation: You can see that STI 014 (the summation channel) contains pulses of
different magnitudes whereas pulses on other channels have consistent
magnitudes. You can also see that every time there is a pulse on one of the
other STIM channels, there is a corresponding pulse on STI 014.
.. TODO: somewhere in prev. section, link out to a table of which systems
have STIM channels vs. which have marker files or embedded event arrays
(once such a table has been created).
Converting a STIM channel signal to an Events array
If your data has events recorded on a STIM channel, you can convert them into
an events array using :func:mne.find_events. The sample number of the onset
(or offset) of each pulse is recorded as the event time, the pulse magnitudes
are converted into integers, and these pairs of sample numbers plus integer
codes are stored in :class:NumPy arrays <numpy.ndarray> (usually called
"the events array" or just "the events"). In its simplest form, the function
requires only the :class:~mne.io.Raw object, and the name of the channel(s)
from which to read events:
End of explanation
"""
testing_data_folder = mne.datasets.testing.data_path()
eeglab_raw_file = os.path.join(testing_data_folder, 'EEGLAB', 'test_raw.set')
eeglab_raw = mne.io.read_raw_eeglab(eeglab_raw_file)
print(eeglab_raw.annotations)
"""
Explanation: .. sidebar:: The middle column of the Events array
MNE-Python events are actually *three* values: in between the sample
number and the integer event code is a value indicating what the event
code was on the immediately preceding sample. In practice, that value is
almost always ``0``, but it can be used to detect the *endpoint* of an
event whose duration is longer than one sample. See the documentation of
:func:`mne.find_events` for more details.
If you don't provide the name of a STIM channel, :func:~mne.find_events
will first look for MNE-Python config variables <tut-configure-mne>
for variables MNE_STIM_CHANNEL, MNE_STIM_CHANNEL_1, etc. If those are
not found, channels STI 014 and STI101 are tried, followed by the
first channel with type "STIM" present in raw.ch_names. If you regularly
work with data from several different MEG systems with different STIM channel
names, setting the MNE_STIM_CHANNEL config variable may not be very
useful, but for researchers whose data is all from a single system it can be
a time-saver to configure that variable once and then forget about it.
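The order of precedence described above can be sketched in plain Python. To be clear, this is an illustrative, hypothetical helper, not the actual MNE-Python implementation:

```python
def pick_stim_channel(config, ch_names, ch_types):
    """Sketch of the fallback order used when no stim channel is given."""
    # 1. MNE config variables, checked in order.
    for key in ('MNE_STIM_CHANNEL', 'MNE_STIM_CHANNEL_1', 'MNE_STIM_CHANNEL_2'):
        if key in config:
            return config[key]
    # 2. Conventional Neuromag channel names.
    for name in ('STI 014', 'STI101'):
        if name in ch_names:
            return name
    # 3. The first channel whose type is "stim".
    for name, kind in zip(ch_names, ch_types):
        if kind == 'stim':
            return name
    raise ValueError('No stim channel found')

print(pick_stim_channel({}, ['EEG 001', 'STI101'], ['eeg', 'stim']))  # STI101
```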
:func:~mne.find_events has several options, including options for aligning
events to the onset or offset of the STIM channel pulses, setting the minimum
pulse duration, and handling of consecutive pulses (with no return to zero
between them). For example, you can effectively encode event duration by
passing output='step' to :func:mne.find_events; see the documentation
of :func:~mne.find_events for details. More information on working with
events arrays (including how to plot, combine, load, and save event arrays)
can be found in the tutorial tut-event-arrays.
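To get a feel for what onset/offset alignment and output='step' mean, here is a rough NumPy sketch (again synthetic data, not MNE code) that reports every edge of every pulse as (sample, previous value, new value) triples, in the spirit of the three-column events array:

```python
import numpy as np

stim = np.array([0, 0, 5, 5, 5, 0, 0, 3, 3, 0])

# A "step" is any sample where the value changes; recording both
# rising and falling edges effectively encodes event durations.
change = np.where(np.diff(stim) != 0)[0] + 1
steps = [(int(s), int(stim[s - 1]), int(stim[s])) for s in change]
print(steps)  # [(2, 0, 5), (5, 5, 0), (7, 0, 3), (9, 3, 0)]
```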
Reading embedded events as Annotations
Some EEG/MEG systems generate files where events are stored in a separate
data array rather than as pulses on one or more STIM channels. For example,
the EEGLAB format stores events as a collection of arrays in the :file:.set
file. When reading those files, MNE-Python will automatically convert the
stored events into an :class:~mne.Annotations object and store it as the
:attr:~mne.io.Raw.annotations attribute of the :class:~mne.io.Raw object:
End of explanation
"""
print(len(eeglab_raw.annotations))
print(set(eeglab_raw.annotations.duration))
print(set(eeglab_raw.annotations.description))
print(eeglab_raw.annotations.onset[0])
"""
Explanation: The core data within an :class:~mne.Annotations object is accessible
through three of its attributes: onset, duration, and
description. Here we can see that there were 154 events stored in the
EEGLAB file, they all had a duration of zero seconds, there were two
different types of events, and the first event occurred about 1 second after
the recording began:
End of explanation
"""
events_from_annot, event_dict = mne.events_from_annotations(eeglab_raw)
print(event_dict)
print(events_from_annot[:5])
"""
Explanation: More information on working with :class:~mne.Annotations objects, including
how to add annotations to :class:~mne.io.Raw objects interactively, and how
to plot, concatenate, load, save, and export :class:~mne.Annotations
objects can be found in the tutorial tut-annotate-raw.
Converting between Events arrays and Annotations objects
Once your experimental events are read into MNE-Python (as either an Events
array or an :class:~mne.Annotations object), you can easily convert between
the two formats as needed. You might do this because, e.g., an Events array
is needed for epoching continuous data, or because you want to take advantage
of the "annotation-aware" capability of some functions, which automatically
omit spans of data if they overlap with certain annotations.
To convert an :class:~mne.Annotations object to an Events array, use the
function :func:mne.events_from_annotations on the :class:~mne.io.Raw file
containing the annotations. This function will assign an integer Event ID to
each unique element of raw.annotations.description, and will return the
mapping of descriptions to integer Event IDs along with the derived Event
array. By default, one event will be created at the onset of each annotation;
this can be modified via the chunk_duration parameter of
:func:~mne.events_from_annotations to create equally spaced events within
each annotation span (see chunk-duration, below, or see
fixed-length-events for direct creation of an Events array of
equally-spaced events).
End of explanation
"""
custom_mapping = {'rt': 77, 'square': 42}
(events_from_annot,
event_dict) = mne.events_from_annotations(eeglab_raw, event_id=custom_mapping)
print(event_dict)
print(events_from_annot[:5])
"""
Explanation: If you want to control which integers are mapped to each unique description
value, you can pass a :class:dict specifying the mapping as the
event_id parameter of :func:~mne.events_from_annotations; this
:class:dict will be returned unmodified as the event_dict.
.. TODO add this when the other tutorial is nailed down:
Note that this event_dict can be used when creating
:class:~mne.Epochs from :class:~mne.io.Raw objects, as demonstrated
in :doc:epoching_tutorial_whatever_its_name_is.
End of explanation
"""
mapping = {1: 'auditory/left', 2: 'auditory/right', 3: 'visual/left',
4: 'visual/right', 5: 'smiley', 32: 'buttonpress'}
annot_from_events = mne.annotations_from_events(
events=events, event_desc=mapping, sfreq=raw.info['sfreq'],
orig_time=raw.info['meas_date'])
raw.set_annotations(annot_from_events)
"""
Explanation: To make the opposite conversion (from an Events array to an
:class:~mne.Annotations object), you can create a mapping from integer
Event ID to string descriptions, use ~mne.annotations_from_events
to construct the :class:~mne.Annotations object, and call the
:meth:~mne.io.Raw.set_annotations method to add the annotations to the
:class:~mne.io.Raw object.
Because the sample data <sample-dataset> was recorded on a Neuromag
system (where sample numbering starts when the acquisition system is
initiated, not when the recording is initiated), we also need to pass in
the orig_time parameter so that the onsets are properly aligned relative
to the start of recording:
End of explanation
"""
raw.plot(start=5, duration=5)
"""
Explanation: Now, the annotations will appear automatically when plotting the raw data,
and will be color-coded by their label value:
End of explanation
"""
# create the REM annotations
rem_annot = mne.Annotations(onset=[5, 41],
duration=[16, 11],
description=['REM'] * 2)
raw.set_annotations(rem_annot)
(rem_events,
rem_event_dict) = mne.events_from_annotations(raw, chunk_duration=1.5)
"""
Explanation: Making multiple events per annotation
As mentioned above, you can generate equally-spaced events from an
:class:~mne.Annotations object using the chunk_duration parameter of
:func:~mne.events_from_annotations. For example, suppose we have an
annotation in our :class:~mne.io.Raw object indicating when the subject was
in REM sleep, and we want to perform a resting-state analysis on those spans
of data. We can create an Events array with a series of equally-spaced events
within each "REM" span, and then use those events to generate (potentially
overlapping) epochs that we can analyze further.
End of explanation
"""
print(np.round((rem_events[:, 0] - raw.first_samp) / raw.info['sfreq'], 3))
"""
Explanation: Now we can check that our events indeed fall in the ranges 5-21 seconds and
41-52 seconds, and are ~1.5 seconds apart (modulo some jitter due to the
sampling frequency). Here are the event times rounded to the nearest
millisecond:
End of explanation
"""
|
fluffy-hamster/A-Beginners-Guide-to-Python | A Beginners Guide to Python/21. The joy of fast cars.ipynb | mit | # Attempt 1
def is_prime(num):
"""Returns True if number is prime, False otherwise"""
    if num <= 1: return False # numbers less than 2 are not prime
# check for factors
for i in range(2,num): # for loop that iterates 2-to-num. Each number in the iteration is called "i"
if (num % i) == 0: # modular arithmetic; this asks if num is divisible by i (with no remainder).
return False
    # If we have iterated through every number up to num without finding a divisor, it must be prime.
return True
# Making the list:
def get_primes(b):
primes = []
for num in range(0, b+1):
if is_prime(num): # Yes, you can call functions inside other functions!
primes.append(num) # If prime, add it to the list
return primes
print(get_primes(400))
"""
Explanation: The joy of fast cars
Hi guys, so in this lecture I want to talk about speed in Python. Now, as beginners, you should be focusing on writing correct programs and not pay too much mind to how fast they run. With that said, I do think that thinking about the speed of code can actually be fun; it's problem solving and a challenge.
The 'joy of fast cars' is a somewhat cryptic title, but the explanation is straightforward; one of the things I find a lot of fun is trying to make my code more efficient. I genuinely enjoy the process of taking a bit of code and trying to come up with ways to make it faster. In my mind, programming is at its most interesting when you can look past stuff like language syntax and instead focus on the very nature of the problem itself. The aim today is to give you a glimpse of that.
The problem we shall be looking at today is the following:
How can we list all of the prime numbers from 0 to N?
Let's start by splitting the process into two parts: first, we need a way of knowing whether a number is prime or not. Once we have that, we need to check all the numbers from 0 to N.
End of explanation
"""
import sys
sys.path.append(".\misc") # Adding to sys.path allows us to find "profile_code.py"
from profile_code import profile
result = profile(get_primes, 100_000) # Get all primes, 0..100,000
print(result)
"""
Explanation: Okay cool, now that we have a working solution the next question is how to improve its speed. The first and most obvious starting point is to time the code.
Let's do that now...
End of explanation
"""
for i in range(1, 20):
print(i, "--", 20/i)
"""
Explanation: In the above output, we can see that it took about 32 seconds to get all the primes less than 100,000.
We can also see that almost all of the time is taken by the "is_prime" function (tottime column). This is important, this means that improving the performance of "is_prime" is going to considerably increase performance, whereas improving the speed of the get_primes function will have almost no impact.
For example, list comprehensions are faster than for-loops, so I could speed up the 'get_primes' function that way. For argument's sake, let's suppose using a list comprehension can speed that function up by a massive 90%!! That's a huge improvement! But when we look at the total time we see that get_primes took 0.015 seconds on my machine, so a 90% improvement would only save us about 0.013 seconds.
Okay, so the function we need to improve is the ‘is_prime’ function. I think the first line of code to study is the for-loop:
for i in range(2, num):
So this is where the fun begins! To solve this puzzle we need to be a bit creative. Improving the speed of this line of code is not simply a matter of "knowing more Python"; rather, we need to think logically and apply a splash of mathematics. Here, let me show you something:
End of explanation
"""
n = 4
for i in range(2, n//2):
print("(1)...", i)
# Nothing is printed! WTF!!
# Okay, attempted fix:
for i in range(2, n//2+1):
print("(2)..." , i)
# Attempt 2
def is_prime2(num):
"""Returns True if number is prime, False otherwise"""
if num <= 1: return False
# check for factors
for i in range(2, (num//2) + 1): ## tweaked
if (num % i) == 0:
return False
return True
# Making the list:
def get_primes2(b):
primes = []
for num in range(0, b+1):
if is_prime2(num): ## call our new prime function...
primes.append(num)
return primes
import sys
sys.path.append(".\misc") # Adding to sys.path allows us to find "profile_code.py"
from profile_code import profile
result = profile(get_primes2, 100_000) # Get all primes, 0..100,000
print(result)
"""
Explanation: So this code is dividing the number 20 by 'i', where 'i' runs from 1 to 19. The salient point here is that none of the results past i = 10 are whole numbers. This makes a lot of sense when you think about it; the minimum number of 'integer parts' we can split a number into (besides one part) is two. Thus, when we start looking at divisors greater than n/2 the result will never be a whole number. And that holds for all numbers, not just 20.
Now, we can use this information to make our prime search smarter. As things currently stand, proving 1499 is prime requires about that many steps; our code is (at the moment) asking if numbers like 1001, 1002, 1003, ... are divisors of 1499, but as the above logic demonstrates, these checks are unnecessary. So, if we stop iterating at the number 750 we can approximately halve the time it takes to find a prime number and still have a correct solution.
for i in range(2, num//2):
As a quick note, we are using integer division here because the range function cannot handle floats. Now, before we run the benchmark, we need to check for correctness; whenever you make a change, even a small one, you should test it on a few inputs. We want to check that we haven't broken anything (more on 'regression testing' later). With this in mind, I ran the following code on my machine (where is_prime is the old function and is_prime2 is the one with the change):
x = [i for i in range(0,30000) if is_prime(i)]
y = [i for i in range(0,30000) if is_prime2(i)]
print(x == y) ---> False
We have a bug batman! What went wrong? To find out, I ran the following bit of code:
x2 = set(x)
y2 = set(y)
x2.symmetric_difference(y2) ---> {4}
I converted the lists to sets because sets have this handy method for quickly telling the difference between two items. It turns out we have two lists, each with 3200+ numbers and the only difference is that one of these lists contains the number 4 and the other does not. So what’s the problem?
Well, our new function uses:
range(2, n//2)
and:
4//2 == 2
In short, our change to the function works great for large inputs but breaks for tiny ones. I think the simplest fix is to use (n//2)+1, which should correct the error at an insignificant performance cost.
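A quick, self-contained sanity check of the fixed function (re-stated here so the snippet runs on its own):

```python
def is_prime2(num):
    """Tweaked primality test: check divisors up to (num // 2) + 1."""
    if num <= 1:
        return False
    for i in range(2, (num // 2) + 1):
        if num % i == 0:
            return False
    return True

# The old range(2, num // 2) version wrongly reported 4 as prime;
# with the +1 the small cases come out right again.
assert [n for n in range(10) if is_prime2(n)] == [2, 3, 5, 7]
print("small inputs OK")
```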
End of explanation
"""
# Attempt 3
def is_prime3(num):
"""Returns True if number is prime, False otherwise"""
if num <= 1:
return False
if num == 2:
return True
if num % 2 == 0:
# notice that this check occurs AFTER we check is num == 2.
return False
# check for factors
for i in range(3,num//2+1, 2): # range function starts at odd number with a step of 2.
if (num % i) == 0:
return False
return True
# Making the list:
def get_primes3(b):
primes = []
for num in range(0, b+1):
if is_prime3(num): ## call our new prime function...
primes.append(num)
return primes
"""
Explanation: So this small change has roughly halved the amount of time it takes to get all primes up to 100,000.
Are we done? Well actually I can think of a few more tweaks...
Let's think about the nature of primes for a moment. The definition of a prime is that it is only divisible by itself and 1. And since an even number is, by definition, divisible by 2, we know that the only even prime is 2.
Now think of a large odd number (not necessarily prime). Our code is going to ask if 2, 4, 6, 8, 10, 12… are divisors. But from the definition of even numbers we know that if 12, 18, 22, etc. are divisors of X then so is 2. Which therefore means that if 2 is not a divisor then neither is any other even number (4, 6, 8, 100, 102, etc.).
In short, checking for 2 is equivalent to checking for all even numbers. Can we apply this insight to our code? I think so:
End of explanation
"""
import sys
sys.path.append(".\misc") # Adding to sys.path allows us to find "profile_code.py"
from profile_code import profile
result = profile(get_primes3, 100_000) # Get all primes, 0..100,000
print(result)
"""
Explanation: So in the above code we check whether a number is divisible by 2 just once. After that, we only check whether odd numbers are divisors of n. This change roughly halves the search space. Okay, let's benchmark it!
End of explanation
"""
# Attempt 4
from math import sqrt, ceil
def is_prime4(num):
"""Returns True if number is prime, False otherwise"""
if num <= 1:
return False
if num == 2:
return True
if num % 2 == 0:
return False
# check for factors
for i in range(3,ceil(sqrt(num))+1, 2): # upto sqrt of N (always rounded up)
if (num % i) == 0:
return False
return True
# Making the list:
def get_primes4(b):
primes = []
for num in range(0, b+1):
if is_prime4(num): ## call our new prime function...
primes.append(num)
return primes
import sys
sys.path.append(".\misc") # Adding to sys.path allows us to find "profile_code.py"
from profile_code import profile
result = profile(get_primes4, 100_000) # Get all primes, 0..100,000
print(result)
"""
Explanation: So now we are down to 8secs.
Can we do better? Perhaps, but I’m out of ideas at this point. But hey, google is a treasure trove of information, I wonder if there is some other maths ‘trick’ out there we could use…
After googling, I found out that apparently we can use the square root of n! And here I’m going to reproduce a maths proof which I found here.
Proof
Imagine we have two numbers A, B such that A * B = N
Now there are three possible cases.
A > B
A = B
A < B
Notice that because A * B is the same as B * A, cases 1 and 3 are equivalent. So that means we only need to check cases 1 and 2.
If A = B then A * B can be rewritten as A * A, and if A * A = N then A is, by definition, the square root of N. And in case 1 (A > B), B must be smaller than the square root of N. Either way, every factor pair of N contains a factor no larger than sqrt(N), so checking divisors up to sqrt(N) is enough.
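We can also sanity-check the claim by brute force before touching the main code. For every composite n, its smallest factor (other than 1) should never exceed the square root of n (math.isqrt gives the integer square root, available since Python 3.8):

```python
import math

# Check the claim empirically for every composite number below 2000.
for n in range(4, 2000):
    factors = [i for i in range(2, n) if n % i == 0]
    if factors:  # n is composite
        assert min(factors) <= math.isqrt(n), n
print("smallest factor is always <= sqrt(n)")
```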
Benchmarking Sqrt(N)
Alright, how can we implement this. Well, it took a bit of testing, but eventually I came up with this line (after importing the math module, of course):
for num in range(3, math.ceil(math.sqrt(num))+1, 2):
Sqrt(n) is, in many cases, not a whole number, and as discussed elsewhere range requires an integer. That's where math.ceil comes in: it rounds up to the next integer (e.g. math.ceil(6.0003) ---> 7). I then add one so that perfect squares such as 9 and 25 (whose only relevant divisor is exactly sqrt(n)) are handled correctly.
How much faster do we think this function will be? Let's run it!
End of explanation
"""
def sieve_of_eratosthenes(n):
    mark = [True] * (n + 1)  # index i represents the number i, for 0..n inclusive
    mark[0] = False
    mark[1] = False  # 0 and 1 are not prime, so we set these values to False.
    i = 2  # start at index 2 since 2 is the smallest prime.
    while i < len(mark):
        p = i
        i += 1
        if not mark[p]:
            ## Ignore non-primes
            continue
        multiplier = 2
        while p * multiplier <= n:
            ## Set all the multiples of p to False (since they cannot be prime).
            mark[p * multiplier] = False
            multiplier += 1
    return [i for i in range(len(mark)) if mark[i]]  ## collect the indices still marked True; these are the primes
import sys
sys.path.append(".\misc") # Adding to sys.path allows us to find "profile_code.py"
from profile_code import profile
result = profile(sieve_of_eratosthenes, 100_000) # Get all primes, 0..100,000
print(result)
"""
Explanation: When we started, it took 32 seconds to get all the primes from 0 to 100_000 (on my pc). With the square root trick it now takes 0.15 seconds; that's a massive difference.
You might wonder why the square root made such a big difference. The reason has to do with how functions scale with input size. For example, if we skip all the even numbers less than N, and we skip all the numbers greater than N / 2, then we have reduced the search space to approximately N / 4. This is a linear improvement; if the number is 1,000,000 then we still need to check 250,000 numbers. For 2,000,000 we check 500,000 numbers, and so on.
What about the square root? Well, the square root of 1,000,000 is 1000 and the square root of 2,000,000 is about 1414. As you can see, these numbers grow at a much slower rate, and that's why it is considerably faster. For a more theoretical explanation, please google "Big O Notation".
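A quick illustration of that difference in growth (the input sizes are arbitrary):

```python
import math

for n in (1_000_000, 2_000_000, 4_000_000):
    print(n, "-> linear (n // 4):", n // 4, "| sqrt:", math.isqrt(n))
# The linear column doubles every time n doubles; the sqrt column grows far slower.
```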
Are we done?
Rethinking the problem
So, our task is to create lists of primes up to N, and so far our strategy for improving performance has been to reduce the number of divisors we need to check.
Let's think outside the box for a moment. Imagine that there is an empty swimming pool filled with small plastic balls and equally sized lead balls. We want to sort them. How might we do it?
Well, we could jump into the pool and pick the balls up one by one. If a ball is heavy we put it to one side. Maybe we could improve this process by getting a friend to help. Maybe we get fifty friends, and at some point we realise that the way to improve performance is to manage people better (i.e. teamwork), and so we go down a rabbit-hole of small incremental improvements.
But then a new idea comes along! It's a solution that doesn't need fifty people; in fact, it's a solution whose running time is independent of the number of people working on the task. Can you guess what the idea may be?
Okay, here it is: fill the pool with water. The lead balls will stay at the bottom but the plastic balls will float. Get a big net and voila! The balls are sorted.
How can we apply this analogy to the current problem? Well, right now we are searching for 'needles in haystacks'. We ask if n is prime, then ask if n + 1 is prime, and so on. Thus far, our optimisation technique has been "wait a minute, we do not need to check every straw": there are some numbers we can just ignore.
But what if there was some other way?
After a bit of research, it seems like the "Sieve of Atkin" is the fastest known algorithm, but it is rather complex. Another approach is the "Sieve of Eratosthenes". This algorithm is easier to implement, and it may also be faster than our current method.
This method uses a different trick. If you want a detailed explanation check the wiki article, but basically it boils down to this:
All numbers are either prime or have prime factors.
If P is prime, then P * Q cannot be prime. (where Q > 1).
Therefore, to get all prime numbers from 1..N we take a prime P and then remove all multiples of P that are no greater than N (e.g. P * 2, P * 3, P * 4, P * 5, etc.).
Repeat step 3 for all primes.
So, if N is 20:
2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20
2 is prime, so now we get rid of 2 * 2, 2 * 3, ..., 2 * 10
2, 3, _, 5, _, 7, _, 9, _, 11, _, 13, _, 15, _, 17, _, 19, _
The next number is 3 so we remove 3* 2, 3 * 3, 3 * 4, ..., 3 * 6
2, 3, _, 5, _, 7, _, _, _, 11, _, 13, _, _, _, 17, _, 19, _
The next is 5, so we remove 5 * 2 ... 5 * 4
And the next after that is 7 so we remove 7 * 2.
And then since 11 * 2 is greater than N we can stop the process.
Alright, that explains the algorithm (more or less), so let's code it up!
End of explanation
"""
import sys
sys.path.append(".\misc") # Adding to sys.path allows us to find "profile_code.py"
from profile_code import profile
result = profile(get_primes4, 10_000_000) # Get all primes, 0..10,000,000
print(result)
import sys
sys.path.append(".\misc") # Adding to sys.path allows us to find "profile_code.py"
from profile_code import profile
result = profile(sieve_of_eratosthenes, 10_000_000) # Get all primes, 0..10,000,000
print(result)
"""
Explanation: Okay, so it seems faster; 0.072 is less than 0.153. However, when you see a very small difference in time the result can be unreliable, because background processes and other things happening on the computer may affect the results. The simplest way to check that we really do have a genuine difference is to increase the input size. I know! Let's get all the prime numbers less than 10 million!
End of explanation
"""
# Quick check to ensure all the functions return the correct answer..
a = get_primes(1000)
b = get_primes2(1000)
c = get_primes3(1000)
d = get_primes4(1000)
e = sieve_of_eratosthenes(1000)
a == b == c == d == e
"""
Explanation: So when we check for all primes less than 10 million the sieve_of_eratosthenes really stands out; 9 seconds versus 66 seconds for our previous champion.
Anyway, the main lesson I want you to take away is that optimisation can be thought of as two separate ideas. The first is low-level tinkering; in other words, we look at all the small details and see if we can save a byte or two of memory here or there. But then there is 'high-level' optimisation, and that is where we try to come up with an entirely new (and hopefully better) strategy for solving the problem.
In this lecture we started by searching for needles in haystacks (checking whether each number was prime). We improved that by checking fewer straws. And then we came up with a new idea, which was a bit like searching for needles by setting the haystack on fire; the idea being that the only thing left will be the needles (i.e. the primes).
End of explanation
"""
import math
# My code, this is the function to beat! How can you improve it?
def squares(x):
lst = []
for number in range(1, x+1):
square = math.sqrt(number) # We call the square_root function on the number.
if square.is_integer():
# is_integer is a float method that returns true if the the number can be represented as an integer.
# for example, 4.0 = True, 4.89 = False
if number % 2 != 0: # checks if number is odd.
lst.append(number)
return lst
print(squares(1000))
import sys
sys.path.append(".\misc") # Adding to sys.path allows us to find "profile_code.py"
from profile_code import profile
import math
def my_squares(x):
"""
X: an Int
    function returns a list of all odd square numbers <= X
>>> my_squares(100)
[1, 9, 25, 49, 81]
"""
# YOUR CODE GOES HERE !!!.
# Note, don't change name of this function, if you do, I cant test it!
return hamster_squares(x) ## CHANGE ME!!!
##################################
# MY CODE, a.k.a THE CODE TO BEAT!
# Please do not change this!!!
def hamster_squares(x):
lst = []
for number in range(1, x+1):
square = math.sqrt(number)
if square.is_integer():
if number % 2 != 0:
lst.append(number)
return lst
################## THE CONTROL PANEL ################################
#####################################################################
verbose = True # set to False if you don't want the time on a line-by-line basis.
X = 5000000
# Lower X if tests are taking too long on your machine.
# Raise X if you want higher accuracy.
#####################################################################
teacher = hamster_squares(10000)
student = my_squares(10000)
correct = None
# TEST 1: CORRECTNESS
if teacher == student:
print("CORRECTNESS TEST = PASSED")
correct = True
else:
print("CORRECTNESS TEST = FAILED", "NOW TRYING TO DEBUG...", sep="\n")
# here is a bit of code to help you find the problem(s)!
# returning a list?
if not isinstance(student, list):
print("... Try returning a list next time, not a bloody {} !".format(type(student)))
# too many/too few items?
elif len(teacher) != len(student):
print(".... Your list has {} items, it should have {} items".format(len(student), len(teacher)))
# small numbers correct?
elif student[:10] != teacher[:10]:
print("... Start of list incorrect.\nYOURS: {}\nEXPECTED: {}".format(student[:10], teacher[:10]))
# testing for same items. Note that this test DOES NOT take order into consideration.
else:
ts = set(teacher)
st = set(student)
diff = ts.symmetric_difference(st)
if diff:
print("... The lists contain different numbers, these are... \n {}".format(diff))
# SPEED TESTS ... (just ignore this code)
if correct:
print("...Now testing speed. Please, note, this may take a while...\n",
          "Also, I'd advise a margin of error of about +- 0.2 seconds\n")
def string(i, func, detail):
i = i.split("\n")
s= "✿ Stats for {} function... \n{}".format(func, i[2])
if detail:
s = s + "\n" + "\n".join(i[3:-7]) + "\n"
return s
    print("-------- Solution Comparison, where input size is {}. -------- \n".format(X))
hs = profile(hamster_squares, X)
print(string(hs, "Teacher's Squares", verbose))
ss = profile(my_squares, X)
print(string(ss, "'YOUR'", verbose))
"""
Explanation: Homework assignment
In this week's (optional) homework, your task is to try and write a bit of code that is faster than mine. There are two basic ways to do it: you can get your hands dirty and try some low-level optimisation, or you can ditch all that in favour of a high-level approach.
Unlike most of the homeworks, this is more about being clever than it is about understanding Python.
The Challenge: BEAT MY TIME!!
The below code will create a list of all ODD square numbers starting at 1 and ending at x. Example:
If x is 100, the squares are:
[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
Of which, we only want the odd numbers:
[1, 9, 25, 49, 81]
A few hints...
Remember that "a and b" can be slower than "b and a" (see logic lecture). Basically, the order in which you do things can make a difference.
Finding a needle in a haystack is probably slower than [BLANK] ?
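The first hint can be seen directly with a small timing sketch. Note this is purely illustrative (the filter functions are my own toy examples, not the homework answer):

```python
import timeit

def cheap(n):
    return n % 2 == 1                  # odd test: very fast

def costly(n):
    return (n ** 0.5).is_integer()     # square test: comparatively slow

# Same filter, two orderings. With the cheap test first, `and`
# short-circuits on every even number, so costly() runs half as often.
costly_first = timeit.timeit(lambda: [n for n in range(10_000) if costly(n) and cheap(n)], number=5)
cheap_first = timeit.timeit(lambda: [n for n in range(10_000) if cheap(n) and costly(n)], number=5)
print(f"costly first: {costly_first:.4f}s, cheap first: {cheap_first:.4f}s")
```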
Please study the code below. Your job is either to make it faster by tinkering with it, or, alternatively, you may wish to use your own algorithm.
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/agents/tutorials/8_networks_tutorial.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TF-Agents Authors.
End of explanation
"""
!pip install tf-agents
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import tensorflow as tf
import numpy as np
from tf_agents.environments import random_py_environment
from tf_agents.environments import tf_py_environment
from tf_agents.networks import encoding_network
from tf_agents.networks import network
from tf_agents.networks import utils
from tf_agents.specs import array_spec
from tf_agents.utils import common as common_utils
from tf_agents.utils import nest_utils
"""
Explanation: Networks
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/agents/tutorials/8_networks_tutorial"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/agents/tutorials/8_networks_tutorial.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/agents/tutorials/8_networks_tutorial.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/agents/tutorials/8_networks_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Introduction
In this colab we cover how to define custom networks for your agents. Networks help us define the model that is trained by agents. In TF-Agents there are several types of networks which are useful across agents:
Main Networks
QNetwork: Used in Q-learning for environments with discrete actions, this network maps an observation to value estimates for each possible action.
CriticNetworks: Also referred to as ValueNetworks in the literature, these learn to estimate some version of a Value function, mapping a state into an estimate of the expected return of a policy. These networks estimate how good the state the agent is currently in is.
ActorNetworks: Learn a mapping from observations to actions. These networks are usually used by policies to generate actions.
ActorDistributionNetworks: Similar to ActorNetworks, but these generate a distribution which a policy can then sample to generate actions.
Helper Networks
EncodingNetwork: Allows users to easily define a mapping of pre-processing layers to apply to a network's input.
DynamicUnrollLayer: Automatically resets the network's state on episode boundaries as it is applied over a time sequence.
ProjectionNetwork: Networks like CategoricalProjectionNetwork or NormalProjectionNetwork take inputs and generate the parameters required to produce Categorical or Normal distributions.
All examples in TF-Agents come with pre-configured networks. However, these networks are not set up to handle complex observations.
If you have an environment which exposes more than one observation/action and you need to customize your networks, then this tutorial is for you!
Setup
If you haven't installed tf-agents yet, run:
End of explanation
"""
class ActorNetwork(network.Network):
def __init__(self,
observation_spec,
action_spec,
preprocessing_layers=None,
preprocessing_combiner=None,
conv_layer_params=None,
fc_layer_params=(75, 40),
dropout_layer_params=None,
activation_fn=tf.keras.activations.relu,
enable_last_layer_zero_initializer=False,
name='ActorNetwork'):
super(ActorNetwork, self).__init__(
input_tensor_spec=observation_spec, state_spec=(), name=name)
# For simplicity we will only support a single action float output.
self._action_spec = action_spec
flat_action_spec = tf.nest.flatten(action_spec)
if len(flat_action_spec) > 1:
raise ValueError('Only a single action is supported by this network')
self._single_action_spec = flat_action_spec[0]
if self._single_action_spec.dtype not in [tf.float32, tf.float64]:
raise ValueError('Only float actions are supported by this network.')
kernel_initializer = tf.keras.initializers.VarianceScaling(
scale=1. / 3., mode='fan_in', distribution='uniform')
self._encoder = encoding_network.EncodingNetwork(
observation_spec,
preprocessing_layers=preprocessing_layers,
preprocessing_combiner=preprocessing_combiner,
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params,
dropout_layer_params=dropout_layer_params,
activation_fn=activation_fn,
kernel_initializer=kernel_initializer,
batch_squash=False)
initializer = tf.keras.initializers.RandomUniform(
minval=-0.003, maxval=0.003)
self._action_projection_layer = tf.keras.layers.Dense(
flat_action_spec[0].shape.num_elements(),
activation=tf.keras.activations.tanh,
kernel_initializer=initializer,
name='action')
def call(self, observations, step_type=(), network_state=()):
outer_rank = nest_utils.get_outer_rank(observations, self.input_tensor_spec)
# We use batch_squash here in case the observations have a time sequence
# component.
batch_squash = utils.BatchSquash(outer_rank)
observations = tf.nest.map_structure(batch_squash.flatten, observations)
state, network_state = self._encoder(
observations, step_type=step_type, network_state=network_state)
actions = self._action_projection_layer(state)
actions = common_utils.scale_to_spec(actions, self._single_action_spec)
actions = batch_squash.unflatten(actions)
return tf.nest.pack_sequence_as(self._action_spec, [actions]), network_state
"""
Explanation: Defining Networks
Network API
In TF-Agents we subclass from Keras Networks. With it we can:
Simplify the copy operations required when creating target networks.
Perform automatic variable creation when calling network.variables().
Validate inputs based on the network's input_specs.
EncodingNetwork
As mentioned above, an EncodingNetwork lets us easily define a mapping of preprocessing layers to apply to a network's input to generate some encoding.
An EncodingNetwork is composed of the following, mostly optional, layers:
Preprocessing layers
Preprocessing combiner
Conv2D
Flatten
Dense
The special thing about encoding networks is that input preprocessing is applied. Input preprocessing is possible via the preprocessing_layers and preprocessing_combiner layers. Each of these can be specified as a nested structure. If the preprocessing_layers nest is shallower than input_tensor_spec, then the layers will get the subnests. For example, if:
input_tensor_spec = ([TensorSpec(3)] * 2, [TensorSpec(3)] * 5)
preprocessing_layers = (Layer1(), Layer2())
then preprocessing will call:
preprocessed = [preprocessing_layers[0](observations[0]),
                preprocessing_layers[1](observations[1])]
However, if:
preprocessing_layers = ([Layer1() for _ in range(2)],
                        [Layer2() for _ in range(5)])
then preprocessing will call:
preprocessed = [
    layer(obs) for layer, obs in zip(flatten(preprocessing_layers),
                                     flatten(observations))
]
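The pairing logic above can be sketched in plain Python, with a small list flattener standing in for tf.nest.flatten (the layer stand-ins and names here are illustrative, not part of the TF-Agents API):

```python
# Minimal sketch of the preprocessing dispatch: after flattening both nests,
# each layer is matched with the observation at the same position.
def flatten(nest):
    # Flatten a nest of lists/tuples into a flat list of leaves.
    if isinstance(nest, (list, tuple)):
        return [leaf for item in nest for leaf in flatten(item)]
    return [nest]

# Stand-in "layers": simple callables that tag the observation they receive.
layer1 = lambda obs: ("layer1", obs)
layer2 = lambda obs: ("layer2", obs)

preprocessing_layers = ([layer1, layer1], [layer2, layer2, layer2])
observations = (["a", "b"], ["c", "d", "e"])

preprocessed = [
    layer(obs) for layer, obs in zip(flatten(preprocessing_layers),
                                     flatten(observations))
]
print(preprocessed[0])  # ('layer1', 'a')
```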
Custom Networks
To create your own networks you only have to override the __init__ and call methods. Using what we learned about EncodingNetworks, let's create a custom ActorNetwork that takes observations containing an image and a vector.
End of explanation
"""
action_spec = array_spec.BoundedArraySpec((3,), np.float32, minimum=0, maximum=10)
observation_spec = {
'image': array_spec.BoundedArraySpec((16, 16, 3), np.float32, minimum=0,
maximum=255),
'vector': array_spec.BoundedArraySpec((5,), np.float32, minimum=-100,
maximum=100)}
random_env = random_py_environment.RandomPyEnvironment(observation_spec, action_spec=action_spec)
# Convert the environment to a TFEnv to generate tensors.
tf_env = tf_py_environment.TFPyEnvironment(random_env)
"""
Explanation: Let's create a RandomPyEnvironment to generate structured observations and validate our implementation.
End of explanation
"""
preprocessing_layers = {
'image': tf.keras.models.Sequential([tf.keras.layers.Conv2D(8, 4),
tf.keras.layers.Flatten()]),
'vector': tf.keras.layers.Dense(5)
}
preprocessing_combiner = tf.keras.layers.Concatenate(axis=-1)
actor = ActorNetwork(tf_env.observation_spec(),
tf_env.action_spec(),
preprocessing_layers=preprocessing_layers,
preprocessing_combiner=preprocessing_combiner)
"""
Explanation: Since we defined the observations as a dict, we need to create preprocessing layers to handle them.
End of explanation
"""
time_step = tf_env.reset()
actor(time_step.observation, time_step.step_type)
"""
Explanation: Now that we have the actor network, we can process observations from the environment.
End of explanation
"""
|
paninski-lab/yass | examples/evaluate/christmas-plots.ipynb | apache-2.0 | plot = ChristmasPlot('Fake', n_dataset=3, methods=['yass', 'kilosort', 'spyking circus'], logit_y=True, eval_type="Accuracy")
for method in plot.methods:
for i in range(plot.n_dataset):
x = (np.random.rand(30) - 0.5) * 10
y = 1 / (1 + np.exp(-x + np.random.rand()))
plot.add_metric(x, y, dataset_number=i, method_name=method)
"""
Explanation: Create Some Fake Entries That Demonstrate Plotting
In the constructor, give a title, the total number of datasets you want to plot side by side, and a list of methods for which you are plotting results. logit_y logit-transforms the y axis to emphasize the low and high ends of the metric. eval_type is for naming purposes only and will appear in the y-axis titles.
In the following block we create fake SNR values and metrics just for demonstration purposes.
If you want to compute the SNR of templates (an np.ndarray of shape (# time samples, # channels, # units)), just call main_channels(templates).
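As a rough standalone sketch of such a computation (this is not the repo's main_channels implementation; the noise_std argument and the peak-to-peak definition are assumptions for illustration):

```python
import numpy as np

def template_snr(templates, noise_std=1.0):
    # templates: array of shape (# time samples, # channels, # units).
    # Peak-to-peak amplitude of each unit on each channel.
    ptp = templates.max(axis=0) - templates.min(axis=0)  # (# channels, # units)
    # Per-unit SNR: amplitude on the strongest (main) channel over the noise level.
    return ptp.max(axis=0) / noise_std

# Tiny example: 4 time samples, 3 channels, 2 units.
templates = np.zeros((4, 3, 2))
templates[1, 0, 0] = 5.0   # unit 0 peaks on channel 0
templates[2, 2, 1] = -3.0  # unit 1 peaks on channel 2
print(template_snr(templates))  # [5. 3.]
```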
End of explanation
"""
plot.generate_snr_metric_plot(save_to=None)
"""
Explanation: Generate SNR vs Metric
Change save_to to a file path to save it to file.
End of explanation
"""
plot.generate_curve_plots(save_to=None)
"""
Explanation: Generate the curve plots
Similar to the other plot, pass a file path in save_to to save it to file.
End of explanation
"""
|
google/eng-edu | ml/cc/prework/fr/intro_to_pandas.ipynb | apache-2.0 | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2017 Google LLC.
End of explanation
"""
from __future__ import print_function
import pandas as pd
pd.__version__
"""
Explanation: # Quick Introduction to Pandas
Learning objectives:
* Get an introduction to the DataFrame and Series data structures of the Pandas library
* Access and manipulate data within a DataFrame and a Series
* Import data from a CSV file into a Pandas DataFrame
* Reindex a DataFrame to shuffle data
Pandas is a column-oriented data analysis API. It is a great tool for handling and analyzing input data, and many machine learning frameworks accept Pandas data structures as input.
A comprehensive introduction to the Pandas API would span many pages, but the core concepts are fairly simple, so we present them below. For a more complete description, see the Pandas documentation site, which contains extensive documentation and many tutorials.
## Basic concepts
The following line imports the pandas API and prints its version:
End of explanation
"""
pd.Series(['San Francisco', 'San Jose', 'Sacramento'])
"""
Explanation: There are two main data structures in Pandas:
The DataFrame, a relational data table, with labeled rows and columns
The Series, a single column. A DataFrame contains one or more Series, each with a label.
The DataFrame is a commonly used abstraction for data manipulation. Similar implementations exist in Spark and R.
One way to create a Series is to construct a Series object. For example:
End of explanation
"""
city_names = pd.Series(['San Francisco', 'San Jose', 'Sacramento'])
population = pd.Series([852469, 1015785, 485199])
pd.DataFrame({ 'City name': city_names, 'Population': population })
"""
Explanation: DataFrame objects can be created by passing a dict mapping column names (strings) to their respective Series. If the Series don't match in length, missing values are filled with special NA/NaN values. For example:
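A minimal sketch of the mismatched-length case (the column names here are illustrative):

```python
import pandas as pd

# Two Series of different lengths: missing cells are filled with NaN.
a = pd.Series([1, 2])
b = pd.Series([10, 20, 30])
df = pd.DataFrame({'a': a, 'b': b})
print(df['a'].isna().tolist())  # [False, False, True]
```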
End of explanation
"""
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe.describe()
"""
Explanation: But most of the time, you load an entire file into a DataFrame. The following example loads a file with California housing data. Run the following cell to load the data and define the features:
End of explanation
"""
california_housing_dataframe.head()
"""
Explanation: The example above used the DataFrame.describe method to show interesting statistics about a DataFrame. Another useful function is DataFrame.head, which displays the first few records of a DataFrame:
End of explanation
"""
california_housing_dataframe.hist('housing_median_age')
"""
Explanation: Another powerful feature of Pandas is graphing. For example, DataFrame.hist lets you quickly check how the values of a column are distributed:
End of explanation
"""
cities = pd.DataFrame({ 'City name': city_names, 'Population': population })
print(type(cities['City name']))
cities['City name']
print(type(cities['City name'][1]))
cities['City name'][1]
print(type(cities[0:2]))
cities[0:2]
"""
Explanation: ## Accessing Data
You can access DataFrame data using familiar Python dict/list operations:
End of explanation
"""
population / 1000.
"""
Explanation: In addition, Pandas provides an extremely rich API for advanced indexing and selection that we unfortunately cannot cover in detail here.
## Manipulating Data
You may apply Python's basic arithmetic operations to Series. For example:
End of explanation
"""
import numpy as np
np.log(population)
"""
Explanation: NumPy is a popular toolkit for scientific computing. Pandas Series can be used as arguments to most NumPy functions:
End of explanation
"""
population.apply(lambda val: val > 1000000)
"""
Explanation: For more complex single-column transformations, you can use Series.apply. Like the Python map function, it accepts as an argument a lambda function, which is applied to each value.
The example below creates a Series that indicates whether the population exceeds one million:
End of explanation
"""
cities['Area square miles'] = pd.Series([46.87, 176.53, 97.92])
cities['Population density'] = cities['Population'] / cities['Area square miles']
cities
"""
Explanation: Modifying DataFrames is also straightforward. For example, the following code adds two Series to an existing DataFrame:
End of explanation
"""
# Your code here
"""
Explanation: ## Exercise #1
Modify the cities table by adding a boolean column that is True if and only if both of the following conditions are met:
The city is named after a saint.
The city has an area greater than 50 square miles.
Note: To combine boolean Series, use the bitwise operators, not the traditional boolean operators. For example, for logical AND, use & instead of and.
Hint: "San" in Spanish means "saint".
End of explanation
"""
cities['Is wide and has saint name'] = (cities['Area square miles'] > 50) & cities['City name'].apply(lambda name: name.startswith('San'))
cities
"""
Explanation: ### Solution
Click below for a solution.
End of explanation
"""
city_names.index
cities.index
"""
Explanation: ## Indexes
Both Series and DataFrame objects also define an index property that assigns an identifier to each Series item or each DataFrame row.
By default, Pandas assigns index values that reflect the ordering of the source data. Once created, these index values are stable; they do not change when the data is reordered.
End of explanation
"""
cities.reindex([2, 0, 1])
"""
Explanation: Call DataFrame.reindex to manually reorder the rows. For example, the following has the same effect as sorting by city name:
End of explanation
"""
cities.reindex(np.random.permutation(cities.index))
"""
Explanation: Reindexing is a great way to shuffle (randomly reorder) the rows of a DataFrame. In the example below, the array-like index is passed to NumPy's random.permutation function, which shuffles its values. Calling reindex with this shuffled array causes the DataFrame rows to be shuffled the same way.
Try running the following cell multiple times!
End of explanation
"""
# Your code here
"""
Explanation: For more information, see the Index documentation.
## Exercise #2
The reindex method allows index values that are not in the original DataFrame's index. Try it and see what happens when you use such values! Why do you think this is allowed?
End of explanation
"""
cities.reindex([0, 4, 5, 2])
"""
Explanation: ### Solution
Click below for a solution.
If your reindex input array includes values not found in the original DataFrame's index, reindex adds rows for these "missing" indices and fills the corresponding columns with NaN values:
End of explanation
"""
|
Daniel-M/IntroPythonBiologos | doc/notes/IntroPythonBiologos.ipynb | gpl-3.0 | print("Hola mundo!")
print("1+1=",2)
print("Hola, otra vez","1+1=",2)
print("Hola, otra vez.","Sabias que 1+1 =",2,"?")
numero=3
print(numero)
numero=3.1415
print(numero)
"""
Explanation: Introduction to Python for the Biological Sciences
Biophysics Course - Universidad de Antioquia
Daniel Mejía Raigosa (email: danielmejia55@gmail.com)
Biophysics Group
Universidad de Antioquia
Date: April 27, 2016
About
Working materials for the short course on Python for the biological sciences given at Universidad
de Antioquia on Wednesday, April 27, 2016.
Document revision: Version 1.3.0
Contents
Motivation: programming in the Biological Sciences?
Installing Python Anaconda
The IPython console
The Jupyter notebook
Elements of Python
Numpy
Matplotlib
Motivation: programming in the Biological Sciences?
<div id="ch:motivacion"></div>
Biological systems exhibit great complexity. The development of new experimental techniques and technologies
has made increasingly massive experimental data available, requiring sophisticated tools
that can handle that amount of information with ease. The best way to show why a biologist or
biological sciences professional should learn to program as a tool is through real application cases.
Among the success stories we have,
Omics: genomics, proteomics, metabolomics, sequencing...
Data mining of databases (Proteins, Georeferencing, ...).
Population dynamics (Ecosystems?).
Large-scale statistical analysis.
Data analysis with artificial intelligence (Neural Networks, Adaptive and Evolutionary Algorithms, ...).
Simulation in general (Too many success stories to list...).
<!-- [Why biology students should learn to program](http://www.wired.com/2009/03/why-biology-students-should-learn-how-to-program/) -->
What is Python?
<div id="ch:motivacion:python"></div>
Python is an interpreted programming language that has become very popular thanks to its simplicity and
relative ease of learning. It is a scripting language: the source code is executed line by line
and does not need to be compiled to produce applications.
The difference between a compiled and an interpreted programming language is that in the first case the source code must be
compiled to produce an executable file. An interpreted language requires an application
known as the interpreter, which executes the instructions described in the source code or script.
<!-- ======= ======= -->
<!-- <div id="ch:motivacion:"></div> -->
Installing Python Anaconda
<div id="ch:instalacion"></div>
Throughout this course we will work with a Python distribution known as Anaconda.
Anaconda is a Python bundle that includes several packages frequently used in science and data analysis, among
them Matplotlib and Numpy, which are of interest for this course and whose standalone installation is rather tedious.
Anaconda download link for Windows
Approximate size: 350 MB.
Approximate installation time: 15 minutes.
The Visual Python module (VPython)
The python-visual module is useful for drawing 3D objects and animations.
The VisualPython (VPython) module runs under Python 2.7, the standard version preceding Python 3.x.
To install vpython with Anaconda 3.5, follow these steps
Create an environment with Python 2.7 in Anaconda, using the command
conda create --name oldpython python=2.7
the name oldpython can be replaced by any other. Next, the new environment must be activated
activate oldpython
Now vpython can be installed from the Anaconda console with the command,
conda install -c https://conda.binstar.org/mwcraig vpython
Optionally, to use vpython with IPython you need to install a version of ipython compatible with python 2.7.
This is done with the command
conda install ipython
as long as the oldpython environment is active.
More information and other download links
To use vpython with the newest versions of python, it is recommended to include the following line at the beginning of the source
code,
from __future__ import print_function, division
The ATOM editor (Optional)
Any text editor can be used to write python source code; the only requirement is to save the source file with the .py extension.
An editor that may come in handy thanks to its syntax highlighting is ATOM, which can be downloaded for windows from
this link https://github.com/atom/atom/releases/download/v1.7.2/atom-windows.zip
Approximate size: 100 MB.
The IPython console
<div id="ch:ipython"></div>
<!-- dom:FIGURE: [figures/ipython-console.png, width=640 frac=0.8] IPython console on Linux <div id="ch:ipython:fig:consola_ejemplo"></div> -->
<!-- begin figure -->
<div id="ch:ipython:fig:consola_ejemplo"></div>
<p>IPython console on Linux</p>
<img src="figures/ipython-console.png" width=640>
<!-- end figure -->
To start an IPython console, look for the Anaconda installation folder in the Windows start menu
and open the corresponding IPython shortcut.
When the IPython console opens, we should see the following message (just as in the figure above),
This console, or prompt, lets us enter lines of python code that are executed right after being entered.
End of explanation
"""
palabra="hola"
print(palabra)
"""
Explanation: Let's try creating a variable initialized to a word
This produces the following output,
```Python
palabra=hola
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-12-fda39d79c7ea> in <module>()
----> 1 palabra=hola
NameError: name 'hola' is not defined
```
What happened is that the expression palabra=hola corresponds to assigning the contents of a variable named
hola to one named palabra. The variable hola does not exist (it was not defined previously), so the error was raised.
To assign the content of a word or phrase, double quotes must be used:
End of explanation
"""
%history
"""
Explanation: The history of entered commands
With the command,
End of explanation
"""
%history -f archivo_destino.py
"""
Explanation: it is possible to display the list of commands entered during an IPython work session. It is also possible to
create a file storing the session's commands by adding the option -f nombre_archivo_destino.py.
For example, the command
End of explanation
"""
Cualquier_Cosa="Almaceno contenido"
print(Cualquier_Cosa)
"""
Explanation: creates the file archivo_destino.py in the current working directory. The contents of that file are the commands entered during the work session.
The Jupyter Notebook
<div id="ch:jupyternb"></div>
Jupyter is the evolution of what used to be known as IPython notebooks: an interface in a web browser that interacts with an
IPython computation kernel, which executes the commands entered in the notebook.
Jupyter currently supports several computation kernels, among them Python, R, Perl, and others.
<!-- dom:FIGURE: [figures/jupyter-console.png, width=640 frac=0.8] Jupyter notebook <div id="ch:jupyternb:fig:notebook_ejemplo"></div> -->
<!-- begin figure -->
<div id="ch:jupyternb:fig:notebook_ejemplo"></div>
<p>Jupyter notebook</p>
<img src="figures/jupyter-console.png" width=640>
<!-- end figure -->
Starting the notebook
To start a Jupyter notebook in Anaconda, run the shortcut named Jupyter
in the start menu, under the Anaconda directory
This will start the kernel set as default by the Anaconda installation we chose,
and a window of the system's web browser will open, presenting the notebook much as we
see in the figure above
Elements of Python
<div id="ch:elementospython"></div>
Syntax
Variables and data types
In programming there is the concept of a variable. A variable can be seen as a label that we give to a region
of memory where we store information. The usefulness of having labeled memory regions is that we can refer
to them anywhere in our programs.
End of explanation
"""
verdadera = True
falsa = False
print(verdadera)
print(falsa)
"""
Explanation: Variables can be classified according to the following data types,
Booleans: store truth values, such as true, false, 1, 0.
Integers: store integer numeric values, such as 2, -2, 3, 1000.
Floating point: store floating-point numeric values, that is, numbers with decimals or in scientific notation.
Strings: store character-string content, such as the sentence "Hola Mundo!"
Boolean variables bool
They store the truth values true or false, which in Python are written True and False
End of explanation
"""
numero=19881129
print(numero)
numeronegativo=-19881129
print(numero)
"""
Explanation: Integers int
They store integer numeric values, both positive and negative,
End of explanation
"""
numero=3.19881129
print(numero)
numeronegativo=-3.19881129
print(numero)
"""
Explanation: Floating point float
They store floating-point numeric values,
End of explanation
"""
palabra="Alicia"
frase="¿En qué se parecen un cuervo y un escritorio?"
print(palabra)
print(frase)
"""
Explanation: Strings string
They store text or character content,
End of explanation
"""
lista = [2,3.5,True, "perro feliz"] # this list holds different data types
"""
Explanation: Lists list
Lists are a special kind of variable that can store a sequence of several objects. For example, the list
End of explanation
"""
print(lista[0])
print(lista[1])
print(lista[2])
print(lista[3])
"""
Explanation: Lists give access to each element through a system of indices running from 0 to N-1, where N
is the size of the list, that is, the number of elements it stores
End of explanation
"""
print(lista)
"""
Explanation: The contents of a list can also be shown using print
End of explanation
"""
lista=[]
print(lista)
lista.append(1)
lista.append(":D")
lista.append("0211")
print(lista)
lista.remove("0211")
print(lista)
barranquero={"Reino":"Animalia","Filo":"Chordata","Clase":"Aves","Orden":"Coraciiformes","Familia":"Momotidae","Género":"Momotus"}
velociraptor={"Reino":"Animalia","Filo":"Chordata","Clase":"Sauropsida","Orden":"Saurischia","Familia":"Dromaeosauridae","Género":"Velociraptor"}
print(barranquero)
Meses={"Enero":1,"Febrero":2}
Meses["Enero"]
Meses={1:"Enero",2:"Febrero"}
Meses[1]
"""
Explanation: Some list methods (the cell below also shows Python dictionaries, which map keys to values)
End of explanation
"""
print("Hola")
"""
Explanation: Interacting with the user
Showing messages on screen
To display messages on screen, the print() command is used, placing the content to be shown
inside the ().
For example
End of explanation
"""
print(5.5)
"""
Explanation: Inside the parentheses we have the string "Hola".
End of explanation
"""
print("Voy a tener varios argumentos","separados","por",1,"coma","entre ellos")
"""
Explanation: Several arguments can be passed to the print function,
End of explanation
"""
print("Esto es un \"texto\" que se divide por\nUna línea nueva")
print("También puedo tener texto\tseparado por un tabulador")
print("Las secuencias de escape\n\t pueden ser combinadas\ncuantas veces se quiera")
"""
Explanation: Escape sequences in text
Escape sequences are a set of special characters that carry text-formatting information:
\n means a new line.
\t means a tab space.
\' lets you place single quotes inside the text.
\" lets you place double quotes inside the text.
An example best illustrates their use,
End of explanation
"""
nombre=input("Que quieres saber?: ")
print(barranquero[nombre])
"""
Explanation: Reading input from the keyboard
Information typed at the keyboard can be read with the input command; the result must be stored in a variable.
A message can be passed inside the parentheses (optional).
End of explanation
"""
print("Hola, cual es tu nombre?")
nombre=input() # in Python 2, input() evaluated what you typed, so text had to be quoted
print("Tu nombre es",nombre)
"""
Explanation: It is equivalent to having,
End of explanation
"""
print("Tabla de verdad \'and\'")
A = True
B = True
print(A,"and",B,"=",A and B)
A = True
B = False
print(A,"and",B,"=",A and B)
A = False
B = True
print(A,"and",B,"=",A and B)
A = False
B = False
print(A,"and",B,"=",A and B)
"""
Explanation: Control structures
Control structures determine a program's behavior based on decisions. The decision criteria
are usually Boolean logic conditions, that is, True or False.
Logical operations and, or
These correspond to the logical operations of conjunction and disjunction. Their truth tables are the following,
Conjunction and
<table border="1">
<thead>
<tr><th align="center">A</th> <th align="center">B</th> <th align="center">A <code>and</code> B</th> </tr>
</thead>
<tbody>
<tr><td align="center"> V </td> <td align="center"> V </td> <td align="center"> V </td> </tr>
<tr><td align="center"> V </td> <td align="center"> F </td> <td align="center"> F </td> </tr>
<tr><td align="center"> F </td> <td align="center"> V </td> <td align="center"> F </td> </tr>
<tr><td align="center"> F </td> <td align="center"> F </td> <td align="center"> F </td> </tr>
</tbody>
</table>
End of explanation
"""
print("Tabla de verdad \'or\'")
A = True
B = True
print(A,"or",B,"=",A or B)
A = True
B = False
print(A,"or",B,"=",A or B)
A = False
B = True
print(A,"or",B,"=",A or B)
A = False
B = False
print(A,"or",B,"=",A or B)
"""
Explanation: Disjunction or
<table border="1">
<thead>
<tr><th align="center">A</th> <th align="center">B</th> <th align="center">A <code>or</code> B</th> </tr>
</thead>
<tbody>
<tr><td align="center"> V </td> <td align="center"> V </td> <td align="center"> V </td> </tr>
<tr><td align="center"> V </td> <td align="center"> F </td> <td align="center"> V </td> </tr>
<tr><td align="center"> F </td> <td align="center"> V </td> <td align="center"> V </td> </tr>
<tr><td align="center"> F </td> <td align="center"> F </td> <td align="center"> F </td> </tr>
</tbody>
</table>
End of explanation
"""
A=5
print(5>2)
print(5<2)
print(5==10)
print(5==A)
print(5>=2)
print(5<=2)
print(5<=A)
"""
Explanation: Logical comparison operators <=, >=, <, >, and ==
Comparison operators return the truth values corresponding to the result of the comparison
End of explanation
"""
A=2
if( 5 > A ):
print("5 es mayor que",A)
"""
Explanation: The if statement
The if statement lets us evaluate boolean conditions and control the flow of a program,
End of explanation
"""
A=2
if( 5 > A ):
print("5 es mayor que",A)
print("Hey")
else:
print("5 es menor que",A)
"""
Explanation: The statement also allows an else instruction that handles the case excluded by the if
End of explanation
"""
A=5
if( 5 > A ):
print("5 es mayor que",A)
elif( 5 == A):
print("5 es igual",A)
else:
print("5 es menor que",A)
peticion=input("Que desea saber? ")
if(peticion=="Familia"):
print(barranquero["Familia"])
elif(peticion=="Orden"):
print(barranquero["Orden"])
else:
print("No entiendo tu petición")
"""
Explanation: Further conditions can be added by chaining if statements with the elif (from else if) instruction.
For example, consider what happens for different values of the variable A
End of explanation
"""
for x in range(1,6):
print("Este mensaje aparece por",x,"vez")
"""
Explanation: Loops
A loop is a portion of code that is executed repeatedly, over and over, a given number of times.
The loop stops when a stopping condition is satisfied
The for loop
This loop is very useful when you know in advance how many times a given action needs to be
repeated inside the program. The loop can be driven by the different sequences seen earlier
End of explanation
"""
for x in range(1,20,2):
print(x)
for x in range(1,30,3):
print(x**2)
"""
Explanation: A "step" can also be included in the loop
End of explanation
"""
lista=[1,2,3,4,5,6]
animales=["perro","gato","elefante"]
for x in animales:
print(x)
"""
Explanation: It is also possible to iterate over the elements of a list,
End of explanation
"""
frase="Alice\nWhy is a raven like a writing desk?"
for i in frase:
print(i)
frase[0:6]
"""
Explanation: Or over the elements of a character string,
End of explanation
"""
# This is known as importing a module
# Here we import the math module, which contains the sine and cosine operations
# that we will use to compute the vertical and horizontal components of the
# initial velocity
import math
t=0.0
dt=0.2
x=0.0
y=0.0
# Horizontal and vertical components of the initial velocity
vx=20*math.cos(math.radians(60))
vy=20*math.sin(math.radians(60))
while y>=0.0:
    print(t,"\t",x,"\t",y)
    t=t+dt
    x=vx*t
    y=vy*t-(9.8/2)*t**2
"""
Explanation: Note that in this case i momentarily holds the i-th character of the string frase.
The while loop
The while loop is used to execute a set of instructions repeatedly until a
condition that we impose is met.
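A minimal while loop with an explicit stopping condition (the numbers are arbitrary):

```python
# Add integers 1, 2, 3, ... until the running total exceeds 20
total = 0
n = 0
while total <= 20:
    n = n + 1
    total = total + n
print(n, total)  # 6 21
```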
Let's see how to generate the points describing the trajectory of projectile motion with an initial speed
of 20 m/s and a launch angle of 60 degrees.
We will use the equations of motion,
$$x = x_{0} + v_{0}\cos\theta\,t$$
$$y = y_{0} + v_{0}\sin\theta\, t - \frac{1}{2}g\,t^{2}$$
End of explanation
"""
def producto(argumento1,argumento2):
    return argumento1*argumento2
print(producto(2,3))
def busqueda(diccionario,peticion):
    if peticion=="Familia":
        print(diccionario["Familia"])
    elif peticion=="Orden":
        print(diccionario["Orden"])
    else:
        print("I don't understand your request")
peticion=input("What do you want to know? ")
busqueda(barranquero,peticion)
peticion=input("What do you want to know? ")
busqueda(velociraptor,peticion)
animales=[barranquero,velociraptor]
for animal in animales:
    print(animal["Orden"])
for animal in animales:
    busqueda(animal,"Orden") #function defined previously
"""
Explanation: Functions
In programming, it is common for some tasks to be performed recurrently. Functions help reduce the complexity
of the code and organize it better, and they also make debugging easier. Just like functions in mathematics, functions in
programming can take arguments, which are processed in order to return some result. On the other hand, functions can also
exist without requiring any arguments, executing a set of instructions without having to return any result.
Declaring a function
End of explanation
"""
def mensaje():
    print("Hello class")
mensaje()
def informacion(diccionario):
    for i in diccionario.keys():
        print(i,"\t",diccionario[i])
informacion(velociraptor)
"""
Explanation: We can also have functions that take no arguments, or that do not return any result
End of explanation
"""
import math
print("math.fabs(-1)=",math.fabs(-1))
print("math.ceil(3.67)=",math.ceil(3.67))
print("math.floor(3.37)=",math.floor(3.37))
print("math.factorial(4)=",math.factorial(4))
print("math.exp(1)=",math.exp(1))
print("math.log(math.exp(1))=",math.log(math.exp(1)))
print("math.log(10)=",math.log(10))
print("math.log(10,10)=",math.log(10,10))
print("math.sqrt(2)=",math.sqrt(2))
print("math.degrees(3.141592)=",math.degrees(3.141592))
print("math.radians(2*math.pi)=",math.radians(2*math.pi))
print("math.cos(1)=",math.cos(1))
print("math.sin(1)=",math.sin(1))
print("math.tan(1)=",math.tan(1))
print("math.acos(1)=",math.acos(1))
print("math.asin(1)=",math.asin(1))
print("math.atan(1)=",math.atan(1))
"""
Explanation: Modules (or libraries)
Modules, or libraries, are collections of functions created by groups of programmers, which save
time when programming.
The modules of interest for this course are
math.
numpy (Numerical Python).
matplotlib (Mathematical Plot Library).
The math module
End of explanation
"""
import numpy as np
a = np.arange(15).reshape(3, 5)
print(a)
a = np.array([[ 0, 1, 2, 3, 4],[ 5, 6, 7, 8, 9],[10, 11, 12, 13, 14]])
a.shape
a.ndim
np.zeros( (3,4) )
np.ones( (3,4) )
np.arange( 10, 30, 5 )
random = np.random.random((2,3))
"""
Explanation: Numpy
<div id="ch:intro-numpy"></div>
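The core feature of numpy arrays, beyond the creation routines shown above, is that arithmetic is elementwise: an operation applies to every entry at once, without explicit loops. A small sketch:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.arange(4)           # array([0, 1, 2, 3])
print(a + b)               # elementwise sum: [1. 3. 5. 7.]
print(a * 2)               # the scalar is broadcast to every element
print(a.sum(), a.mean())   # reductions: 10.0 2.5
```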
End of explanation
"""
%matplotlib inline
import math
import matplotlib.pyplot as plt
t=0.0
dt=0.1
# Initial position
x=0.0
y=0.0
# Initial speed
vo=20
# Empty lists that will store the x, y
# points of the trajectory
puntos_x=[]
puntos_y=[]
# Horizontal and vertical components of the initial velocity
vx=vo*math.cos(math.radians(60))
vy=vo*math.sin(math.radians(60))
while y>=0.0:
    # Append the coordinates to the lists
    puntos_x.append(x)
    puntos_y.append(y)
    t=t+dt
    x=vx*t
    y=vy*t-(9.8/2)*t**2
plt.title("Projectile Motion")
plt.xlabel("Horizontal position (m)")
plt.ylabel("Height (m)")
plt.plot(puntos_x,puntos_y)
plt.show()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
t1 = np.arange(0.0, 5.0, 0.1)
t2 = np.arange(0.0, 5.0, 0.02)
plt.figure(1)
plt.subplot(211)
plt.plot(t1, f(t1), 'bo', t2, f(t2), 'k')
plt.subplot(212)
plt.plot(t2, np.cos(2*np.pi*t2), 'r--')
plt.show()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
mu, sigma = 100, 15
x = mu + sigma * np.random.randn(10000)
# the histogram of the data
n, bins, patches = plt.hist(x, 50, density=True, facecolor='g', alpha=0.75)
plt.xlabel('Smarts')
plt.ylabel('Probability')
plt.title('Histogram of IQ')
plt.text(60, .025, r'$\mu=100,\ \sigma=15$')
plt.axis([40, 160, 0, 0.03])
plt.grid(True)
plt.show()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# make up some data in the interval ]0, 1[
y = np.random.normal(loc=0.5, scale=0.4, size=1000)
y = y[(y > 0) & (y < 1)]
y.sort()
x = np.arange(len(y))
# plot with various axes scales
plt.figure(1)
# linear
plt.subplot(221)
plt.plot(x, y)
plt.yscale('linear')
plt.title('linear')
plt.grid(True)
# log
plt.subplot(222)
plt.plot(x, y)
plt.yscale('log')
plt.title('log')
plt.grid(True)
# symmetric log
plt.subplot(223)
plt.plot(x, y - y.mean())
plt.yscale('symlog', linthresh=0.05)
plt.title('symlog')
plt.grid(True)
# logit
plt.subplot(224)
plt.plot(x, y)
plt.yscale('logit')
plt.title('logit')
plt.grid(True)
plt.show()
"""
Explanation: Matplotlib
<div id="ch:intro-matplotlib"></div>
End of explanation
"""
lneuhaus/pyrpl | docs/old_files/tutorial.ipynb | mit

import pyrpl
print(pyrpl.__file__)
"""
Explanation: Introduction to pyrpl
1) Introduction
The RedPitaya is an affordable FPGA board with fast analog inputs and outputs. This makes it interesting also for quantum optics experiments. The software package PyRPL (Python RedPitaya Lockbox) is an implementation of many devices that are needed for optics experiments every day. The user interface and all high-level functionality is written in python, but an essential part of the software is hidden in a custom FPGA design (based on the official RedPitaya software version 0.95). While most users probably never want to touch the FPGA design, the Verilog source code is provided together with this package and may be modified to customize the software to your needs.
2) Table of contents
In this document, you will find the following sections:
1. Introduction
2. ToC
3. Installation
4. First steps
5. RedPitaya Modules
6. The Pyrpl class
7. The Graphical User Interface
If you are using Pyrpl for the first time, you should read sections 1-4. This will take about 15 minutes and should leave you able to communicate with your RedPitaya via python.
If you plan to use Pyrpl for a project that is not related to quantum optics, you probably want to go to section 5 then and omit section 6 altogether. Inversely, if you are only interested in a powerful tool for quantum optics and don't care about the details of the implementation, go to section 6. If you plan to contribute to the repository, you should definitely read section 5 to get an idea of what this software package really does, and where help is needed. Finally, Pyrpl also comes with a Graphical User Interface (GUI) to interactively control the modules described in section 5. Please, read section 7 for a quick description of the GUI.
3) Installation
Option 3: Simple clone from GitHub (developers)
If instead you plan to synchronize with github on a regular basis, you can also leave the downloaded code where it is and add the parent directory of the pyrpl folder to the PYTHONPATH environment variable as described in this thread: http://stackoverflow.com/questions/3402168/permanently-add-a-directory-to-pythonpath. For all beta-testers and developers, this is the preferred option. So the typical PYTHONPATH environment variable should look somewhat like this:
$\texttt{PYTHONPATH=C:\OTHER_MODULE;C:\GITHUB\PYRPL}$
If you are experiencing problems with the dependencies on other python packages, executing the following command in the pyrpl directory might help:
$\texttt{python setup.py install develop}$
If at a later point, you have the impression that updates from github are not reflected in the program's behavior, try this:
End of explanation
"""
!pip install pyrpl #if you look at this file in ipython notebook, just execute this cell to install pyrplockbox
"""
Explanation: Should the directory not be the one of your local github installation, you might have an older version of pyrpl installed. Just delete any such directories other than your principal github clone and everything should work.
Option 2: from GitHub using setuptools (beta version)
Download the code manually from https://github.com/lneuhaus/pyrpl/archive/master.zip and unzip it or get it directly from git by typing
$\texttt{git clone https://github.com/lneuhaus/pyrpl.git YOUR_DESTINATIONFOLDER}$
In a command line shell, navigate into your new local pyrplockbox directory and execute
$\texttt{python setup.py install}$
This copies the files into the side-package directory of python. The setup should make sure that you have the python libraries paramiko (http://www.paramiko.org/installing.html) and scp (https://pypi.python.org/pypi/scp) installed. If this is not the case you will get a corresponding error message in a later step of this tutorial.
Option 1: with pip (coming soon)
If you have pip correctly installed, executing the following line in a command line should install pyrplockbox and all dependencies:
$\texttt{pip install pyrpl}$
End of explanation
"""
from pyrpl import RedPitaya
"""
Explanation: Compiling the server application (optional)
The software comes with a precompiled version of the server application (written in C) that runs on the RedPitaya. This application is uploaded automatically when you start the connection. If you made changes to this file, you can recompile it by typing
$\texttt{python setup.py compile_server}$
For this to work, you must have gcc and the cross-compiling libraries installed. Basically, if you can compile any of the official RedPitaya software written in C, then this should work, too.
If you do not have a working cross-compiler installed on your UserPC, you can also compile directly on the RedPitaya (tested with ecosystem v0.95). To do so, you must upload the directory pyrpl/monitor_server on the redpitaya, and launch the compilation with the command
$\texttt{make CROSS_COMPILE=}$
Compiling the FPGA bitfile (optional)
If you would like to modify the FPGA code or just make sure that it can be compiled, you should have a working installation of Vivado 2015.4. For windows users it is recommended to set up a virtual machine with Ubuntu on which the compiler can be run in order to avoid any compatibility problems. For the FPGA part, you only need the /fpga subdirectory of this software. Make sure it is somewhere in the file system of the machine with the vivado installation. Then type the following commands. You should adapt the path in the first and second commands to the locations of the Vivado installation / the fpga directory in your filesystem:
$\texttt{source /opt/Xilinx/Vivado/2015.4/settings64.sh}$
$\texttt{cd /home/myusername/fpga}$
$\texttt{make}$
The compilation should take between 15 and 30 minutes. The result will be the file $\texttt{fpga/red_pitaya.bin}$. To test the new FPGA design, make sure that this file in the fpga subdirectory of your pyrpl code directory. That is, if you used a virtual machine for the compilation, you must copy the file back to the original machine on which you run pyrpl.
Unitary tests (optional)
In order to make sure that any recent changes do not affect prior functionality, a large number of automated tests have been implemented. Every push to the github repository is automatically installed and tested on an empty virtual linux system. However, the testing server currently has no RedPitaya available to run tests directly on the FPGA. Therefore it is also useful to run these tests on your local machine in case you modified the code.
Currently, the tests confirm that
- all pyrpl modules can be loaded in python
- all designated registers can be read and written
- future: functionality of all major submodules against reference benchmarks
To run the test, navigate in command line into the pyrpl directory and type
$\texttt{set REDPITAYA=192.168.1.100}$ (in windows) or
$\texttt{export REDPITAYA=192.168.1.100}$ (in linux)
$\texttt{python setup.py nosetests}$
The first command tells the test at which IP address it can find a RedPitaya. The last command runs the actual test. After a few seconds, there should be some output saying that the software has passed more than 140 tests.
After you have implemented additional features, you are encouraged to add unitary tests to consolidate the changes. If you immediately validate your changes with unitary tests, this will result in a huge productivity improvement for you. You can find all test files in the folder $\texttt{pyrpl/pyrpl/test}$, and the existing examples (notably $\texttt{test_example.py}$) should give you a good point to start. As long as you add a function starting with 'test_' in one of these files, your test should automatically run along with the others. As you add more tests, you will see the number of total tests increase when you run the test launcher.
Workflow to submit code changes (for developers)
As soon as the code will have reached version 0.9.0.3 (high-level unitary tests implemented and passing, approx. end of May 2016), we will consider the master branch of the github repository as the stable pre-release version. The goal is that the master branch will guarantee functionality at all times.
Any changes to the code, if they do not pass the unitary tests or have not been tested, are to be submitted as pull-requests in order not to endanger the stability of the master branch. We will briefly describe how to properly submit your changes in that scenario.
Let's say you already changed the code of your local clone of pyrpl. Instead of directly committing the change to the master branch, you should create your own branch. In the windows application of github, when you are looking at the pyrpl repository, there is a small symbol looking like a steet bifurcation in the upper left corner, that says "Create new branch" when you hold the cursor over it. Click it and enter the name of your branch "leos development branch" or similar. The program will automatically switch to that branch. Now you can commit your changes, and then hit the "publish" or "sync" button in the upper right. That will upload your changes so everyone can see and test them.
You can continue working on your branch, add more commits and sync them with the online repository until your change is working. If the master branch has changed in the meantime, just click 'sync' to download them, and then the button "update from master" (upper left corner of the window) that will insert the most recent changes of the master branch into your branch. If the button doesn't work, that means that there are no changes available. This way you can benefit from the updates of the stable pre-release version, as long as they don't conflict with the changes you have been working on. If there are conflicts, github will wait for you to resolve them. In case you have been recompiling the fpga, there will always be a conflict w.r.t. the file 'red_pitaya.bin' (since it is a binary file, github cannot simply merge the differences you implemented). The best way to deal with this problem is to recompile the fpga bitfile after the 'update from master'. This way the binary file in your repository will correspond to the fpga code of the merged verilog files, and github will understand from the most recent modification date of the file that your local version of red_pitaya.bin is the one to keep.
At some point, you might want to insert your changes into the master branch, because they have been well-tested and are going to be useful for everyone else, too. To do so, after having committed and synced all recent changes to your branch, click on "Pull request" in the upper right corner, enter a title and description concerning the changes you have made, and click "Send pull request". Now your job is done. I will review and test the modifications of your code once again, possibly fix incompatibility issues, and merge it into the master branch once all is well. After the merge, you can delete your development branch. If you plan to continue working on related changes, you can also keep the branch and send pull requests later on. If you plan to work on a different feature, I recommend you create a new branch with a name related to the new feature, since this will make the evolution history of the feature more understandable for others. Or, if you would like to go back to following the master branch, click on the little downward arrow besides the name of your branch close to the street bifurcation symbol in the upper left of the github window. You will be able to choose which branch to work on, and to select master.
Let's all try to stick to this protocol. It might seem a little complicated at first, but you will quickly appreciate the fact that other people's mistakes won't be able to endanger your working code, and that by following the commits of the master branch alone, you will realize if an update is incompatible with your work.
4) First steps
If the installation went well, you should now be able to load the package in python. If that works you can pass directly to the next section 'Connecting to the RedPitaya'.
End of explanation
"""
cd c:\lneuhaus\github\pyrpl
"""
Explanation: Sometimes, python has problems finding the path to pyrplockbox. In that case you should add the pyrplockbox directory to your pythonpath environment variable (http://stackoverflow.com/questions/3402168/permanently-add-a-directory-to-pythonpath). If you do not know how to do that, just manually navigate the ipython console to the directory, for example:
End of explanation
"""
from pyrpl import RedPitaya
"""
Explanation: Now retry to load the module. It should really work now.
End of explanation
"""
HOSTNAME = "192.168.1.100"
from pyrpl import RedPitaya
r = RedPitaya(hostname=HOSTNAME)
"""
Explanation: Connecting to the RedPitaya
You should have a working SD card (any version of the SD card content is okay) in your RedPitaya (for instructions see http://redpitaya.com/quick-start/). The RedPitaya should be connected via ethernet to your computer. To set this up, there is plenty of instructions on the RedPitaya website (http://redpitaya.com/quick-start/). If you type the ip address of your module in a browser, you should be able to start the different apps from the manufacturer. The default address is http://192.168.1.100.
If this works, we can load the python interface of pyrplockbox by specifying the RedPitaya's ip address.
End of explanation
"""
#check the value of input1
print(r.scope.voltage1)
"""
Explanation: If you see at least one '>' symbol, your computer has successfully connected to your RedPitaya via SSH. This means that your connection works. The message 'Server application started on port 2222' means that your computer has successfully installed and started a server application on your RedPitaya. Once you get 'Client started with success', your python session has successfully connected to that server and all things are in place to get started.
Basic communication with your RedPitaya
End of explanation
"""
#see how the adc reading fluctuates over time
import time
from matplotlib import pyplot as plt
times,data = [],[]
t0 = time.time()
n = 3000
for i in range(n):
times.append(time.time()-t0)
data.append(r.scope.voltage1)
print("Rough time to read one FPGA register: ", (time.time()-t0)/n*1e6, "µs")
%matplotlib inline
f, axarr = plt.subplots(1,2, sharey=True)
axarr[0].plot(times, data, "+");
axarr[0].set_title("ADC voltage vs time");
axarr[1].hist(data, bins=10, density=True, orientation="horizontal");
axarr[1].set_title("ADC voltage histogram");
"""
Explanation: With the last command, you have successfully retrieved a value from an FPGA register. This operation takes about 300 µs on my computer. So there is enough time to repeat the reading n times.
End of explanation
"""
#blink some leds for 5 seconds
from time import sleep
for i in range(1025):
r.hk.led=i
sleep(0.005)
# now feel free to play around a little to get familiar with binary representation by looking at the leds.
from time import sleep
r.hk.led = 0b00000001
for i in range(10):
r.hk.led = ~r.hk.led>>1
sleep(0.2)
import random
for i in range(100):
r.hk.led = random.randint(0,255)
sleep(0.02)
"""
Explanation: You see that the input values are not exactly zero. This is normal with all RedPitayas as some offsets are hard to keep zero when the environment changes (temperature etc.). So we will have to compensate for the offsets with our software. Another thing is that you see quite a bit of scatter between the points - almost so much that you cannot see that the datapoints are quantized. The conclusion here is that the input noise is typically not totally negligible. Therefore we will need to use every trick at hand to get optimal noise performance.
After reading from the RedPitaya, let's now try to write to the register controlling the first 8 yellow LEDs on the board. The number written to the LED register is displayed on the LED array in binary representation. You should see some fast flashing of the yellow LEDs for a few seconds when you execute the next block.
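To predict which LEDs a given register value lights up, you can inspect its 8-bit binary representation in plain Python (no RedPitaya needed):

```python
# Each of the 8 yellow LEDs corresponds to one bit of the register value
for value in (1, 37, 255):
    print(value, "->", format(value, "08b"))
# 1   -> 00000001
# 37  -> 00100101
# 255 -> 11111111
```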
End of explanation
"""
r.hk #"housekeeping" = LEDs and digital inputs/outputs
r.ams #"analog mixed signals" = auxiliary ADCs and DACs.
r.scope #oscilloscope interface
r.asg1 #"arbitrary signal generator" channel 1
r.asg2 #"arbitrary signal generator" channel 2
r.pid0 #first of four PID modules
r.pid1
r.pid2
r.pid3
r.iq0 #first of three I+Q quadrature demodulation/modulation modules
r.iq1
r.iq2
r.iir #"infinite impulse response" filter module that can realize complex transfer functions
"""
Explanation: 5) RedPitaya modules
Let's now look a bit closer at the class RedPitaya. Besides managing the communication with your board, it contains different modules that represent the different sections of the FPGA. You already encountered two of them in the example above: "hk" and "scope". Here is the full list of modules:
End of explanation
"""
asg = r.asg1 # make a shortcut
print("Trigger sources:", asg.trigger_sources)
print("Output options: ", asg.output_directs)
"""
Explanation: ASG and Scope module
Arbitrary Signal Generator
There are two Arbitrary Signal Generator modules: asg1 and asg2. For these modules, any waveform composed of $2^{14}$ programmable points is sent to the output with arbitrary frequency and start phase upon a trigger event.
End of explanation
"""
asg.output_direct = 'out2'
asg.setup(waveform='halframp', frequency=20e4, amplitude=0.8, offset=0, trigger_source='immediately')
"""
Explanation: Let's set up the ASG to output a sawtooth signal of amplitude 0.8 V (peak-to-peak 1.6 V) at 200 kHz (frequency=20e4) on output 2:
End of explanation
"""
s = r.scope # shortcut
print("Available decimation factors:", s.decimations)
print("Trigger sources:", s.trigger_sources)
print("Available inputs: ", s.inputs)
"""
Explanation: Oscilloscope
The scope works similarly to the ASG, but in reverse: two channels are available. A table of $2^{14}$ datapoints for each channel is filled with the time series of incoming data. Downloading a full trace takes about 10 ms over standard ethernet. The rate at which the memory is filled is the sampling rate (125 MHz) divided by the value of 'decimation'. The property 'average' decides whether each datapoint is a single sample or the average of all samples over the decimation interval.
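The trace duration therefore follows directly from the decimation: duration = $2^{14}$ points × decimation / 125 MHz. A quick sanity check in plain Python (the decimation values below are typical powers of two; consult s.decimations for the exact list on your board):

```python
# Trace duration of a 125 MHz scope with 2**14 samples per trace
sampling_rate = 125e6
points = 2**14
for decimation in (1, 8, 64, 1024):
    duration = points * decimation / sampling_rate
    print(decimation, "->", duration * 1e3, "ms")
# decimation=64 gives about 8.4 ms, as used in the trigger example below
```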
End of explanation
"""
from time import sleep
from pyrpl import RedPitaya
#reload everything
#r = RedPitaya(hostname="192.168.1.100")
asg = r.asg1
s = r.scope
# turn off asg so the scope has a chance to measure its "off-state" as well
asg.output_direct = "off"
# setup scope
s.input1 = 'asg1'
# pass asg signal through pid0 with a simple integrator - just for fun (detailed explanations for pid will follow)
r.pid0.input = 'asg1'
r.pid0.ival = 0 # reset the integrator to zero
r.pid0.i = 1000 # unity gain frequency of 1000 hz
r.pid0.p = 1.0 # proportional gain of 1.0
r.pid0.inputfilter = [0,0,0,0] # leave input filter disabled for now
# show pid output on channel2
s.input2 = 'pid0'
# trig at zero volt crossing
s.threshold_ch1 = 0
# a positive/negative slope is detected by waiting for the input to
# sweep through a hysteresis interval around the trigger threshold in
# the right direction
s.hysteresis_ch1 = 0.01
# trigger on the input signal positive slope
s.trigger_source = 'ch1_positive_edge'
# take data symmetrically around the trigger event
s.trigger_delay = 0
# set decimation factor to 64 -> full scope trace is 8ns * 2^14 * decimation = 8.3 ms long
s.decimation = 64
# setup the scope for an acquisition
s.setup()
print("\nBefore turning on asg:")
print("Curve ready:", s.curve_ready()) # trigger should still be armed
# turn on asg and leave enough time for the scope to record the data
asg.setup(frequency=1e3, amplitude=0.3, start_phase=90, waveform='halframp', trigger_source='immediately')
sleep(0.010)
# check that the trigger has been disarmed
print("\nAfter turning on asg:")
print("Curve ready:", s.curve_ready())
print("Trigger event age [ms]:",8e-9*((s.current_timestamp&0xFFFFFFFFFFFFFFFF) - s.trigger_timestamp)*1000)
# plot the data
%matplotlib inline
plt.plot(s.times*1e3,s.curve(ch=1),s.times*1e3,s.curve(ch=2));
plt.xlabel("Time [ms]");
plt.ylabel("Voltage");
"""
Explanation: Let's have a look at a signal generated by asg1. Later we will use convenience functions to reduce the amount of code necessary to set up the scope:
End of explanation
"""
# useful functions for scope diagnostics
print("Curve ready:", s.curve_ready())
print("Trigger source:",s.trigger_source)
print("Trigger threshold [V]:",s.threshold_ch1)
print("Averaging:",s.average)
print("Trigger delay [s]:",s.trigger_delay)
print("Trace duration [s]: ",s.duration)
print("Trigger hysteresis [V]", s.hysteresis_ch1)
print("Current scope time [cycles]:",hex(s.current_timestamp))
print("Trigger time [cycles]:",hex(s.trigger_timestamp))
print("Current voltage on channel 1 [V]:", r.scope.voltage1)
print("First point in data buffer 1 [V]:", s.ch1_firstpoint)
"""
Explanation: What do we see? The blue trace for channel 1 shows just the output signal of the asg. The time=0 corresponds to the trigger event. One can see that the trigger was not activated by the constant signal of 0 at the beginning, since it did not cross the hysteresis interval. One can also see a 'bug': After setting up the asg, it outputs the first value of its data table until its waveform output is triggered. For the halframp signal, as it is implemented in pyrpl, this is the maximally negative value. However, we passed the argument start_phase=90 to the asg.setup function, which shifts the first point by a quarter period. Can you guess what happens when we set start_phase=180? You should try it out!
In green, we see the same signal, filtered through the pid module. The nonzero proportional gain leads to instant jumps along with the asg signal. The integrator is responsible for the constant decrease rate at the beginning, and the low-pass that smoothens the asg waveform a little. One can also foresee that, if we are not paying attention, too large an integrator gain will quickly saturate the outputs.
End of explanation
"""
print(r.pid0.help())
"""
Explanation: PID module
We have already seen some use of the pid module above. There are four PID modules available: pid0 to pid3.
End of explanation
"""
#make shortcut
pid = r.pid0
#turn off by setting gains to zero
pid.p,pid.i = 0,0
print("P/I gain when turned off:", pid.i,pid.p)
# small nonzero numbers set gain to minimum value - avoids rounding off to zero gain
pid.p = 1e-100
pid.i = 1e-100
print("Minimum proportional gain: ",pid.p)
print("Minimum integral unity-gain frequency [Hz]: ",pid.i)
# saturation at maximum values
pid.p = 1e100
pid.i = 1e100
print("Maximum proportional gain: ",pid.p)
print("Maximum integral unity-gain frequency [Hz]: ",pid.i)
"""
Explanation: Proportional and integral gain
End of explanation
"""
import numpy as np
#make shortcut
pid = r.pid0
# set input to asg1
pid.input = "asg1"
# set asg to constant 0.1 Volts
r.asg1.setup(waveform="DC", offset = 0.1)
# set scope ch1 to pid0
r.scope.input1 = 'pid0'
#turn off the gains for now
pid.p,pid.i = 0, 0
#set integral value to zero
pid.ival = 0
#prepare data recording
from time import time
times, ivals, outputs = [], [], []
# turn on integrator to whatever negative gain
pid.i = -10
# set integral value above the maximum positive voltage
pid.ival = 1.5
#take 1000 points - jitter of the ethernet delay will add a noise here but we dont care
for n in range(1000):
times.append(time())
ivals.append(pid.ival)
outputs.append(r.scope.voltage1)
#plot
import matplotlib.pyplot as plt
%matplotlib inline
times = np.array(times)-min(times)
plt.plot(times,ivals,times,outputs);
plt.xlabel("Time [s]");
plt.ylabel("Voltage");
"""
Explanation: Control with the integral value register
End of explanation
"""
# off by default
r.pid0.inputfilter
# minimum cutoff frequency is 2 Hz, maximum 77 kHz (for now)
r.pid0.inputfilter = [1,1e10,-1,-1e10]
print(r.pid0.inputfilter)
# not setting a coefficient turns that filter off
r.pid0.inputfilter = [0,4,8]
print(r.pid0.inputfilter)
# setting without a list also works
r.pid0.inputfilter = -2000
print(r.pid0.inputfilter)
# turn off again
r.pid0.inputfilter = []
print(r.pid0.inputfilter)
"""
Explanation: Again, what do we see? We set up the pid module with a constant (positive) input from the ASG. We then turned on the integrator (with negative gain), which will inevitably lead to a slow drift of the output towards negative voltages (blue trace). We had set the integral value above the positive saturation voltage, so that it takes longer to reach the negative saturation voltage. The output of the pid module is bound to saturate at +- 1 Volts, which is clearly visible in the green trace. The value of the integral is internally represented by a 32-bit number, so it can practically take arbitrarily large values compared to the 14-bit output. You can set it within the range from +4 to -4 V, for example if you want to exploit the delay, or even if you want to compensate it with proportional gain.
Input filters
The pid module has one more feature: a bank of 4 input filters in series. Each filter can be either off (bandwidth=0), lowpass (bandwidth positive) or highpass (bandwidth negative). Because of the way these filters are implemented, the filter bandwidths can only take values that scale as powers of 2.
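Because of this quantization, a requested cutoff like the -2000 Hz set above will be rounded to a nearby power of two. As a generic illustration of such rounding (a sketch of the principle only, not pyrpl's actual internal code):

```python
import math

def round_to_power_of_two(bandwidth):
    # Illustration only: snap a requested cutoff frequency to the nearest
    # power of two, keeping the sign convention
    # (0 = off, positive = lowpass, negative = highpass)
    if bandwidth == 0:
        return 0
    sign = 1 if bandwidth > 0 else -1
    exponent = round(math.log2(abs(bandwidth)))
    return sign * 2**exponent

print(round_to_power_of_two(2000))    # 2048
print(round_to_power_of_two(-2000))   # -2048
print(round_to_power_of_two(0))       # 0
```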
End of explanation
"""
#reload to make sure settings are default ones
from pyrpl import RedPitaya
r = RedPitaya(hostname="192.168.1.100")
#shortcut
iq = r.iq0
# modulation/demodulation frequency 25 MHz
# two lowpass filters with 10 and 20 kHz bandwidth
# input signal is analog input 1
# input AC-coupled with cutoff frequency near 50 kHz
# modulation amplitude 0.5 V
# modulation goes to out1
# output_signal is the demodulated quadrature 1
# quadrature_1 is amplified by 10
iq.setup(frequency=25e6, bandwidth=[10e3,20e3], gain=0.0,
phase=0, acbandwidth=50000, amplitude=0.5,
input='adc1', output_direct='out1',
output_signal='quadrature', quadrature_factor=10)
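The principle behind this setup (convolving the input with a sine and cosine at the carrier frequency and low-pass filtering, as explained below) can be illustrated with plain numpy on a synthetic signal; this is a conceptual sketch, not what runs on the FPGA.

```python
import numpy as np

fs = 125e6   # RedPitaya sampling rate
fc = 25e6    # carrier = demodulation frequency used in the setup above
t = np.arange(2000) / fs
signal = 0.3 * np.cos(2 * np.pi * fc * t + 0.5)   # synthetic input

# demodulate: multiply by cos/sin at the carrier, then lowpass (here: mean)
i_quad = np.mean(2 * signal * np.cos(2 * np.pi * fc * t))
q_quad = np.mean(2 * signal * np.sin(2 * np.pi * fc * t))
amplitude = np.hypot(i_quad, q_quad)    # recovers ~0.3
phase = np.arctan2(-q_quad, i_quad)     # recovers ~0.5 rad
print(amplitude, phase)
```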
"""
Explanation: You should now go back to the Scope and ASG example above and play around with the setting of these filters to convince yourself that they do what they are supposed to.
IQ module
Demodulation of a signal means convolving it with a sine and cosine at the 'carrier frequency'. The two resulting signals are usually low-pass filtered and called 'quadrature I' and and 'quadrature Q'. Based on this simple idea, the IQ module of pyrpl can implement several functionalities, depending on the particular setting of the various registers. In most cases, the configuration can be completely carried out through the setup function of the module.
<img src="IQmodule.png">
Lock-in detection / PDH / synchronous detection
End of explanation
"""
# shortcut for na
na = r.na
na.iq_name = 'iq1'
#take transfer functions. first: iq1 -> iq1, second iq1->out1->(your cable)->adc1
f, iq1, amplitudes = na.curve(start=1e3,stop=62.5e6,points=1001,rbw=1000,avg=1,amplitude=0.2,input='iq1',output_direct='off', acbandwidth=0)
f, adc1, amplitudes = na.curve(start=1e3,stop=62.5e6,points=1001,rbw=1000,avg=1,amplitude=0.2,input='adc1',output_direct='out1', acbandwidth=0)
#plot
from pyrpl.iir import bodeplot
%matplotlib inline
bodeplot([(f, iq1, "iq1->iq1"), (f, adc1, "iq1->out1->in1->iq1")], xlog=True)
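What the network analyzer does at each frequency point (described below) can be mimicked in plain numpy: excite a toy device with a sine, demodulate its response at the same frequency, and read off magnitude and phase. This is a conceptual sketch only; the toy device (gain 0.5, 10-sample delay) is an assumption for illustration.

```python
import numpy as np

fs, f0 = 125e6, 1e6          # sampling rate and probe frequency
n = np.arange(5000)          # an integer number of periods of f0
excitation = np.sin(2 * np.pi * f0 * n / fs)
response = 0.5 * np.roll(excitation, 10)   # toy device: gain 0.5, 10-sample delay

# demodulate the response with a complex reference and rescale
tf = 2j * np.mean(response * np.exp(-2j * np.pi * f0 * n / fs))
print(abs(tf))        # ~0.5: the device's attenuation
print(np.angle(tf))   # phase lag of the 10-sample delay (~-0.503 rad)
```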
"""
Explanation: After this setup, the demodulated quadrature is available as the output_signal of iq0, and can serve for example as the input of a PID module to stabilize the frequency of a laser to a reference cavity. The module was tested and is in daily use in our lab. Frequencies as low as 20 Hz and as high as 50 MHz have been used for this technique. At the present time, the functionality of a PDH-like detection as the one set up above cannot be conveniently tested internally. We plan to upgrade the IQ-module to VCO functionality in the near future, which will also enable testing the PDH functionality.
Network analyzer
When implementing complex functionality in the RedPitaya, the network analyzer module is by far the most useful tool for diagnostics. The network analyzer is able to probe the transfer function of any other module or external device by exciting the device with a sine of variable frequency and analyzing the resulting output from that device. This is done by demodulating the device output (=network analyzer input) with the same sine that was used for the excitation and a corresponding cosine, lowpass-filtering, and averaging the two quadratures for a well-defined number of cycles. From the two quadratures, one can extract the magnitude and phase shift of the device's transfer function at the probed frequencies. Let's illustrate the behaviour. For this example, you should connect output 1 to input 1 of your RedPitaya, such that we can compare the analog transfer function to a reference. Make sure you put a 50 Ohm terminator in parallel with input 1.
End of explanation
"""
# shortcut for na and bpf (bandpass filter)
na = r.na
na.iq_name = 'iq1'
bpf = r.iq2
# setup bandpass
bpf.setup(frequency = 2.5e6, #center frequency
Q=10.0, # the filter quality factor
acbandwidth = 10e5, # ac filter to remove pot. input offsets
phase=0, # nominal phase at center frequency (propagation phase lags not accounted for)
gain=2.0, # peak gain = +6 dB
output_direct='off',
output_signal='output_direct',
input='iq1')
# take transfer function
f, tf1, ampl = na.curve(start=1e5, stop=4e6, points=201, rbw=100, avg=3,
amplitude=0.2, input='iq2',output_direct='off')
# add a phase advance of 82.3 degrees and measure transfer function
bpf.phase = 82.3
f, tf2, ampl = na.curve(start=1e5, stop=4e6, points=201, rbw=100, avg=3,
amplitude=0.2, input='iq2',output_direct='off')
#plot
from pyrpl.iir import bodeplot
%matplotlib inline
bodeplot([(f, tf1, "phase = 0.0"), (f, tf2, "phase = %.1f"%bpf.phase)])
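The expected shape of such a filter can also be written down analytically. The sketch below evaluates an idealized second-order Lorentzian bandpass with the same center frequency, Q and peak gain as the setup above; it is a model for comparison, not the FPGA's exact response.

```python
import numpy as np

def bandpass_tf(f, f0=2.5e6, q=10.0, gain=2.0, phase_deg=0.0):
    """Idealized Lorentzian bandpass: peak gain `gain` at f0, quality factor q,
    plus an overall phase rotation like the one provided by the iq module."""
    s = 1j * f / f0                          # normalized complex frequency
    tf = gain * (s / q) / (1 + s / q + s ** 2)
    return tf * np.exp(1j * np.deg2rad(phase_deg))

print(abs(bandpass_tf(2.5e6)))   # 2.0 at the center frequency (+6 dB peak)
```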
"""
Explanation: If your cable is properly connected, you will see that both magnitudes are near 0 dB over most of the frequency range. Near the Nyquist frequency (62.5 MHz), one can see that the internal signal remains flat while the analog signal is strongly attenuated, as it should be to avoid aliasing. One can also see that the delay (phase lag) of the internal signal is much less than the one through the analog signal path.
If you have executed the last example (PDH detection) in this python session, iq0 should still send a modulation to out1, which is added to the signal of the network analyzer, and sampled by input1. In this case, you should see a little peak near the PDH modulation frequency, which was 25 MHz in the example above.
Lorentzian bandpass filter
The iq module can also be used as a bandpass filter with continuously tunable phase. Let's measure the transfer function of such a bandpass with the network analyzer:
End of explanation
"""
iq = r.iq0
# turn off pfd module for settings
iq.pfd_on = False
# local oscillator frequency
iq.frequency = 33.7e6
# local oscillator phase
iq.phase = 0
iq.input = 'adc1'
iq.output_direct = 'off'
iq.output_signal = 'pfd'
print("Before turning on:")
print("Frequency difference error integral", iq.pfd_integral)
print("After turning on:")
iq.pfd_on = True
for i in range(10):
    print("Frequency difference error integral", iq.pfd_integral)
"""
Explanation: Frequency comparator module
To lock the frequency of a VCO (Voltage controlled oscillator) to a frequency reference defined by the RedPitaya, the IQ module contains the frequency comparator block. This is how you set it up. You have to feed the output of this module through a PID block to send it to the analog output. As you will see, if your feedback is not already enabled when you turn on the module, its integrator will rapidly saturate (-585 is the maximum value here, while a value of the order of 1e-3 indicates a reasonable frequency lock).
End of explanation
"""
#reload to make sure settings are default ones
from pyrpl import RedPitaya
r = RedPitaya(hostname="192.168.1.100")
#shortcut
iir = r.iir
#print docstring of the setup function
print(iir.setup.__doc__)
#prepare plot parameters
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10, 6)
#setup a complicated transfer function
zeros = [ -4e4j-300, +4e4j-300,-2e5j-1000, +2e5j-1000, -2e6j-3000, +2e6j-3000]
poles = [ -1e6, -5e4j-300, +5e4j-300, -1e5j-3000, +1e5j-3000, -1e6j-30000, +1e6j-30000]
designdata = iir.setup(zeros, poles, loops=None, plot=True);
print("Filter sampling frequency: ", 125. / iir.loops, "MHz")
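The design constraints listed below can be encoded as a small pre-flight check before sending a specification to the module. This is a sketch of the rules as stated in the text, not pyrpl's own validation logic.

```python
def check_iir_design(zeros, poles, max_order=16):
    """Sanity-check a pole/zero IIR specification against the constraints
    listed in the text (sketch; not pyrpl's internal validation)."""
    if len(poles) <= len(zeros):
        raise ValueError("transfer function must be strictly proper")
    if len(poles) > max_order:
        raise ValueError("total filter order must be <= %d" % max_order)
    for roots in (zeros, poles):
        # every complex root must have its conjugate partner in the same list
        complex_roots = [complex(r) for r in roots if complex(r).imag != 0]
        for r in complex_roots:
            if r.conjugate() not in complex_roots:
                raise ValueError("complex roots must come in conjugate pairs")
    return True

zeros = [-4e4j - 300, +4e4j - 300, -2e5j - 1000, +2e5j - 1000]
poles = [-1e6, -5e4j - 300, +5e4j - 300, -1e5j - 3000, +1e5j - 3000]
print(check_iir_design(zeros, poles))  # True
```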
"""
Explanation: IIR module
Sometimes it is interesting to realize even more complicated filters. This is the case, for example, when a piezo resonance limits the maximum gain of a feedback loop. For these situations, the IIR module can implement filters with 'Infinite Impulse Response' (https://en.wikipedia.org/wiki/Infinite_impulse_response). It is your task to choose the filter to be implemented by specifying the complex values of the poles and zeros of the filter. In the current version of pyrpl, the IIR module can implement IIR filters with the following properties:
- strictly proper transfer function (number of poles > number of zeros)
- poles (zeros) either real or complex-conjugate pairs
- no three or more identical real poles (zeros)
- no two or more identical pairs of complex conjugate poles (zeros)
- pole and zero frequencies should be larger than $\frac{f_{\mathrm{nyquist}}}{1000}$ (but you can optimize the Nyquist frequency of your filter by tuning the 'loops' parameter)
- the DC-gain of the filter must be 1.0. Despite the FPGA implementation being more flexible, we found this constraint rather practical. If you need different behavior, pass the IIR signal through a PID module and use its input filter and proportional gain. If you still need different behaviour, the file iir.py is a good starting point.
- total filter order <= 16 (realizable with 8 parallel biquads)
- a remaining bug limits the dynamic range to about 30 dB before internal saturation interferes with filter performance
Filters whose poles have a positive real part are unstable by design. Zeros with positive real part lead to non-minimum phase lag. Nevertheless, the IIR module will let you implement these filters.
In general the IIR module is still fragile in the sense that you should verify the correct implementation of each filter you design. Usually you can trust the simulated transfer function. It is nevertheless a good idea to use the internal network analyzer module to actually measure the IIR transfer function with an amplitude comparable to the signal you expect to go through the filter, to verify that no saturation of internal filter signals limits its performance.
End of explanation
"""
# first thing to check if the filter is not ok
print("IIR overflows before:", bool(iir.overflow))
# measure tf of iir filter
r.iir.input = 'iq1'
f, tf, ampl = r.na.curve(iq_name='iq1', start=1e4, stop=3e6, points = 301, rbw=100, avg=1,
amplitude=0.1, input='iir', output_direct='off', logscale=True)
# first thing to check if the filter is not ok
print("IIR overflows after:", bool(iir.overflow))
#plot with design data
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10, 6)
from pyrpl.iir import bodeplot
bodeplot(designdata +[(f,tf,"measured system")],xlog=True)
"""
Explanation: If you try changing a few coefficients, you will see that your design filter is not always properly realized. The bottleneck here is the conversion from the analytical expression (poles and zeros) to the filter coefficients, not the FPGA performance. This conversion is (among other things) limited by floating point precision. We hope to provide a more robust algorithm in future versions. If you can obtain filter coefficients by another, preferably analytical method, this might lead to better results than our generic algorithm.
Let's check if the filter is really working as it is supposed to:
End of explanation
"""
#rescale the filter by 20fold reduction of DC gain
designdata = iir.setup(zeros,poles,g=0.1,loops=None,plot=False);
# first thing to check if the filter is not ok
print("IIR overflows before:", bool(iir.overflow))
# measure tf of iir filter
r.iir.input = 'iq1'
f, tf, ampl = r.iq1.na_trace(start=1e4, stop=3e6, points = 301, rbw=100, avg=1,
amplitude=0.1, input='iir', output_direct='off', logscale=True)
# first thing to check if the filter is not ok
print("IIR overflows after:", bool(iir.overflow))
#plot with design data
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10, 6)
from pyrpl.iir import bodeplot
bodeplot(designdata+[(f,tf,"measured system")],xlog=True)
"""
Explanation: As you can see, the filter has trouble realizing large dynamic ranges. With the current standard design software, it takes some 'practice' to design transfer functions which are properly implemented by the code. While most zeros are properly realized by the filter, you see that the first two poles suffer from some kind of saturation. We are working on an automatic rescaling of the coefficients to allow for optimum dynamic range. From the overflow register printed above the plot, you can also see that the network analyzer scan caused an internal overflow in the filter. All these are signs that different parameters should be tried.
A straightforward way to improve filter performance is to adjust the DC-gain and compensate it later with the gain of a subsequent PID module. See for yourself what the parameter g=0.1 (instead of the default value g=1.0) does here:
End of explanation
"""
iir = r.iir
# useful diagnostic functions
print("IIR on:", iir.on)
print("IIR bypassed:", iir.shortcut)
print("IIR copydata:", iir.copydata)
print("IIR loops:", iir.loops)
print("IIR overflows:", bin(iir.overflow))
print("\nCoefficients (6 per biquad):")
print(iir.coefficients)
# set the unity transfer function to the filter
iir._setup_unity()
"""
Explanation: You see that we have improved the second peak (and avoided internal overflows) at the cost of increased noise in other regions. Of course this noise can be reduced by increasing the NA averaging time. But maybe it will be detrimental to your application? After all, IIR filter design is far from trivial, but this tutorial should have given you enough information to get started and maybe to improve the way we have implemented the filter in pyrpl (e.g. by implementing automated filter coefficient scaling).
If you plan to play more with the filter, these are the remaining internal iir registers:
End of explanation
"""
pid = r.pid0
print(pid.help())
pid.ival #bug: help forgets about pid.ival: current integrator value [volts]
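The resistor dividers recommended in the text below can be sized with plain Ohm's-law arithmetic; the sketch below is an illustration, and the 2.2 kOhm values are just an example choice in the suggested few-kOhm range.

```python
def divider_ratio(r_top_ohm, r_bottom_ohm):
    """Attenuation Vout/Vin of an unloaded resistive divider."""
    return r_bottom_ohm / (r_top_ohm + r_bottom_ohm)

# halve a 0..1 V signal with two equal few-kOhm resistors; the few-kOhm
# scale keeps the divider stiff compared to the 1 MOhm input impedance
ratio = divider_ratio(2.2e3, 2.2e3)
print(ratio)  # 0.5
```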
"""
Explanation: 6) The Pyrpl class
The RedPitayas in our lab are mostly used to stabilize one item or another in quantum optics experiments. To do so, the experimenter usually does not want to bother with the detailed implementation on the RedPitaya while trying to understand the physics going on in her/his experiment. For this situation, we have developed the Pyrpl class, which provides an API with high-level functions such as:
# optimal PDH lock with setpoint 0.1 cavity bandwidth away from resonance
cavity.lock(method='pdh',detuning=0.1)
# unlock the cavity
cavity.unlock()
# calibrate the fringe height of an interferometer, and lock it at local oscillator phase 45 degrees
interferometer.lock(phase=45.0)
First attempts at locking
SECTION NOT READY YET, BECAUSE CODE NOT CLEANED YET
Now let's make a first attempt at locking something. Say you connect the error signal (transmission or reflection) of your setup to input 1. Make sure that the peak-to-peak of the error signal coincides with the maximum voltages the RedPitaya can handle (-1 to +1 V if the jumpers are set to LV). This is important for getting optimal noise performance. If your signal is too low, amplify it. If it is too high, you should build a voltage divider with 2 resistors of the order of a few kOhm (that way, the input impedance of the RedPitaya of 1 MOhm does not interfere).
Next, connect output 1 to the standard actuator at your hand, e.g. a piezo. Again, you should try to exploit the full -1 to +1 V output range. If the voltage at the actuator must be kept below 0.5 V for example, you should make another voltage divider for this. Make sure that you take the input impedance of your actuator into consideration here. If your output needs to be amplified, it is best practice to put the voltage divider after the amplifier so as to also attenuate the noise added by the amplifier. However, when this poses a problem (limited bandwidth because of the capacitance of the actuator), you have to put the voltage divider before the amplifier. Also, this is the moment when you should think about low-pass filtering the actuator voltage. Because of DAC noise, analog low-pass filters are usually more effective than digital ones. A 3dB bandwidth of the order of 100 Hz is a good starting point for most piezos.
You often need two actuators to control your cavity. This is because the output resolution of 14 bits can only realize 16384 different values. This would mean that with a finesse of 15000, you would only be able to set it to resonance or a linewidth away from it, but nothing in between. To solve this, use a coarse actuator to cover at least one free spectral range which brings you near the resonance, and a fine one whose range is 1000 or 10000 times smaller and which gives you fine graduation around the resonance. The coarse actuator should be strongly low-pass filtered (typical bandwidth of 1 Hz or even less), the fine actuator can have 100 Hz or even higher bandwidth. Do not get confused here: the unity-gain frequency of your final lock can be 10- or even 100-fold above the 3dB bandwidth of the analog filter at the output - it suffices to increase the proportional gain of the RedPitaya Lockbox.
Once everything is connected, let's grab a PID module, make a shortcut to it and print its helpstring. All modules have a method help() which prints all available registers and their description:
End of explanation
"""
pid.input = 'adc1'
pid.output_direct = 'out1'
#see other available options just for curiosity:
print(pid.inputs)
print(pid.output_directs)
"""
Explanation: We need to inform our RedPitaya about which connections we want to make. The cabling discussed above translates into:
End of explanation
"""
# turn on the laser
offresonant = r.scope.voltage1 #volts at analog input 1 with the unlocked cavity
# make a guess of what voltage you will measure at an optical resonance
resonant = 0.5 #Volts at analog input 1
# set the setpoint at relative reflection of 0.75 / rel. transmission of 0.25
pid.setpoint = 0.75*offresonant + 0.25*resonant
"""
Explanation: Finally, we need to define a setpoint. Let's first measure the offset when the laser is away from the resonance, and then measure or estimate how much light gets through on resonance.
End of explanation
"""
pid.i = 0 # make sure gain is off
pid.p = 0
#errorsignal = adc1 - setpoint
if resonant > offresonant: # when we are away from resonance, error is negative.
slopesign = 1.0 # therefore, near resonance, the slope is positive as the error crosses zero.
else:
slopesign = -1.0
gainsign = -slopesign #the gain must be the opposite to stabilize
# the effective gain will in any case be slopesign*gainsign = -1.
#Therefore we must start at the maximum positive voltage, so the negative effective gain leads to a decreasing output
pid.ival = 1.0 #sets the integrator value = output voltage to maximum
from time import sleep
sleep(1.0) #wait for the voltage to stabilize (adjust for a few times the lowpass filter bandwidth)
#finally, turn on the integrator
pid.i = gainsign * 0.1
#with a bit of luck, this should work
from time import time
t0 = time()
while True:
relative_error = abs((r.scope.voltage1-pid.setpoint)/(offresonant-resonant))
if time()-t0 > 2: #diagnostics every 2 seconds
        print("relative error:", relative_error)
t0 = time()
if relative_error < 0.1:
break
sleep(0.01)
if pid.ival <= -1:
        print("Resonance missed. Trying again slower..")
pid.ival = 1.2 #overshoot a little
pid.i /= 2
print("Resonance approach successful")
"""
Explanation: Now let's start to approach the resonance. We need to figure out from which side we are coming. The choice is made such that a simple integrator will naturally drift into the resonance and stay there:
End of explanation
"""
from pyrpl import RedPitaya
r = RedPitaya(hostname="192.168.1.100")
#shortcut
iq = r.iq0
iq.setup(frequency=1000e3, bandwidth=[10e3,20e3], gain=0.0,
phase=0, acbandwidth=50000, amplitude=0.4,
input='adc1', output_direct='out1',
output_signal='output_direct', quadrature_factor=0)
iq.frequency=10
r.scope.input1='adc1'
# shortcut for na
na = r.na
na.iq_name = "iq1"
# pid1 will be our device under test
pid = r.pid0
pid.input = 'iq1'
pid.i = 0
pid.ival = 0
pid.p = 1.0
pid.setpoint = 0
pid.inputfilter = []#[-1e3, 5e3, 20e3, 80e3]
# take the transfer function through pid1, this will take a few seconds...
x, y, ampl = na.curve(start=0,stop=200e3,points=101,rbw=100,avg=1,amplitude=0.5,input='iq1',output_direct='off', acbandwidth=0)
#plot
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
plt.plot(x*1e-3,np.abs(y)**2);
plt.xlabel("Frequency [kHz]");
plt.ylabel("|S21|");
r.pid0.input = 'iq1'
r.pid0.output_direct='off'
r.iq2.input='iq1'
r.iq2.setup(0,bandwidth=0,gain=1.0,phase=0,acbandwidth=100,amplitude=0,input='iq1',output_direct='out1')
r.pid0.p=0.1
x,y = na.na_trace(start=1e4,stop=1e4,points=401,rbw=100,avg=1,amplitude=0.1,input='adc1',output_direct='off', acbandwidth=0)
r.iq2.frequency=1e6
r.iq2._reads(0x140,4)
r.iq2._na_averages=125000000
r.iq0.output_direct='off'
r.scope.input2='dac2'
r.iq0.amplitude=0.5
r.iq0.amplitude
"""
Explanation: Questions to users: what parameters do you know?
finesse of the cavity? 1000
length? 1.57 m
what error signals are available? direct transmission, reflection AC -> directly an analog PDH signal
are modulators available? n/a
what cavity length / laser frequency actuators are available? Mephisto laser PZT, DC - 10 kHz, 48 MHz optical/V, RedPitaya voltage amplified x20
laser temperature: < 1 Hz, 2.5 GHz/V, after the AOM
what is known about them (displacement, bandwidth, amplifiers)?
what analog filters are present? YAG PZT at 10 kHz
impose the design of the outputs
More to come
End of explanation
"""
# Make sure the notebook was launched with the following option:
# ipython notebook --pylab=qt
from pyrpl.gui import RedPitayaGui
r = RedPitayaGui(HOSTNAME)
r.gui()
"""
Explanation: 7) The Graphical User Interface
Most of the modules described in section 5 can be controlled via a graphical user interface. The graphical window can be displayed with the following:
WARNING: For the GUI to work fine within an ipython session, the option --gui=qt has to be given to the command launching ipython. This makes sure that an event loop is running.
End of explanation
"""
from pyrpl.gui import RedPitayaGui
from PyQt4 import QtCore, QtGui
class RedPitayaGuiCustom(RedPitayaGui):
"""
This is the derived class containing our customizations
"""
    def customize_scope(self): #This function is called upon object instantiation
"""
        By overriding this function in the child class, the user can perform custom initializations.
"""
self.scope_widget.layout_custom = QtGui.QHBoxLayout()
        # Adds a horizontal layout for our extra buttons
self.scope_widget.button_scan = QtGui.QPushButton("Scan")
# creates a button "Scan"
self.scope_widget.button_lock = QtGui.QPushButton("Lock")
# creates a button "Lock"
self.scope_widget.label_setpoint = QtGui.QLabel("Setpoint")
# creates a label for the setpoint spinbox
self.scope_widget.spinbox_setpoint = QtGui.QDoubleSpinBox()
# creates a spinbox to enter the value of the setpoint
self.scope_widget.spinbox_setpoint.setDecimals(4)
# sets the desired number of decimals for the spinbox
self.scope_widget.spinbox_setpoint.setSingleStep(0.001)
# Change the step by which the setpoint is incremented when using the arrows
self.scope_widget.layout_custom.addWidget(self.scope_widget.button_scan)
self.scope_widget.layout_custom.addWidget(self.scope_widget.button_lock)
self.scope_widget.layout_custom.addWidget(self.scope_widget.label_setpoint)
self.scope_widget.layout_custom.addWidget(self.scope_widget.spinbox_setpoint)
# Adds the buttons in the layout
self.scope_widget.main_layout.addLayout(self.scope_widget.layout_custom)
# Adds the layout at the bottom of the scope layout
self.scope_widget.button_scan.clicked.connect(self.scan)
self.scope_widget.button_lock.clicked.connect(self.lock)
self.scope_widget.spinbox_setpoint.valueChanged.connect(self.change_setpoint)
# connects the buttons to the desired functions
    def custom_setup(self): #This function is also called upon object instantiation
"""
        By overriding this function in the child class, the user can perform custom initializations.
"""
#setup asg1 to output the desired ramp
self.asg1.offset = .5
self.asg1.scale = 0.5
self.asg1.waveform = "ramp"
self.asg1.frequency = 100
self.asg1.trigger_source = 'immediately'
#setup the scope to record approximately one period
self.scope.duration = 0.01
self.scope.input1 = 'dac1'
self.scope.input2 = 'dac2'
self.scope.trigger_source = 'asg1'
#automatically start the scope
self.scope_widget.run_continuous()
def change_setpoint(self):
"""
Directly reflects the value of the spinbox into the pid0 setpoint
"""
self.pid0.setpoint = self.scope_widget.spinbox_setpoint.value()
def lock(self): #Called when button lock is clicked
"""
Set up everything in "lock mode"
"""
# disable button lock
self.scope_widget.button_lock.setEnabled(False)
# enable button scan
self.scope_widget.button_scan.setEnabled(True)
# shut down the asg
self.asg1.output_direct = 'off'
# set pid input/outputs
self.pid0.input = 'adc1'
self.pid0.output_direct = 'out2'
#set pid parameters
self.pid0.setpoint = self.scope_widget.spinbox_setpoint.value()
self.pid0.p = 0.1
self.pid0.i = 100
self.pid0.ival = 0
    def scan(self): #Called when button scan is clicked
"""
Set up everything in "scan mode"
"""
# enable button lock
self.scope_widget.button_lock.setEnabled(True)
        # disable button scan
        self.scope_widget.button_scan.setEnabled(False)
# switch asg on
self.asg1.output_direct = 'out2'
#switch pid off
self.pid0.output_direct = 'off'
# Instantiate the class RedPitayaGuiCustom
r = RedPitayaGuiCustom(HOSTNAME)
# launch the gui
r.gui()
"""
Explanation: The following window should open itself. Feel free to play with the button and tabs to start and stop the scope acquisition...
<img src="gui.bmp">
The window is composed of several tabs, each corresponding to a particular module. Since they generate a graphical output, the scope, network analyzer, and spectrum analyzer modules are very pleasant to use in GUI mode. For instance, the scope tab can be used to display in real-time the waveforms acquired by the redpitaya scope. Since the refresh rate is quite good, the scope tab can be used to perform optical alignments or to monitor transient signals as one would do with a standalone scope.
Subclassing RedPitayaGui to customize the GUI
It is often convenient to develop a GUI that relies heavily on the existing RedPitayaGui, but with a few more buttons or functionalities. In this case, the most convenient solution is to derive the RedPitayaGui class. The GUI is programmed using the framework PyQt4. The full documentation of the framework can be found here: http://pyqt.sourceforge.net/Docs/PyQt4/. However, to quickly start in the right direction, a simple example of how to customize the gui is given below: The following code shows how to add a few buttons at the bottom of the scope tab to switch the experiment between the two states: Scanning with asg1/Locking with pid1
End of explanation
"""
%pylab qt
from pyrpl import Pyrpl
p = Pyrpl('test') # we have to do something about the notebook initializations...
import asyncio
async def run_temperature_lock(setpoint=0.1): # coroutines can receive arguments
with p.asgs.pop("temperature") as asg: # use the context manager "with" to
# make sure the asg will be freed after the acquisition
        asg.setup(frequency=0, amplitude=0, offset=0) # Use the asg as a dummy
        while IS_TEMP_LOCK_ACTIVE: # The loop will run until this flag is manually changed to False
await asyncio.sleep(1) # Give way to other coroutines for 1 s
            measured_temp = asg.offset # Dummy "temperature" measurement
            asg.offset += (setpoint - measured_temp)*0.1 # feedback with an integral gain
print("measured temp: ", measured_temp) # print the measured value to see how the execution flow works
async def run_n_fits(n): # a coroutine to launch n acquisitions
sa = p.spectrumanalyzer
with p.asgs.pop("fit_spectra") as asg: # use contextmanager again
asg.setup(output_direct='out1',
trigger_source='immediately')
freqs = [] # variables stay available all along the coroutine's execution
        for i in range(n): # The coroutine will be executed several times on the await statement inside this loop
asg.setup(frequency=1000*i) # Move the asg frequency
sa.setup(input=asg, avg=10, span=100e3, baseband=True) # setup the sa for the acquisition
spectrum = await sa.single_async() # wait for 10 averages to be ready
freq = sa.data_x[spectrum.argmax()] # take the max of the spectrum
            freqs.append(freq) # append it to the results
print("measured peak frequency: ", freq) # print to show how the execution goes
return freqs # Once the execution is over, the Future will be filled with the result...
from asyncio import ensure_future, get_event_loop
IS_TEMP_LOCK_ACTIVE = True
temp_future = ensure_future(run_temperature_lock(0.5)) # send temperature control task to the eventloop
fits_future = ensure_future(run_n_fits(50)) # send spectrum measurement task to the eventloop
## add the following lines if you don't already have an event_loop configured in ipython
# LOOP = get_event_loop()
# LOOP.run_until_complete()
IS_TEMP_LOCK_ACTIVE = False # hint: you can stop the spectrum acquisition task by pressing "pause" or "stop" in the
print(fits_future.result())
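For readers without the hardware, the same scheduling pattern can be tried in a hardware-free toy version (pure asyncio; the feedback constant, sleep times and iteration counts below are arbitrary illustrative choices).

```python
import asyncio

async def toy_feedback(n_steps, gain=0.5):
    """Toy 'integral feedback' towards a setpoint of 1.0; each await hands
    control back to the event loop, as in the temperature lock above."""
    x = 0.0
    for _ in range(n_steps):
        await asyncio.sleep(0.001)   # give way to other coroutines
        x += (1.0 - x) * gain        # step towards the setpoint
    return x

async def main():
    # schedule two concurrent tasks and wait for both results
    task_a = asyncio.ensure_future(toy_feedback(5))
    task_b = asyncio.ensure_future(toy_feedback(3))
    return await asyncio.gather(task_a, task_b)

# outside a notebook; inside an already-running loop use ensure_future instead
results = asyncio.run(main())
print(results)  # [0.96875, 0.875]
```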
"""
Explanation: Now, a custom gui with several extra buttons at the bottom of the scope tab should open itself. You can play with the buttons "scan" and "Lock" and see the effect on the channels.
<img src="custom_gui.png">
8) Using asynchronous functions with python 3
Pyrpl uses the Qt eventloop to perform asynchronous tasks, but it has been set as the default loop of asyncio, such that you only need to learn how to use the standard python module asyncio, and you don't need to know anything about Qt. To give you a quick overview of what can be done, we present in the following block an exemple of 2 tasks running in parrallele. The first one mimicks a temperature control loop, measuring periodically a signal every 1 s, and changing the offset of an asg based on the measured value (we realize this way a slow and rudimentary software pid). In parrallele, another task consists in repeatedly shifting the frequency of an asg, and measuring an averaged spectrum on the spectrum analyzer.
Both tasks are defined by coroutines (a python function that is preceded by the keyword async, and that can contain the keyword await). Basically, the execution of each coroutine is interrupted whenever the keyword await is encountered, giving the chance to other tasks to be executed. It will only be resumed once the underlying coroutine's value becomes ready.
Finally to execute the cocroutines, it is not enough to call my_coroutine(), since we need to send the task to the event loop. For that, we use the function ensure_future from the asyncio module. This function immediately returns an object that is not the result of the task (not the object that is behind return inside the coroutine), but rather a Future object, that can be used to retrieve the actual result once it is ready (this is done by calling future.result() latter on).
If you are executing the code inside the ipython notebook, then, this is all you have to do, since an event loop is already running in the back (a qt eventloop if you are using the option %pylab qt). Otherwise, you have to use one of the functions (LOOP.run_forever(), LOOP.run_until_complete(), or LOOP.run_in_executor()) to launch the eventloop.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/test-institute-1/cmip6/models/sandbox-2/aerosol.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-1', 'sandbox-2', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: TEST-INSTITUTE-1
Source ID: SANDBOX-2
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:43
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
#     "troposphere"
# "stratosphere"
#     "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
#     "3D number concentration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
shikhar413/openmc | examples/jupyter/search.ipynb | mit | # Initialize third-party libraries and the OpenMC Python API
import matplotlib.pyplot as plt
import numpy as np
import openmc
import openmc.model
%matplotlib inline
"""
Explanation: Criticality Search
This notebook illustrates the usage of the OpenMC Python API's generic eigenvalue search capability. In this Notebook, we will do a critical boron concentration search of a typical PWR pin cell.
To use the search functionality, we must create a function which creates our model according to the input parameter we wish to search for (in this case, the boron concentration).
This notebook will first create that function, and then, run the search.
End of explanation
"""
# Create the model. `ppm_Boron` will be the parametric variable.
def build_model(ppm_Boron):
# Create the pin materials
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_element('U', 1., enrichment=1.6)
fuel.add_element('O', 2.)
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_element('Zr', 1.)
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.741)
water.add_element('H', 2.)
water.add_element('O', 1.)
# Include the amount of boron in the water based on the ppm,
# neglecting the other constituents of boric acid
water.add_element('B', ppm_Boron * 1e-6)
# Instantiate a Materials object
materials = openmc.Materials([fuel, zircaloy, water])
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(r=0.39218)
clad_outer_radius = openmc.ZCylinder(r=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius & (+min_x & -max_x & +min_y & -max_y)
# Create root Universe
root_universe = openmc.Universe(name='root universe')
root_universe.add_cells([fuel_cell, clad_cell, moderator_cell])
# Create Geometry and set root universe
geometry = openmc.Geometry(root_universe)
# Instantiate a Settings object
settings = openmc.Settings()
# Set simulation parameters
settings.batches = 300
settings.inactive = 20
settings.particles = 1000
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -10, 0.63, 0.63, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings.source = openmc.source.Source(space=uniform_dist)
# We don't need a tallies file, so don't waste the disk input/output time
settings.output = {'tallies': False}
model = openmc.model.Model(geometry, materials, settings)
return model
"""
Explanation: Create Parametrized Model
To perform the search we will use the openmc.search_for_keff function. This function requires that a separate function be defined which creates a parametrized model to analyze. The model must be stored in an openmc.model.Model object. The first parameter of this model-building function will be modified during the search process for our critical eigenvalue.
Our model will be a pin-cell from the Multi-Group Mode Part II assembly, except this time the entire model building process will be contained within a function, and the Boron concentration will be parametrized.
End of explanation
"""
# Perform the search
crit_ppm, guesses, keffs = openmc.search_for_keff(build_model, bracket=[1000., 2500.],
tol=1e-2, print_iterations=True)
print('Critical Boron Concentration: {:4.0f} ppm'.format(crit_ppm))
"""
Explanation: Search for the Critical Boron Concentration
To perform the search we simply call the openmc.search_for_keff function and pass in the relevant arguments. For our purposes we will be passing in the model building function (build_model defined above), a bracketed range for the expected critical Boron concentration (1,000 to 2,500 ppm), the tolerance, and the method we wish to use.
Instead of the bracketed range we could have used a single initial guess, but we have elected not to in this example. Finally, due to the high noise inherent in using as few particle histories as this example does, the tolerance on the final keff value will be rather large (1.e-2) and the default 'bisection' method will be used for the search.
End of explanation
"""
plt.figure(figsize=(8, 4.5))
plt.title('Eigenvalue versus Boron Concentration')
# Create a scatter plot using the mean value of keff
plt.scatter(guesses, [keffs[i].nominal_value for i in range(len(keffs))])
plt.xlabel('Boron Concentration [ppm]')
plt.ylabel('Eigenvalue')
plt.show()
"""
Explanation: Finally, the openmc.search_for_keff function also provided us with lists of the guesses and corresponding keff values generated during the search process with OpenMC. Let's use that information to make a quick plot of keff versus boron concentration.
End of explanation
"""
|
landlab/landlab | notebooks/tutorials/terrain_analysis/steepness_finder/steepness_finder.ipynb | mit | import copy
import numpy as np
import matplotlib as mpl
from landlab import RasterModelGrid, imshow_grid
from landlab.io import read_esri_ascii
from landlab.components import FlowAccumulator, SteepnessFinder
"""
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../../landlab_header.png"></a>
Using the SteepnessFinder Component
Background
Given an input digital elevation model (DEM), the SteepnessFinder component calculates the steepness index for nodes or stream segments in the drainage network. The steepness index is a measure of channel gradient that is normalized to compensate for the correlation between gradient and drainage area. The definition of the steepness index derives from an idealized mathematical relationship between channel gradient and drainage area,
$$S = k_{sn} A^\theta$$
where $S$ is local channel gradient, $A$ is drainage area, $k_{sn}$ is the steepness index, and $\theta$ is the concavity index (because its value reflects the upward concavity of the stream profile; a value of 0 would represent a linear profile with no concavity). The definition of steepness index is therefore
$$k_{sn} = \frac{S}{A^\theta}$$
The occurrence of an approximate power-law relationship between gradient and drainage area was noted by, for example, Hack (1957, his equation 2) and Flint (1974) (it is sometimes called "Flint's Law", John Hack having already had a different scaling relation named for him; see the HackCalculator component tutorial). The emergence of DEMs and computers powerful enough to analyze them opened the door to statistical exploration of the slope-area relation (Tarboton and Bras, 1989), and the recognition that the relationship can be interpreted in terms of geomorphic processes (Willgoose et al., 1991). The concavity and steepness indices are defined and discussed in Whipple and Tucker (1999). The steepness index, and the related metric the chi index (see the ChiFinder tutorial) have become widely used as methods for identifying anomalies in channel gradient that may related to tectonics, lithology, or landscape transience (see, e.g., Wobus et al., 2006; Kirby and Whipple, 2012).
Imports
First, import what we'll need:
End of explanation
"""
print(SteepnessFinder.__doc__)
"""
Explanation: Documentation
The Reference Documentation provides information about the SteepnessFinder class, describes its methods and attributes, and provides a link to the source code.
The SteepnessFinder class docstring describes the component and provides some simple examples:
End of explanation
"""
print(SteepnessFinder.__init__.__doc__)
"""
Explanation: The __init__ docstring lists the parameters:
End of explanation
"""
# read the DEM
(grid, elev) = read_esri_ascii("hugo_site_filled.asc", name="topographic__elevation")
grid.set_watershed_boundary_condition(elev)
cmap = copy.copy(mpl.cm.get_cmap("pink"))
imshow_grid(grid, elev, cmap=cmap, colorbar_label="Elevation (m)")
"""
Explanation: Example 1
In this example, we read in a small digital elevation model (DEM) from the Sevilleta National Wildlife Refuge, NM, USA.
The DEM file is in ESRI Ascii format, with NODATA codes for cells outside the main watershed. We'll use the Landlab grid method set_watershed_boundary_condition to assign closed-boundary status to NODATA cells.
End of explanation
"""
fa = FlowAccumulator(
grid,
flow_director="FlowDirectorD8", # use D8 routing
)
fa.run_one_step() # run the flow accumulator
cmap = copy.copy(mpl.cm.get_cmap("Blues"))
imshow_grid(
grid,
    np.log10(grid.at_node["drainage_area"] + 1.0),  # log scale helps show drainage
cmap=cmap,
colorbar_label="Log10(drainage area (m2))",
)
"""
Explanation: The SteepnessFinder needs to have drainage areas pre-calculated. We'll do that with the FlowAccumulator component, using D8 flow routing (each DEM cell drains to whichever of its 8 neighbors lies in the steepest downslope direction). Depressions in the DEM that would otherwise block the flow are not an issue here, because the DEM we read in (hugo_site_filled.asc) has already been pit-filled.
End of explanation
"""
sf = SteepnessFinder(grid, min_drainage_area=2.0e4)
sf.calculate_steepnesses()
cmap = copy.copy(mpl.cm.get_cmap("viridis"))
imshow_grid(
grid,
grid.at_node["channel__steepness_index"],
cmap=cmap,
colorbar_label="Steepness index",
)
"""
Explanation: Now run the SteepnessFinder and display the map of $k_{sn}$ values:
End of explanation
"""
# calculate steepness
sf = SteepnessFinder(grid, elev_step=4.0, min_drainage_area=2.0e4)
sf.calculate_steepnesses()
cmap = copy.copy(mpl.cm.get_cmap("viridis"))
imshow_grid(
grid,
grid.at_node["channel__steepness_index"],
cmap=cmap,
colorbar_label="Steepness index",
)
"""
Explanation: Example 2: fixed elevation drop
One challenge in extracting $k_{sn}$ from digital elevation data is noise: cell-to-cell variations in slope can make it hard to visualize coherent patterns, as the above example demonstrates. One solution, discussed by Wobus et al. (2006), is to use a fixed elevation drop: one starts from a given pixel and iterates from pixel-to-pixel downstream until the elevation difference from the starting point is equal to or greater than a specified drop distance. One advantage of this method is that it prevents the analyzed segments from having zero slope. Another is that it effectively averages gradient over a longer horizontal distance that depends on the local gradient: lower gradients, which are generally more prone to noise, will be averaged over a longer distance, and vice versa.
End of explanation
"""
|
eggie5/UCSD-MAS-DSE230 | hmwk4/HW4 - Linear Regression-Redacted.ipynb | mit | import pickle
import pandas as pd
!ls *.pickle # check
!curl -o "stations_projections.pickle" "http://mas-dse-open.s3.amazonaws.com/Weather/stations_projections.pickle"
data = pickle.load(open("stations_projections.pickle",'rb'))
data.shape
data.head(1)
# break up the lists of coefficients separate columns
for col in [u'TAVG_coeff', u'TRANGE_coeff', u'SNWD_coeff']:
for i in range(3):
new_col=col+str(i+1)
data[new_col]=[e[i] for e in list(data[col])]
data.drop(labels=col,axis=1,inplace=True)
data.drop(labels='station',axis=1,inplace=True)
print data.columns
data.head(3)
"""
Explanation: Linear Regression - Interpreting the result
In this notebook we use linear regression to predict the coefficients corresponding to the top eigenvectors of the measurements:
* TAVG: The average temperature for day/location. (TMAX + TMIN)/2
* TRANGE: The temperature range between the highest and lowest temperatures of the day. TMAX-TMIN.
* SNWD: The depth of the accumulated snow.
These 9 variables are the output variables that we aim to predict.
The 4 input variables we use for the regression are properties of the location of the station:
* latitude, longitude: location of the station.
* elevation: the elevation of the location above sea level.
* dist_coast: the distance of the station from the coast (in kilometers).
Read and parse the data
End of explanation
"""
from sklearn.linear_model import LinearRegression
"""
Explanation: Performing and evaluating the regression
As the size of the data is modest, we can perform the regression using regular python (not spark) running on a laptop. We use the library sklearn
End of explanation
"""
# Compute score changes
def compute_scores(y_label,X_Train,y_Train,X_test,Y_test):
lg = LinearRegression()
lg.fit(X_Train,y_Train)
train_score = lg.score(X_Train,y_Train)
test_score = lg.score(X_test,Y_test)
print('R-squared(Coeff. of determination): Train:%.3f, Test:%.3f, Ratio:%.3f\n' % (train_score,test_score,(test_score/train_score)))
full=set(range(X_Train.shape[1])) #col index list
for i in range(X_Train.shape[1]):
        L=list(full.difference(set([i]))) # all column indices except i
L.sort()
r_train_X=X_Train[:,L]
r_test_X=X_test[:,L]
lg = LinearRegression()
lg.fit(r_train_X,y_Train)
r_train_score = lg.score(r_train_X,y_Train)
r_test_score = lg.score(r_test_X,Y_test)
print "removed",data.columns[i],
print "Score decrease: \tTrain:%5.3f" % (train_score-r_train_score),
print "\tTest: %5.3f " % (test_score-r_test_score)
"""
Explanation: Coefficient of determination
Computed by calling the method LinearRegression.score()
The regression score comes under several names: "Coefficient of determination", $R^2$, "R squared score", "percentage of variance explained", "correlation coefficient". It is explained in more detail in wikipedia.
Roughly speaking the $R^2$-score measures the fraction of the variance of the regression output variable that is explained by the prediction function. The score varies between 0 and 1. A score of 1 means that the regression function perfectly predicts the value of $y$. A score of 0 means that it does not predict $y$ at all.
Training score vs Test score
Suppose we fit a regression function with 10 features to 10 data points. We are very likely to fit the data perfectly and get a score of 1. However, this does not mean that our model truly explains the data. It just means that the number of training examples we are using to fit the model is too small. To detect this situation, we can compute the score of the model that was fit to the training set, on a test set. If the ratio between the test score and the training score is smaller than, say, 0.1, then our regression function probably over-fits the data.
Finding the importance of input variables
The fact that a regression coefficient is far from zero provides some indication that it is important. However, the size of these coefficients also depends on the scaling of the variables. A much more reliable way to find out which of the input variables are important is to compare the score of the regression function we get when using all of the input variables to the score when one of the variables is eliminated. This is sometimes called "sensitivity analysis"
End of explanation
"""
from numpy.random import rand
N=data.shape[0]
train_i = rand(N)>0.5
Train = data.ix[train_i,:]
Test = data.ix[~train_i,:]
print data.shape,Train.shape,Test.shape
print Train.ix[:,:4].head()
from matplotlib import pyplot as plt
%matplotlib inline
def plot_regressions(X_test, y_test, clf):
print X_test.shape
print y_test.shape
plt.scatter(X_test, y_test, color='black')
plt.plot(X_test, clf.predict(X_test), color='blue',linewidth=3)
from sklearn.cross_validation import train_test_split
train_X = Train.ix[:,:4].values
test_X=Test.ix[:,:4].values
input_names=list(data.columns[:4])
for target in ["TAVG","TRANGE","SNWD"]:
for j in range(1,4):
y_label = target+"_coeff"+str(j)
train_y = Train[y_label]
test_y = Test[y_label]
lg = LinearRegression()
lg.fit(train_X,train_y)
print "\nTarget variable: ", y_label, '#'*40
print "Coeffs: ",\
' '.join(['%s:%5.2f ' % (input_names[i],lg.coef_[i]) for i in range(len(lg.coef_))])
compute_scores(y_label, train_X, train_y, test_X, test_y)
"""
Explanation: Partition into training set and test set
By dividing the data into two parts, we can detect when our model over-fits. When over-fitting happens, the significance on the test set is much smaller than the significance on the training set.
End of explanation
"""
|
vasco-da-gama/ros_hadoop | doc/Tutorial.ipynb | apache-2.0 | %%bash
echo -e "Current working directory: $(pwd)\n\n"
tree -d -L 2 /opt/ros_hadoop/
%%bash
# assuming you start the notebook in the doc/ folder of master (default Dockerfile build)
java -jar ../lib/rosbaginputformat.jar -f /opt/ros_hadoop/master/dist/HMB_4.bag
"""
Explanation: RosbagInputFormat
RosbagInputFormat is an open source splitable Hadoop InputFormat for the rosbag file format.
Usage from Spark (pyspark)
Example data can be found for instance at https://github.com/udacity/self-driving-car/tree/master/datasets published under MIT License.
Check that the rosbag file version is V2.0
The code you cloned is located in /opt/ros_hadoop/master while the latest release is in /opt/ros_hadoop/latest
../lib/rosbaginputformat.jar is a symlink to a recent version. You can replace it with the version you would like to test.
bash
java -jar ../lib/rosbaginputformat.jar --version -f /opt/ros_hadoop/master/dist/HMB_4.bag
Extract the index as configuration
The index is a very small configuration file containing a protobuf array that will be given in the job configuration.
Note that the operation will not process or parse the whole bag file, but will simply seek to the required offsets.
End of explanation
"""
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
sparkConf = SparkConf()
sparkConf.setMaster("local[*]")
sparkConf.setAppName("ros_hadoop")
sparkConf.set("spark.jars", "../lib/protobuf-java-3.3.0.jar,../lib/rosbaginputformat.jar,../lib/scala-library-2.11.8.jar")
spark = SparkSession.builder.config(conf=sparkConf).getOrCreate()
sc = spark.sparkContext
"""
Explanation: This will generate a very small file named HMB_4.bag.idx.bin in the same folder.
Copy the bag file in HDFS
Using your favorite tool put the bag file in your working HDFS folder.
Note: keep the index json file as configuration to your jobs, do not put small files in HDFS.
For convenience we already provide an example file (/opt/ros_hadoop/master/dist/HMB_4.bag) in the HDFS under /user/root/
bash
hdfs dfs -put /opt/ros_hadoop/master/dist/HMB_4.bag
hdfs dfs -ls
Process the ros bag file in Spark using the RosbagInputFormat
Create the Spark Session or get an existing one
End of explanation
"""
fin = sc.newAPIHadoopFile(
path = "hdfs://127.0.0.1:9000/user/root/HMB_4.bag",
inputFormatClass = "de.valtech.foss.RosbagMapInputFormat",
keyClass = "org.apache.hadoop.io.LongWritable",
valueClass = "org.apache.hadoop.io.MapWritable",
conf = {"RosbagInputFormat.chunkIdx":"/opt/ros_hadoop/master/dist/HMB_4.bag.idx.bin"})
"""
Explanation: Create an RDD from the Rosbag file
Note: your HDFS address might differ.
End of explanation
"""
conn_a = fin.filter(lambda r: r[1]['header']['op'] == 7).map(lambda r: r[1]).collect()
conn_d = {str(k['header']['topic']):k for k in conn_a}
# see topic names
conn_d.keys()
"""
Explanation: Interpret the Messages
To interpret the messages we need the connections.
We could get the connections as configuration as well. For now we collect the connections into the Spark driver in a dictionary and use it in the subsequent RDD actions. Note that a future version of the RosbagInputFormat will provide alternative implementations.
Collect the connections from all Spark partitions of the bag file into the Spark driver
End of explanation
"""
%run -i ../src/main/python/functions.py
"""
Explanation: Load the python map functions from src/main/python/functions.py
End of explanation
"""
%matplotlib nbagg
# use %matplotlib notebook in python3
from functools import partial
import pandas as pd
import numpy as np
# Take messages from '/imu/data' topic using default str func
rdd = fin.flatMap(
partial(msg_map, conn=conn_d['/imu/data'])
)
print(rdd.take(1)[0])
"""
Explanation: Use of msg_map to apply a function on all messages
Python rosbag.bag needs to be installed on all Spark workers.
The msg_map function (from src/main/python/functions.py) takes three arguments:
1. r = the message or RDD record Tuple
2. func = a function (default str) to apply to the ROS message
3. conn = a connection to specify what topic to process
End of explanation
"""
from PIL import Image
from io import BytesIO
res = fin.flatMap(
partial(msg_map, func=lambda r: r.data, conn=conn_d['/center_camera/image_color/compressed'])
).take(50)
Image.open(BytesIO(res[48]))
"""
Explanation: Image data from camera messages
An example of taking messages using a func other than default str.
In our case we apply a lambda to messages from the '/center_camera/image_color/compressed' topic. As usual with Spark the operation will happen in parallel on all workers.
End of explanation
"""
def f(msg):
return (msg.header.stamp.secs, msg.fuel_level)
d = fin.flatMap(
partial(msg_map, func=f, conn=conn_d['/vehicle/fuel_level_report'])
).toDF().toPandas()
d.set_index('_1').plot()
"""
Explanation: Plot fuel level
The topic /vehicle/fuel_level_report contains 2215 ROS messages. Let us plot the header.stamp in seconds vs. fuel_level using a pandas dataframe
End of explanation
"""
def f(msg):
from keras.layers import dot, Dot, Input
from keras.models import Model
linear_acceleration = {
'x': msg.linear_acceleration.x,
'y': msg.linear_acceleration.y,
'z': msg.linear_acceleration.z,
}
linear_acceleration_covariance = np.array(msg.linear_acceleration_covariance)
i1 = Input(shape=(3,))
i2 = Input(shape=(3,))
o = dot([i1,i2], axes=1)
model = Model([i1,i2], o)
# return a tuple with (numpy dot product, keras dot "predict")
return (
np.dot(linear_acceleration_covariance.reshape(3,3),
[linear_acceleration['x'], linear_acceleration['y'], linear_acceleration['z']]),
model.predict([
np.array([[ linear_acceleration['x'], linear_acceleration['y'], linear_acceleration['z'] ]]),
linear_acceleration_covariance.reshape((3,3))])
)
fin.flatMap(partial(msg_map, func=f, conn=conn_d['/vehicle/imu/data_raw'])).take(5)
# tuple with (numpy dot product, keras dot "predict")
"""
Explanation: Machine Learning models on Spark workers
A dot product Keras "model" for each message from a topic. We will compare it with the one computed with numpy.
Note that the imports happen in the workers and not in the driver. The connection dictionary, on the other hand, is sent to the workers via the closure.
End of explanation
"""
|
kubeflow/examples | house-prices-kaggle-competition/house-prices-kale.ipynb | apache-2.0 | !pip install --user -r requirements.txt
"""
Explanation: Kaggle Getting Started Competition : House Prices - Advanced Regression Techniques
The notebook is based on the notebook provided for House prices Kaggle competition. The notebook is a buildup of hands-on-exercises presented in Kaggle Learn courses of Intermediate Machine Learning and Feature Engineering
Install necessary packages
We can install the necessary packages either by running pip install --user <package_name> or by including everything in a requirements.txt file and running pip install --user -r requirements.txt. We have put the dependencies in a requirements.txt file, so we will use the latter method.
NOTE: Do not forget to use the --user argument. It is necessary if you want to use Kale to transform this notebook into a Kubeflow pipeline
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from IPython.display import display
from pandas.api.types import CategoricalDtype
from category_encoders import MEstimateEncoder
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_regression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.metrics import (r2_score,mean_squared_error,
mean_squared_log_error,make_scorer)
from xgboost import XGBRegressor
"""
Explanation: Imports
End of explanation
"""
def load_and_preprocess_data():
# Read data
path = "data/"
df_train = pd.read_csv(path + "train.csv", index_col="Id")
df_test = pd.read_csv(path + "test.csv", index_col="Id")
# Merge the splits so we can process them together
df = pd.concat([df_train, df_test])
# Preprocessing
df = clean(df)
df = encode(df)
df = impute(df)
# Reform splits
df_train = df.loc[df_train.index, :]
df_test = df.loc[df_test.index, :]
return df_train, df_test
df_train, df_test = load_and_preprocess_data()
"""
Explanation: Load Data
End of explanation
"""
df = pd.concat([df_train, df_test])  # combined view of both splits for exploration
df.info()
df.head()
"""
Explanation: Exploring the dataset
End of explanation
"""
df.Exterior2nd.unique()
"""
Explanation: As we can see in the output of the previous cell, each column should have 2919 entries, but several columns fall short of that, which indicates missing values. Some of them, such as PoolQC, MiscFeature and Alley, even have fewer than 500 entries. Later in the notebook we will look into how much predictive potential each feature has.
Now let's check some of the categorical features in the dataset.
End of explanation
"""
def clean(df):
df["Exterior2nd"] = df["Exterior2nd"].replace({"Brk Cmn": "BrkComm"})
# Some values of GarageYrBlt are corrupt, so we'll replace them
# with the year the house was built
df["GarageYrBlt"] = df["GarageYrBlt"].where(df.GarageYrBlt <= 2010, df.YearBuilt)
# Names beginning with numbers are awkward to work with
df.rename(columns={
"1stFlrSF": "FirstFlrSF",
"2ndFlrSF": "SecondFlrSF",
"3SsnPorch": "Threeseasonporch",
}, inplace=True,
)
return df
df.head()
"""
Explanation: Comparing these to description.txt, we can observe that there are some typos in the categories. For example 'Brk Cmn' should be 'BrkComm'. Let's start the data preprocessing with the cleaning of the data
Preprocessing the data
Clean the data
End of explanation
"""
# The numeric features are already encoded correctly (`float` for
# continuous, `int` for discrete), but the categoricals we'll need to
# do ourselves. Note in particular, that the `MSSubClass` feature is
# read as an `int` type, but is actually a (nominative) categorical.
# The nominative (unordered) categorical features
features_nom = ["MSSubClass", "MSZoning", "Street", "Alley", "LandContour", "LotConfig", "Neighborhood", "Condition1", "Condition2", "BldgType", "HouseStyle", "RoofStyle", "RoofMatl", "Exterior1st", "Exterior2nd", "MasVnrType", "Foundation", "Heating", "CentralAir", "GarageType", "MiscFeature", "SaleType", "SaleCondition"]
# Pandas calls the categories "levels"
five_levels = ["Po", "Fa", "TA", "Gd", "Ex"]
ten_levels = list(range(10))
ordered_levels = {
"OverallQual": ten_levels,
"OverallCond": ten_levels,
"ExterQual": five_levels,
"ExterCond": five_levels,
"BsmtQual": five_levels,
"BsmtCond": five_levels,
"HeatingQC": five_levels,
"KitchenQual": five_levels,
"FireplaceQu": five_levels,
"GarageQual": five_levels,
"GarageCond": five_levels,
"PoolQC": five_levels,
"LotShape": ["Reg", "IR1", "IR2", "IR3"],
"LandSlope": ["Sev", "Mod", "Gtl"],
"BsmtExposure": ["No", "Mn", "Av", "Gd"],
"BsmtFinType1": ["Unf", "LwQ", "Rec", "BLQ", "ALQ", "GLQ"],
"BsmtFinType2": ["Unf", "LwQ", "Rec", "BLQ", "ALQ", "GLQ"],
"Functional": ["Sal", "Sev", "Maj1", "Maj2", "Mod", "Min2", "Min1", "Typ"],
"GarageFinish": ["Unf", "RFn", "Fin"],
"PavedDrive": ["N", "P", "Y"],
"Utilities": ["NoSeWa", "NoSewr", "AllPub"],
"CentralAir": ["N", "Y"],
"Electrical": ["Mix", "FuseP", "FuseF", "FuseA", "SBrkr"],
"Fence": ["MnWw", "GdWo", "MnPrv", "GdPrv"],
}
# Add a None level for missing values
ordered_levels = {key: ["None"] + value for key, value in
ordered_levels.items()}
def encode(df):
# Nominal categories
for name in features_nom:
df[name] = df[name].astype("category")
# Add a None category for missing values
if "None" not in df[name].cat.categories:
df[name].cat.add_categories("None", inplace=True)
# Ordinal categories
for name, levels in ordered_levels.items():
df[name] = df[name].astype(CategoricalDtype(levels,
ordered=True))
return df
"""
Explanation: Encode the Statistical Data Type
Pandas has Python types corresponding to the standard statistical types (numeric, categorical, etc.). Encoding each feature with its correct type helps ensure each feature is treated appropriately by whatever functions we use, and makes it easier for us to apply transformations consistently
End of explanation
"""
def impute(df):
for name in df.select_dtypes("number"):
df[name] = df[name].fillna(0)
for name in df.select_dtypes("category"):
df[name] = df[name].fillna("None")
return df
"""
Explanation: Handle missing values
Handling missing values now will make the feature engineering go more smoothly. We'll impute 0 for missing numeric values and "None" for missing categorical values. You might like to experiment with other imputation strategies. In particular, you could try creating "missing value" indicators: 1 whenever a value was imputed and 0 otherwise.
End of explanation
"""
def make_mi_scores(X, y):
X = X.copy()
for colname in X.select_dtypes(["object", "category"]):
X[colname], _ = X[colname].factorize()
# All discrete features should now have integer dtypes
discrete_features = [pd.api.types.is_integer_dtype(t) for t in X.dtypes]
mi_scores = mutual_info_regression(X, y, discrete_features=discrete_features, random_state=0)
mi_scores = pd.Series(mi_scores, name="MI Scores", index=X.columns)
mi_scores = mi_scores.sort_values(ascending=False)
return mi_scores
def plot_mi_scores(scores):
scores = scores.sort_values(ascending=True)
width = np.arange(len(scores))
ticks = list(scores.index)
plt.barh(width, scores)
plt.yticks(width, ticks)
plt.title("Mutual Information Scores")
"""
Explanation: Creating Features
To understand the potential of each feature, we calculate a utility score for each one. The utility score helps us eliminate low-scoring features; training on the top features helps us build a better model.
End of explanation
"""
X = df_train.copy()
y = X.pop("SalePrice")
mi_scores = make_mi_scores(X, y)
mi_scores.head()
"""
Explanation: Let's analyse mutual information scores
End of explanation
"""
def drop_uninformative(df, mi_scores):
return df.loc[:, mi_scores > 0.0]
"""
Explanation: We will focus on the top scoring features and drop the features that have 0.0 score
End of explanation
"""
def label_encode(df):
X = df.copy()
for colname in X.select_dtypes(["category"]):
X[colname] = X[colname].cat.codes
return X
"""
Explanation: Label encoding
End of explanation
"""
def mathematical_transforms(df):
X = pd.DataFrame() # dataframe to hold new features
X["LivLotRatio"] = df.GrLivArea / df.LotArea
X["Spaciousness"] = (df.FirstFlrSF + df.SecondFlrSF) / df.TotRmsAbvGrd
return X
def interactions(df):
X = pd.get_dummies(df.BldgType, prefix="Bldg")
X = X.mul(df.GrLivArea, axis=0)
return X
def counts(df):
X = pd.DataFrame()
X["PorchTypes"] = df[[
"WoodDeckSF",
"OpenPorchSF",
"EnclosedPorch",
"Threeseasonporch",
"ScreenPorch",
]].gt(0.0).sum(axis=1)
return X
def break_down(df):
X = pd.DataFrame()
X["MSClass"] = df.MSSubClass.str.split("_", n=1, expand=True)[0]
return X
def group_transforms(df):
X = pd.DataFrame()
X["MedNhbdArea"] = df.groupby("Neighborhood")["GrLivArea"].transform("median")
return X
"""
Explanation: Defining functions for feature creation using pandas
End of explanation
"""
cluster_features = [
"LotArea",
"TotalBsmtSF",
"FirstFlrSF",
"SecondFlrSF",
"GrLivArea",
]
def cluster_labels(df, features, n_clusters=20):
X = df.copy()
X_scaled = X.loc[:, features]
X_scaled = (X_scaled - X_scaled.mean(axis=0)) / X_scaled.std(axis=0)
kmeans = KMeans(n_clusters=n_clusters, n_init=50, random_state=0)
X_new = pd.DataFrame()
X_new["Cluster"] = kmeans.fit_predict(X_scaled)
return X_new
def cluster_distance(df, features, n_clusters=20):
X = df.copy()
X_scaled = X.loc[:, features]
X_scaled = (X_scaled - X_scaled.mean(axis=0)) / X_scaled.std(axis=0)
kmeans = KMeans(n_clusters=20, n_init=50, random_state=0)
X_cd = kmeans.fit_transform(X_scaled)
# Label features and join to dataset
X_cd = pd.DataFrame(
X_cd, columns=[f"Centroid_{i}" for i in range(X_cd.shape[1])]
)
return X_cd
"""
Explanation: Defining functions for feature creation using k-Means Clustering algorithm
End of explanation
"""
def apply_pca(X, standardize=True):
# Standardize
if standardize:
X = (X - X.mean(axis=0)) / X.std(axis=0)
# Create principal components
pca = PCA()
X_pca = pca.fit_transform(X)
# Convert to dataframe
component_names = [f"PC{i+1}" for i in range(X_pca.shape[1])]
X_pca = pd.DataFrame(X_pca, columns=component_names)
# Create loadings
loadings = pd.DataFrame(
pca.components_.T, # transpose the matrix of loadings
columns=component_names, # so the columns are the principal components
index=X.columns, # and the rows are the original features
)
return pca, X_pca, loadings
def pca_inspired(df):
X = pd.DataFrame()
X["Feature1"] = df.GrLivArea + df.TotalBsmtSF
X["Feature2"] = df.YearRemodAdd * df.TotalBsmtSF
return X
def pca_components(df, features):
X = df.loc[:, features]
_, X_pca, _ = apply_pca(X)
return X_pca
pca_features = [
"GarageArea",
"YearRemodAdd",
"TotalBsmtSF",
"GrLivArea",
]
def plot_variance(pca, width=8, dpi=100):
# Create figure
fig, axs = plt.subplots(1, 2)
n = pca.n_components_
grid = np.arange(1, n + 1)
# Explained variance
evr = pca.explained_variance_ratio_
axs[0].bar(grid, evr)
axs[0].set(
xlabel="Component", title="% Explained Variance", ylim=(0.0, 1.0)
)
# Cumulative Variance
cv = np.cumsum(evr)
axs[1].plot(np.r_[0, grid], np.r_[0, cv], "o-")
axs[1].set(
xlabel="Component", title="% Cumulative Variance", ylim=(0.0, 1.0)
)
# Set up figure
fig.set(figwidth=8, dpi=100)
return axs
"""
Explanation: Defining functions for feature creation using PCA algorithm
End of explanation
"""
import seaborn as sns
def corrplot(df, method="pearson", annot=True, **kwargs):
sns.clustermap(
df.corr(method),
vmin=-1.0,
vmax=1.0,
cmap="icefire",
method="complete",
annot=annot,
**kwargs,
)
corrplot(df_train, annot=None)
"""
Explanation: Correlation matrix
End of explanation
"""
class CrossFoldEncoder:
def __init__(self, encoder, **kwargs):
self.encoder_ = encoder
self.kwargs_ = kwargs # keyword arguments for the encoder
self.cv_ = KFold(n_splits=5)
# Fit an encoder on one split and transform the feature on the
# other. Iterating over the splits in all folds gives a complete
# transformation. We also now have one trained encoder on each
# fold.
def fit_transform(self, X, y, cols):
self.fitted_encoders_ = []
self.cols_ = cols
X_encoded = []
for idx_encode, idx_train in self.cv_.split(X):
fitted_encoder = self.encoder_(cols=cols, **self.kwargs_)
fitted_encoder.fit(
X.iloc[idx_encode, :], y.iloc[idx_encode],
)
X_encoded.append(fitted_encoder.transform(X.iloc[idx_train, :])[cols])
self.fitted_encoders_.append(fitted_encoder)
X_encoded = pd.concat(X_encoded)
X_encoded.columns = [name + "_encoded" for name in X_encoded.columns]
return X_encoded
# To transform the test data, average the encodings learned from
# each fold.
def transform(self, X):
from functools import reduce
X_encoded_list = []
for fitted_encoder in self.fitted_encoders_:
X_encoded = fitted_encoder.transform(X)
X_encoded_list.append(X_encoded[self.cols_])
X_encoded = reduce(
lambda x, y: x.add(y, fill_value=0), X_encoded_list
) / len(X_encoded_list)
X_encoded.columns = [name + "_encoded" for name in X_encoded.columns]
return X_encoded
"""
Explanation: Target Encoding
Using a separate holdout set to create a target encoding would waste training data. Instead, we can use a trick similar to cross-validation:
1. Split the data into folds, each fold having two splits of the dataset.
2. Train the encoder on one split but transform the values of the other.
3. Repeat for all the splits.
This way, training and transformation always take place on independent sets of data, just like when you use a holdout set but without any data going to waste.
End of explanation
"""
def create_features(df, df_test=None):
X = df.copy()
y = X.pop("SalePrice")
mi_scores = make_mi_scores(X, y)
# Combine splits if test data is given
#
# If we're creating features for test set predictions, we should
# use all the data we have available. After creating our features,
# we'll recreate the splits.
if df_test is not None:
X_test = df_test.copy()
X_test.pop("SalePrice")
X = pd.concat([X, X_test])
# Mutual Information
X = drop_uninformative(X, mi_scores)
X.info()
    # Uncomment individual transformations below to experiment with
    # different feature combinations
# Transformations
X = X.join(mathematical_transforms(X))
X = X.join(interactions(X))
X = X.join(counts(X))
# X = X.join(break_down(X))
X = X.join(group_transforms(X))
# Clustering
# X = X.join(cluster_labels(X, cluster_features, n_clusters=20))
# X = X.join(cluster_distance(X, cluster_features, n_clusters=20))
# PCA
X = X.join(pca_inspired(X))
# X = X.join(pca_components(X, pca_features))
# X = X.join(indicate_outliers(X))
X = label_encode(X)
X.info()
# Reform splits
if df_test is not None:
X_test = X.loc[df_test.index, :]
X.drop(df_test.index, inplace=True)
# # Target Encoder
# encoder = CrossFoldEncoder(MEstimateEncoder, m=1)
# X = X.join(encoder.fit_transform(X, y, cols=["MSSubClass"]))
# if df_test is not None:
# X_test = X_test.join(encoder.transform(X_test))
if df_test is not None:
return X, X_test
else:
return X
"""
Explanation: Creating the final feature set
Now let's combine everything. Putting the transformations into separate functions makes it easier to experiment with various combinations.
End of explanation
"""
def score_dataset(X, y, model=XGBRegressor()):
# Label encoding for categoricals
#
# Label encoding is good for XGBoost and RandomForest, but one-hot
# would be better for models like Lasso or Ridge. The `cat.codes`
# attribute holds the category levels.
for colname in X.select_dtypes(["category"]):
X[colname] = X[colname].cat.codes
# Metric for Housing competition is RMSLE (Root Mean Squared Log Error)
log_y = np.log(y)
score = cross_val_score(
model, X, log_y, cv=5, scoring="neg_mean_squared_error",
)
score = -1 * score.mean()
score = np.sqrt(score)
return score
"""
Explanation: Build Models
End of explanation
"""
X = df_train.copy()
y = X.pop("SalePrice")
baseline_score = score_dataset(X, y)
print(f"Baseline score: {baseline_score:.5f} RMSLE")
"""
Explanation: Baseline training model
As we are trying to understand which features produce a better model, we first establish a baseline model to act as a reference point for comparing models built with different combinations of features
End of explanation
"""
X_train = create_features(df_train)
y_train = df_train.loc[:, "SalePrice"]
feature_model_score = score_dataset(X_train, y_train)
print(f"Feature model score: {feature_model_score:.5f} RMSLE")
"""
Explanation: Features training model
End of explanation
"""
X_train, X_test = create_features(df_train, df_test)
y_train = df_train.loc[:, "SalePrice"]
X_train.head()
xgb_params = dict(
max_depth=6, # maximum depth of each tree - try 2 to 10
learning_rate=0.01, # effect of each tree - try 0.0001 to 0.1
n_estimators=1000, # number of trees (that is, boosting rounds) - try 1000 to 8000
min_child_weight=1, # minimum number of houses in a leaf - try 1 to 10
colsample_bytree=0.7, # fraction of features (columns) per tree - try 0.2 to 1.0
subsample=0.7, # fraction of instances (rows) per tree - try 0.2 to 1.0
reg_alpha=0.5, # L1 regularization (like LASSO) - try 0.0 to 10.0
reg_lambda=1.0, # L2 regularization (like Ridge) - try 0.0 to 10.0
num_parallel_tree=1, # set > 1 for boosted random forests
)
xgb = XGBRegressor(**xgb_params)
# XGB minimizes MSE, but competition loss is RMSLE
# So, we need to log-transform y to train and exp-transform the predictions
xgb.fit(X_train, np.log(y_train))
predictions = np.exp(xgb.predict(X_test))
print(predictions)
"""
Explanation: As we can see, feature engineering has helped us improve the score
Final Model
End of explanation
"""
output = pd.DataFrame({'Id': X_test.index, 'SalePrice': predictions})
output.to_csv('data/my_submission.csv', index=False)
print("Your submission was successfully saved!")
"""
Explanation: Submission
End of explanation
"""
|
cathalmccabe/PYNQ | docs/source/getting_started/python_environment.ipynb | bsd-3-clause | """Factors-and-primes functions.
Find factors or primes of integers, int ranges and int lists
and sets of integers with most factors in a given integer interval
"""
def factorize(n):
"""Calculate all factors of integer n.
"""
factors = []
if isinstance(n, int) and n > 0:
if n == 1:
factors.append(n)
return factors
else:
for x in range(1, int(n**0.5)+1):
if n % x == 0:
factors.append(x)
factors.append(n//x)
return sorted(set(factors))
else:
print('factorize ONLY computes with one integer argument > 0')
def primes_between(interval_min, interval_max):
"""Find all primes in the interval.
"""
primes = []
if (isinstance(interval_min, int) and interval_min > 0 and
isinstance(interval_max, int) and interval_max > interval_min):
if interval_min == 1:
primes = [1]
for i in range(interval_min, interval_max):
if len(factorize(i)) == 2:
primes.append(i)
return sorted(primes)
else:
print('primes_between ONLY computes over the specified range.')
def primes_in(integer_list):
"""Calculate all unique prime numbers.
"""
primes = []
try:
for i in (integer_list):
if len(factorize(i)) == 2:
primes.append(i)
return sorted(set(primes))
except TypeError:
print('primes_in ONLY computes over lists of integers.')
def get_ints_with_most_factors(interval_min, interval_max):
"""Finds the integers with the most factors.
"""
max_no_of_factors = 1
all_ints_with_most_factors = []
# Find the lowest number with most factors between i_min and i_max
if interval_check(interval_min, interval_max):
for i in range(interval_min, interval_max):
factors_of_i = factorize(i)
no_of_factors = len(factors_of_i)
if no_of_factors > max_no_of_factors:
max_no_of_factors = no_of_factors
results = (i, max_no_of_factors, factors_of_i,\
primes_in(factors_of_i))
all_ints_with_most_factors.append(results)
# Find any larger numbers with an equal number of factors
for i in range(all_ints_with_most_factors[0][0]+1, interval_max):
factors_of_i = factorize(i)
no_of_factors = len(factors_of_i)
if no_of_factors == max_no_of_factors:
results = (i, max_no_of_factors, factors_of_i, \
primes_in(factors_of_i))
all_ints_with_most_factors.append(results)
return all_ints_with_most_factors
else:
print_error_msg()
def interval_check(interval_min, interval_max):
"""Check type and range of integer interval.
"""
if (isinstance(interval_min, int) and interval_min > 0 and
isinstance(interval_max, int) and interval_max > interval_min):
return True
else:
return False
def print_error_msg():
"""Print invalid integer interval error message.
"""
print('ints_with_most_factors ONLY computes over integer intervals where'
' interval_min <= int_with_most_factors < interval_max and'
' interval_min >= 1')
"""
Explanation: Python Environment
We show here some examples of how to run Python on a PYNQ platform. Python 3.8
runs exclusively on the ARM processor.
The first example, based on calculating the factors and primes of integers,
gives us a sense of the performance available when running Python on an ARM
processor under Linux.
In the second set of examples, we leverage Python's numpy package and asyncio
module to demonstrate how Python can communicate
with programmable logic.
Factors and Primes Example
Code is provided in the cell below for a function to calculate factors and
primes. It contains some sample functions to calculate the factors and primes
of integers. We will use three functions from the factors_and_primes module
to demonstrate Python programming.
End of explanation
"""
factorize(1066)
"""
Explanation: Next we will call the factorize() function to calculate the factors of an integer.
End of explanation
"""
len(primes_between(1, 1066))
"""
Explanation: The primes_between() function can tell us how many prime numbers there are in an
integer range. Let’s try it for the interval 1 through 1066. We can also use one
of Python’s built-in methods len() to count them all.
End of explanation
"""
primes_1066 = primes_between(1, 1066)
primes_1066_average = sum(primes_1066) / len(primes_1066)
primes_1066_average
"""
Explanation: Additionally, we can combine len() with another built-in method, sum(), to calculate
the average of the 180 prime numbers.
End of explanation
"""
primes_1066_ends3 = [x for x in primes_between(1, 1066)
if str(x).endswith('3')]
print('{}'.format(primes_1066_ends3))
"""
Explanation: This result makes sense intuitively because prime numbers are known to become less
frequent for larger number intervals. These examples demonstrate how Python treats
functions as first-class objects so that functions may be passed as parameters to
other functions. This is a key property of functional programming and demonstrates
the power of Python.
In the next code snippet, we can use list comprehensions (a ‘Pythonic’ form of the
map-filter-reduce template) to ‘mine’ the primes below 1066 and find those primes that
end in the digit ‘3’.
End of explanation
"""
len(primes_1066_ends3) / len(primes_1066)
"""
Explanation: This code tells Python to first convert each prime between 1 and 1066 to a string and
then to return those numbers whose string representation ends with the digit ‘3’. It
uses the built-in str() and endswith() methods to test each prime for inclusion in the list.
And because we really want to know what fraction of the 180 primes of 1066 end in a
‘3’, we can calculate ...
End of explanation
"""
import numpy as np
import pynq
def get_pynq_buffer(shape, dtype):
""" Simple function to call PYNQ's memory allocator with numpy attributes"""
try:
return pynq.allocate(shape, dtype)
except RuntimeError:
print('Load an overlay to allocate memory')
return
"""
Explanation: These examples demonstrate how Python is a modern, multi-paradigmatic language. More
simply, it continually integrates the best features of other leading languages, including
functional programming constructs. Consider how many lines of code you would need to
implement the list comprehension above in C and you get an appreciation of the power
of productivity-layer languages. Higher levels of programming abstraction really do
result in higher programmer productivity!
Numpy Data Movement
Code in the cells below show a very simple data movement code snippet that can be used
to share data with programmable logic. We leverage the Python numpy package to
manipulate the buffer on the ARM processors and can then send a buffer pointer to
programmable logic for sharing data.
An overlay needs to be loaded onto the programmable logic in order to allocate memory
with pynq.allocate. This buffer can be manipulated as a numpy array and contains
a buffer pointer attribute. That pointer can then can be passed to programmable
logic hardware.
End of explanation
"""
buffer = get_pynq_buffer(shape=(4,4), dtype=np.uint32)
buffer
"""
Explanation: With the simple wrapper above, we can get access to memory that can be shared by both
numpy methods and programmable logic.
End of explanation
"""
isinstance(buffer, np.ndarray)
"""
Explanation: To double-check we show that the buffer is indeed a numpy array.
End of explanation
"""
try:
pl_buffer_address = hex(buffer.physical_address)
pl_buffer_address
except AttributeError:
print('Load an overlay to allocate memory')
"""
Explanation: To send the buffer pointer to programmable logic, we use its physical address which
is what programmable logic would need to communicate using this shared buffer.
End of explanation
"""
import asyncio
import random
import time
# Coroutine
async def wake_up(delay):
'''A function that will yield to asyncio.sleep() for a few seconds
and then resume, having preserved its state while suspended
'''
start_time = time.time()
print(f'The time is: {time.strftime("%I:%M:%S")}')
print(f"Suspending coroutine 'wake_up' at 'await` statement\n")
await asyncio.sleep(delay)
print(f"Resuming coroutine 'wake_up' from 'await` statement")
end_time = time.time()
sleep_time = end_time - start_time
print(f"'wake-up' was suspended for precisely: {sleep_time} seconds")
"""
Explanation: In this short example, we showed a simple allocation of a numpy array that is now ready
to be shared with programmable logic devices. With numpy arrays that are accessible to
programmable logic, we can quickly manipulate and move data across software and hardware.
Asyncio Integration
PYNQ also leverages the Python asyncio module for communicating with programmable logic
devices through events (namely interrupts).
A Python program running on PYNQ can use the asyncio library to manage multiple IO-bound
tasks asynchronously, thereby avoiding any blocking caused by waiting for responses from
slower IO subsystems. Instead, the program can continue to execute other tasks that are
ready to run. When the previously-busy tasks are ready to resume, they will be executed
in turn, and the cycle is repeated.
Again, since we won't assume what interrupt enabled devices are loaded on programmable
logic, we will show here a software-only asyncio example that uses asyncio's
sleep method.
End of explanation
"""
delay = random.randint(1,5)
my_event_loop = asyncio.get_event_loop()
try:
print("Creating task for coroutine 'wake_up'\n")
wake_up_task = my_event_loop.create_task(wake_up(delay))
my_event_loop.run_until_complete(wake_up_task)
except RuntimeError as err:
    print(f'{err} - restart the Jupyter kernel to re-run the event loop')
"""
Explanation: With the wake_up function defined, we then can add a new task to the event loop.
End of explanation
"""