When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.)
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights. | train_and_test(True, 2, tf.nn.relu) | batch-norm/Batch_Normalization_Lesson.ipynb | JasonNK/udacity-dlnd | mit |
When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. This time, however, the network appears to make some progress at first, but it quickly breaks down and stops learning.
Note: Both of the above examples use extremely bad starting weights, along with learning rates that are too high. While we've shown batch normalization can overcome bad values, we don't mean to encourage actually using them. The examples in this notebook are meant to show that batch normalization can help your networks train better. But these last two examples should remind you that you still want to try to use good network design choices and reasonable starting weights. It should also remind you that the results of each attempt to train a network are a bit random, even when using otherwise identical architectures.
Batch Normalization: A Detailed Look<a id='implementation_2'></a>
The layer created by tf.layers.batch_normalization handles all the details of implementing batch normalization. Many students will be fine just using that and won't care about what's happening at the lower levels. However, some students may want to explore the details, so here is a short explanation of what's really happening, starting with the equations you're likely to come across if you ever read about batch normalization.
In order to normalize the values, we first need to find the average value for the batch. If you look at the code, you can see that this is not the average value of the batch inputs, but the average value coming out of any particular layer before we pass it through its non-linear activation function and then feed it as an input to the next layer.
We represent the average as $\mu_B$, which is simply the sum of all of the values $x_i$ divided by the number of values, $m$
$$
\mu_B \leftarrow \frac{1}{m}\sum_{i=1}^m x_i
$$
We then need to calculate the variance, or mean squared deviation, represented as $\sigma_{B}^{2}$. If you aren't familiar with statistics, that simply means for each value $x_i$, we subtract the average value (calculated earlier as $\mu_B$), which gives us what's called the "deviation" for that value. We square the result to get the squared deviation. Sum up the results of doing that for each of the values, then divide by the number of values, again $m$, to get the average, or mean, squared deviation.
$$
\sigma_{B}^{2} \leftarrow \frac{1}{m}\sum_{i=1}^m (x_i - \mu_B)^2
$$
Once we have the mean and variance, we can use them to normalize the values with the following equation. For each value, it subtracts the mean and divides by the (almost) standard deviation. (You've probably heard of standard deviation many times, but if you have not studied statistics you might not know that the standard deviation is actually the square root of the mean squared deviation.)
$$
\hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}}
$$
Above, we said "(almost) standard deviation". That's because the real standard deviation for the batch is calculated by $\sqrt{\sigma_{B}^{2}}$, but the above formula adds the term epsilon, $\epsilon$, before taking the square root. The epsilon can be any small, positive constant - in our code we use the value 0.001. It is there partially to make sure we don't try to divide by zero, but it also acts to increase the variance slightly for each batch.
Why increase the variance? Statistically, this makes sense because even though we are normalizing one batch at a time, we are also trying to estimate the population distribution – the total training set, which is itself an estimate of the larger population of inputs your network wants to handle. The variance of a population is higher than the variance for any sample taken from that population, so increasing the variance a little bit for each batch helps take that into account.
At this point, we have a normalized value, represented as $\hat{x_i}$. But rather than use it directly, we multiply it by a gamma value, $\gamma$, and then add a beta value, $\beta$. Both $\gamma$ and $\beta$ are learnable parameters of the network and serve to scale and shift the normalized value, respectively. Because they are learnable just like weights, they give your network some extra knobs to tweak during training to help it learn the function it is trying to approximate.
$$
y_i \leftarrow \gamma \hat{x_i} + \beta
$$
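Taken together, the four equations above describe the full batch-normalization forward pass. Here is a framework-free sketch of them in plain NumPy (an illustration only, not the notebook's TensorFlow implementation; the function name is ours):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, epsilon=1e-3):
    """Apply the batch-norm equations to a batch `x` of shape (m, features)."""
    mu = x.mean(axis=0)                        # mu_B
    var = x.var(axis=0)                        # sigma_B^2, the mean squared deviation
    x_hat = (x - mu) / np.sqrt(var + epsilon)  # normalized values
    return gamma * x_hat + beta                # scaled by gamma, shifted by beta

# With gamma = 1 and beta = 0, the output is just the normalized batch,
# so it has (approximately) zero mean:
batch = np.array([[1.0], [2.0], [3.0], [4.0]])
out = batch_norm_forward(batch, gamma=np.ones(1), beta=np.zeros(1))
```

Note that because of the epsilon term, the output's variance is slightly less than 1 – exactly the "almost standard deviation" effect described above.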
We now have the final batch-normalized output of our layer, which we would then pass to a non-linear activation function like sigmoid, tanh, ReLU, Leaky ReLU, etc. In the original batch normalization paper (linked in the beginning of this notebook), they mention that there might be cases when you'd want to perform the batch normalization after the non-linearity instead of before, but it is difficult to find any uses like that in practice.
In NeuralNet's implementation of fully_connected, all of this math is hidden inside the following line, where linear_output serves as the $x_i$ from the equations:
python
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
The next section shows you how to implement the math directly.
Batch normalization without the tf.layers package
Our implementation of batch normalization in NeuralNet uses the high-level abstraction tf.layers.batch_normalization, found in TensorFlow's tf.layers package.
However, if you would like to implement batch normalization at a lower level, the following code shows you how.
It uses tf.nn.batch_normalization from TensorFlow's neural net (nn) package.
1) You can replace the fully_connected function in the NeuralNet class with the below code and everything in NeuralNet will still work like it did before. | def fully_connected(self, layer_in, initial_weights, activation_fn=None):
"""
Creates a standard, fully connected layer. Its number of inputs and outputs will be
defined by the shape of `initial_weights`, and its starting weight values will be
taken directly from that same parameter. If `self.use_batch_norm` is True, this
layer will include batch normalization, otherwise it will not.
:param layer_in: Tensor
The Tensor that feeds into this layer. It's either the input to the network or the output
of a previous layer.
:param initial_weights: NumPy array or Tensor
Initial values for this layer's weights. The shape defines the number of nodes in the layer.
e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256
outputs.
:param activation_fn: Callable or None (default None)
The non-linearity used for the output of the layer. If None, this layer will not include
batch normalization, regardless of the value of `self.use_batch_norm`.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
"""
if self.use_batch_norm and activation_fn:
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
weights = tf.Variable(initial_weights)
linear_output = tf.matmul(layer_in, weights)
num_out_nodes = initial_weights.shape[-1]
# Batch normalization adds additional trainable variables:
# gamma (for scaling) and beta (for shifting).
gamma = tf.Variable(tf.ones([num_out_nodes]))
beta = tf.Variable(tf.zeros([num_out_nodes]))
# These variables will store the mean and variance for this layer over the entire training set,
# which we assume represents the general population distribution.
# By setting `trainable=False`, we tell TensorFlow not to modify these variables during
# back propagation. Instead, we will assign values to these variables ourselves.
pop_mean = tf.Variable(tf.zeros([num_out_nodes]), trainable=False)
pop_variance = tf.Variable(tf.ones([num_out_nodes]), trainable=False)
# Batch normalization requires a small constant epsilon, used to ensure we don't divide by zero.
# This is the default value TensorFlow uses.
epsilon = 1e-3
def batch_norm_training():
# Calculate the mean and variance for the data coming out of this layer's linear-combination step.
# The [0] defines an array of axes to calculate over.
batch_mean, batch_variance = tf.nn.moments(linear_output, [0])
# Calculate a moving average of the training data's mean and variance while training.
# These will be used during inference.
# Decay should be some number less than 1. tf.layers.batch_normalization uses the parameter
# "momentum" to accomplish this and defaults it to 0.99
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
# The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean'
# and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.
# This is necessary because those two operations are not actually in the graph
# connecting the linear_output and batch_normalization layers,
# so TensorFlow would otherwise just skip them.
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
# During inference, use our estimated population mean and variance to normalize the layer
return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)
# Use `tf.cond` as a sort of if-check. When self.is_training is True, TensorFlow will execute
# the operation returned from `batch_norm_training`; otherwise it will execute the graph
# operation returned from `batch_norm_inference`.
batch_normalized_output = tf.cond(self.is_training, batch_norm_training, batch_norm_inference)
# Pass the batch-normalized layer output through the activation function.
# The literature states there may be cases where you want to perform the batch normalization *after*
# the activation function, but it is difficult to find any uses of that in practice.
return activation_fn(batch_normalized_output)
else:
# When not using batch normalization, create a standard layer that multiplies
# the inputs and weights, adds a bias, and optionally passes the result
# through an activation function.
weights = tf.Variable(initial_weights)
biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))
linear_output = tf.add(tf.matmul(layer_in, weights), biases)
return linear_output if not activation_fn else activation_fn(linear_output)
| batch-norm/Batch_Normalization_Lesson.ipynb | JasonNK/udacity-dlnd | mit |
This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points:
It explicitly creates variables to store gamma, beta, and the population mean and variance. These were all handled for us in the previous version of the function.
It initializes gamma to one and beta to zero, so they start out having no effect in this calculation: $y_i \leftarrow \gamma \hat{x_i} + \beta$. However, during training the network learns the best values for these variables using back propagation, just like networks normally do with weights.
Unlike gamma and beta, the variables for population mean and variance are marked as untrainable. That tells TensorFlow not to modify them during back propagation. Instead, the lines that call tf.assign are used to update these variables directly.
TensorFlow won't automatically run the tf.assign operations during training because it only evaluates operations that are required based on the connections it finds in the graph. To get around that, we add this line: with tf.control_dependencies([train_mean, train_variance]): before we run the normalization operation. This tells TensorFlow it needs to run those operations before running anything inside the with block.
The actual normalization math is still mostly hidden from us; this time it lives inside tf.nn.batch_normalization.
tf.nn.batch_normalization does not have a training parameter like tf.layers.batch_normalization did. However, we still need to handle training and inference differently, so we run different code in each case using the tf.cond operation.
We use the tf.nn.moments function to calculate the batch mean and variance.
2) The current version of the train function in NeuralNet will work fine with this new version of fully_connected. However, it uses these lines to ensure population statistics are updated when using batch normalization:
python
if self.use_batch_norm:
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
else:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
Our new version of fully_connected handles updating the population statistics directly. That means you can also simplify your code by replacing the above if/else condition with just this line:
python
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
3) And just in case you want to implement every detail from scratch, you can replace this line in batch_norm_training:
python
return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)
with these lines:
python
normalized_linear_output = (linear_output - batch_mean) / tf.sqrt(batch_variance + epsilon)
return gamma * normalized_linear_output + beta
And replace this line in batch_norm_inference:
python
return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)
with these lines:
python
normalized_linear_output = (linear_output - pop_mean) / tf.sqrt(pop_variance + epsilon)
return gamma * normalized_linear_output + beta
As you can see in each of the above substitutions, the two lines of replacement code simply implement the following two equations directly. The first line calculates the following equation, with linear_output representing $x_i$ and normalized_linear_output representing $\hat{x_i}$:
$$
\hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}}
$$
And the second line is a direct translation of the following equation:
$$
y_i \leftarrow \gamma \hat{x_i} + \beta
$$
We still use the tf.nn.moments operation to implement the other two equations from earlier – the ones that calculate the batch mean and variance used in the normalization step. If you really wanted to do everything from scratch, you could replace that line, too, but we'll leave that to you.
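If you did want to replace tf.nn.moments as well, the remaining two equations are just a mean and a mean squared deviation computed along the batch axis. A sketch in plain NumPy (for illustration; inside the TensorFlow graph you would use ops such as tf.reduce_mean instead):

```python
import numpy as np

def moments(linear_output):
    """Batch mean and variance along axis 0, mirroring tf.nn.moments(x, [0])."""
    mu = linear_output.mean(axis=0)                  # mu_B
    var = ((linear_output - mu) ** 2).mean(axis=0)   # sigma_B^2
    return mu, var

# Two examples, two output nodes:
mu, var = moments(np.array([[0.0, 2.0], [2.0, 4.0]]))
# mu  -> [1.0, 3.0]
# var -> [1.0, 1.0]
```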
Why the difference between training and inference?
In the original function that uses tf.layers.batch_normalization, we tell the layer whether or not the network is training by passing a value for its training parameter, like so:
python
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
And that forces us to provide a value for self.is_training in our feed_dict, like we do in this example from NeuralNet's train function:
python
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
If you looked at the low level implementation, you probably noticed that, just like with tf.layers.batch_normalization, we need to do slightly different things during training and inference. But why is that?
First, let's look at what happens when we don't. The following function is similar to train_and_test from earlier, but this time we are only testing one network and instead of plotting its accuracy, we perform 200 predictions on test inputs, 1 input at a time. We can use the test_training_accuracy parameter to test the network in training or inference modes (the equivalent of passing True or False to the feed_dict for is_training). | def batch_norm_test(test_training_accuracy):
"""
:param test_training_accuracy: bool
If True, perform inference with batch normalization using batch mean and variance;
if False, perform inference with batch normalization using estimated population mean and variance.
"""
weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
]
tf.reset_default_graph()
# Train the model
bn = NeuralNet(weights, tf.nn.relu, True)
# First train the network
with tf.Session() as sess:
tf.global_variables_initializer().run()
bn.train(sess, 0.01, 2000, 2000)
bn.test(sess, test_training_accuracy=test_training_accuracy, include_individual_predictions=True) | batch-norm/Batch_Normalization_Lesson.ipynb | JasonNK/udacity-dlnd | mit |
In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training. | batch_norm_test(True) | batch-norm/Batch_Normalization_Lesson.ipynb | JasonNK/udacity-dlnd | mit |
As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The "batches" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because it's the value the network produces when it applies its learned weights to zeros at every layer.
Note: If you re-run that cell, you might get a different value from what we showed. That's because the specific weights the network learns will be different every time. But whatever value it is, it should be the same for all 200 predictions.
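You can see this failure mode with a little arithmetic: a "batch" of one always normalizes to zero, whatever the input value, because the input equals the batch mean. A quick NumPy sketch (illustrative only, not part of the notebook's code):

```python
import numpy as np

def normalize(batch, epsilon=1e-3):
    """Normalize a batch using its own mean and variance."""
    mu = batch.mean(axis=0)
    var = batch.var(axis=0)   # zero for a single-element batch
    return (batch - mu) / np.sqrt(var + epsilon)

# Any single input normalizes to exactly zero:
single = normalize(np.array([[42.0]]))
# single -> [[0.]]
```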
To overcome this problem, the network does not just normalize the batch at each layer. It also maintains an estimate of the mean and variance for the entire population. So when we perform inference, instead of letting it "normalize" all the values using their own means and variance, it uses the estimates of the population mean and variance that it calculated while training.
So in the following example, we pass False for test_training_accuracy, which tells the network that we want to perform inference with the population statistics it calculated during training. | batch_norm_test(False) | batch-norm/Batch_Normalization_Lesson.ipynb | JasonNK/udacity-dlnd | mit |
Now let's add a mesh dataset at a few different times so that we can see how the potentials affect the surfaces of the stars. | b.add_dataset('mesh', times=np.linspace(0,1,11), dataset='mesh01') | 2.3/tutorials/requiv.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
Relevant Parameters
The 'requiv' parameter defines the stellar surface to have a constant volume of 4./3 pi requiv^3. | print(b['requiv@component']) | 2.3/tutorials/requiv.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
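The relation between requiv and the enclosed volume is easy to check by hand. A short plain-Python sketch (the function names here are illustrative, not PHOEBE API):

```python
import math

def volume_from_requiv(requiv):
    """Volume of the equivalent sphere: (4/3) * pi * requiv**3."""
    return 4.0 / 3.0 * math.pi * requiv ** 3

def requiv_from_volume(volume):
    """Invert the relation to recover requiv from a known volume."""
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

# Round-tripping recovers the input radius:
r = requiv_from_volume(volume_from_requiv(1.5))  # -> 1.5
```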
Critical Potentials and System Checks
Additionally, for each detached component, there is an requiv_max Parameter which shows the critical value at which the Roche surface will overflow. Setting requiv to a larger value will fail system checks and raise a warning. | print(b['requiv_max@primary@component'])
print(b['requiv_max@primary@constraint'])
b.set_value('requiv@primary@component', 3) | 2.3/tutorials/requiv.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
At this time, if you were to call run_compute, an error would be thrown. An error isn't immediately thrown when setting requiv, however, since the overflow can be rectified by changing any of the other relevant parameters. For instance, let's change sma to be large enough to account for this value of requiv and you'll see that the error does not occur again. | b.set_value('sma@binary@component', 10) | 2.3/tutorials/requiv.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
These logger warnings are useful when running phoebe interactively, but in a script it's also handy to be able to check whether the system is currently computable /before/ running run_compute.
This can be done by calling run_checks which returns a boolean (whether the system passes all checks) and a message (a string describing the first failed check). | print(b.run_checks())
b.set_value('sma@binary@component', 5)
print(b.run_checks()) | 2.3/tutorials/requiv.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
CIFAR-10 Data Loading and Preprocessing | # Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise.
num_training = 49000
num_validation = 1000
num_test = 1000
# Our validation set will be num_validation points from the original
# training set.
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
# Our training set will be the first num_train points from the original
# training set.
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
# We use the first num_test points of the original test set as our
# test set.
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
# As a sanity check, print out the shapes of the data
print 'Training data shape: ', X_train.shape
print 'Validation data shape: ', X_val.shape
print 'Test data shape: ', X_test.shape
# Preprocessing: subtract the mean image
# first: compute the image mean based on the training data
mean_image = np.mean(X_train, axis=0)
print mean_image[:10] # print a few of the elements
plt.figure(figsize=(4,4))
plt.imshow(mean_image.reshape((32,32,3)).astype('uint8')) # visualize the mean image
# second: subtract the mean image from train and test data
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# third: append the bias dimension of ones (i.e. bias trick) so that our SVM
# only has to worry about optimizing a single weight matrix W.
# Also, lets transform both data matrices so that each image is a column.
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))]).T
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))]).T
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))]).T
print X_train.shape, X_val.shape, X_test.shape | assignment1/svm.ipynb | srippa/nn_deep | mit |
SVM Classifier
Your code for this section will all be written inside cs231n/classifiers/linear_svm.py.
As you can see, we have prefilled the function compute_loss_naive which uses for loops to evaluate the multiclass SVM loss function. | # Evaluate the naive implementation of the loss we provided for you:
from cs231n.classifiers.linear_svm import svm_loss_naive
import time
# generate a random SVM weight matrix of small numbers
W = np.random.randn(10, 3073) * 0.0001
loss, grad = svm_loss_naive(W, X_train, y_train, 0.00001)
print 'loss: %f' % (loss, ) | assignment1/svm.ipynb | srippa/nn_deep | mit |
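The loss being computed has a simple closed form: for each example, sum max(0, s_j - s_y + Δ) over the incorrect classes, then average over examples. As a minimal vectorized NumPy sketch of that multiclass hinge loss (for intuition only – the assignment's svm_loss_naive has its own signature and also adds a regularization term):

```python
import numpy as np

def multiclass_hinge_loss(scores, y, delta=1.0):
    """Average multiclass SVM loss for `scores` of shape (num_classes, num_examples)."""
    num_examples = scores.shape[1]
    correct = scores[y, np.arange(num_examples)]       # score of each true class
    margins = np.maximum(0, scores - correct + delta)  # hinge for every class
    margins[y, np.arange(num_examples)] = 0            # true class contributes nothing
    return margins.sum() / num_examples

# 3 classes, 2 examples (columns are examples, as with W.dot(X) above):
scores = np.array([[3.2, 1.3],
                   [5.1, 4.9],
                   [-1.7, 2.0]])
y = np.array([0, 1])
loss = multiclass_hinge_loss(scores, y)  # -> 1.45
```

For example 0, only class 1 violates the margin (5.1 - 3.2 + 1 = 2.9); example 1 has no violations, so the average loss is 2.9 / 2 = 1.45.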
The grad returned from the function above is right now all zero. Derive the gradient for the SVM cost function and implement it inline inside the function svm_loss_naive. You will find it helpful to interleave your new code inside the existing function.
To check that you have implemented the gradient correctly, you can numerically estimate the gradient of the loss function and compare the numeric estimate to the gradient that you computed. We have provided code that does this for you: | # Once you've implemented the gradient, recompute it with the code below
# and gradient check it with the function we provided for you
# Compute the loss and its gradient at W.
loss, grad = svm_loss_naive(W, X_train, y_train, 0.0)
# Numerically compute the gradient along several randomly chosen dimensions, and
# compare them with your analytically computed gradient. The numbers should match
# almost exactly along all dimensions.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: svm_loss_naive(w, X_train, y_train, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10) | assignment1/svm.ipynb | srippa/nn_deep | mit |
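The numerical estimate that grad_check_sparse computes is conceptually just a finite difference: perturb one weight, re-evaluate the loss, and divide. A self-contained sketch of the centered-difference formula (illustrative only; the course's helper samples random dimensions and reports relative errors instead of looping over all of them):

```python
import numpy as np

def numerical_gradient(f, w, h=1e-5):
    """Centered differences: df/dw_i ~ (f(w + h*e_i) - f(w - h*e_i)) / (2h)."""
    grad = np.zeros_like(w)
    for i in range(w.size):
        old = w.flat[i]
        w.flat[i] = old + h
        fplus = f(w)
        w.flat[i] = old - h
        fminus = f(w)
        w.flat[i] = old                        # restore the original weight
        grad.flat[i] = (fplus - fminus) / (2 * h)
    return grad

# Sanity check on f(w) = sum(w**2), whose analytic gradient is 2w:
w = np.array([1.0, -2.0, 3.0])
approx = numerical_gradient(lambda v: float((v ** 2).sum()), w)
```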
Inline Question 1:
It is possible that once in a while a dimension in the gradcheck will not match exactly. What could such a discrepancy be caused by? Is it a reason for concern? What is a simple example in one dimension where a gradient check could fail? Hint: the SVM loss function is not strictly speaking differentiable
Your Answer: fill this in. | # Next implement the function svm_loss_vectorized; for now only compute the loss;
# we will implement the gradient in a moment.
tic = time.time()
loss_naive, grad_naive = svm_loss_naive(W, X_train, y_train, 0.00001)
toc = time.time()
print 'Naive loss: %e computed in %fs' % (loss_naive, toc - tic)
from cs231n.classifiers.linear_svm import svm_loss_vectorized
tic = time.time()
loss_vectorized, _ = svm_loss_vectorized(W, X_train, y_train, 0.00001)
toc = time.time()
print 'Vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic)
# The losses should match but your vectorized implementation should be much faster.
print 'difference: %f' % (loss_naive - loss_vectorized)
# Complete the implementation of svm_loss_vectorized, and compute the gradient
# of the loss function in a vectorized way.
# The naive implementation and the vectorized implementation should match, but
# the vectorized version should still be much faster.
tic = time.time()
_, grad_naive = svm_loss_naive(W, X_train, y_train, 0.00001)
toc = time.time()
print 'Naive loss and gradient: computed in %fs' % (toc - tic)
tic = time.time()
_, grad_vectorized = svm_loss_vectorized(W, X_train, y_train, 0.00001)
toc = time.time()
print 'Vectorized loss and gradient: computed in %fs' % (toc - tic)
# The loss is a single number, so it is easy to compare the values computed
# by the two implementations. The gradient on the other hand is a matrix, so
# we use the Frobenius norm to compare them.
difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print 'difference: %f' % difference | assignment1/svm.ipynb | srippa/nn_deep | mit |
Stochastic Gradient Descent
We now have vectorized and efficient expressions for the loss, the gradient and our gradient matches the numerical gradient. We are therefore ready to do SGD to minimize the loss. | # Now implement SGD in LinearSVM.train() function and run it with the code below
from cs231n.classifiers import LinearSVM
learning_rates = [1e-7, 5e-5]
regularization_strengths = [5e4, 1e5]
svm = LinearSVM()
tic = time.time()
loss_hist = svm.train(X_train, y_train, learning_rate=1e-5, reg=5e4,
num_iters=1500, verbose=True)
toc = time.time()
print 'That took %fs' % (toc - tic)
# A useful debugging strategy is to plot the loss as a function of
# iteration number:
plt.plot(loss_hist)
plt.xlabel('Iteration number')
plt.ylabel('Loss value')
# Write the LinearSVM.predict function and evaluate the performance on both the
# training and validation set
y_train_pred = svm.predict(X_train)
print 'training accuracy: %f' % (np.mean(y_train == y_train_pred), )
y_val_pred = svm.predict(X_val)
print 'validation accuracy: %f' % (np.mean(y_val == y_val_pred), )
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of about 0.4 on the validation set.
learning_rates = [1e-7, 5e-5]
regularization_strengths = [5e4, 1e5]
# results is a dictionary mapping tuples of the form
# (learning_rate, regularization_strength) to tuples of the form
# (training_accuracy, validation_accuracy). The accuracy is simply the fraction
# of data points that are correctly classified.
results = {}
best_val = -1 # The highest validation accuracy that we have seen so far.
best_svm = None # The LinearSVM object that achieved the highest validation rate.
################################################################################
# TODO: #
# Write code that chooses the best hyperparameters by tuning on the validation #
# set. For each combination of hyperparameters, train a linear SVM on the #
# training set, compute its accuracy on the training and validation sets, and #
# store these numbers in the results dictionary. In addition, store the best #
# validation accuracy in best_val and the LinearSVM object that achieves this #
# accuracy in best_svm. #
# #
# Hint: You should use a small value for num_iters as you develop your #
# validation code so that the SVMs don't take much time to train; once you are #
# confident that your validation code works, you should rerun the validation #
# code with a larger value for num_iters. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy)
print 'best validation accuracy achieved during cross-validation: %f' % best_val
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
# plot training accuracy
sz = [results[x][0]*1500 for x in results] # default size of markers is 20
plt.subplot(1,2,1)
plt.scatter(x_scatter, y_scatter, sz)
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
sz = [results[x][1]*1500 for x in results] # default size of markers is 20
plt.subplot(1,2,2)
plt.scatter(x_scatter, y_scatter, sz)
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
# Evaluate the best svm on test set
y_test_pred = best_svm.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('linear SVM on raw pixels final test set accuracy: %f' % test_accuracy)
# Visualize the learned weights for each class.
# Depending on your choice of learning rate and regularization strength, these may
# or may not be nice to look at.
w = best_svm.W[:,:-1] # strip out the bias
w = w.reshape(10, 32, 32, 3)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i]) | assignment1/svm.ipynb | srippa/nn_deep | mit |
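A minimal sketch of the search loop the TODO above asks for. Note that `train_and_eval` is a hypothetical stand-in callable, not the notebook's actual API: in the assignment it would wrap `LinearSVM().train(...)` and accuracy computations via `predict(...)`, using a small `num_iters` first, as the hint suggests.

```python
# Hypothetical helper sketch for the hyperparameter search described in the
# TODO above. `train_and_eval` is an assumed stand-in: given (lr, reg) it
# returns (trained_model, train_accuracy, val_accuracy).
def grid_search(learning_rates, regularization_strengths, train_and_eval):
    results = {}      # maps (lr, reg) -> (train_accuracy, val_accuracy)
    best_val = -1.0   # highest validation accuracy seen so far
    best_model = None
    for lr in learning_rates:
        for reg in regularization_strengths:
            model, train_acc, val_acc = train_and_eval(lr, reg)
            results[(lr, reg)] = (train_acc, val_acc)
            if val_acc > best_val:
                best_val = val_acc
                best_model = model
    return results, best_val, best_model
```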
Migrate evaluation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/migrate/evaluator">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/evaluator.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/evaluator.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/evaluator.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Evaluation is a critical part of measuring and benchmarking models.
This guide demonstrates how to migrate evaluator tasks from TensorFlow 1 to TensorFlow 2. In TensorFlow 1, this functionality is implemented by tf.estimator.train_and_evaluate when the API is run in a distributed manner. In TensorFlow 2, you can use the built-in tf.keras.utils.SidecarEvaluator, or a custom evaluation loop, on the evaluator task.
There are simple serial evaluation options in both TensorFlow 1 (tf.estimator.Estimator.evaluate) and TensorFlow 2 (Model.fit(..., validation_data=(...)) or Model.evaluate). The evaluator task is preferable when you would like your workers not to switch between training and evaluation, while the built-in evaluation in Model.fit is preferable when you would like your evaluation to be distributed.
Setup | import tensorflow.compat.v1 as tf1
import tensorflow as tf
import numpy as np
import tempfile
import time
import os
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0 | site/en/guide/migrate/evaluator.ipynb | tensorflow/docs | apache-2.0 |
TensorFlow 1: Evaluating using tf.estimator.train_and_evaluate
In TensorFlow 1, you can configure a tf.estimator to evaluate the estimator using tf.estimator.train_and_evaluate.
In this example, start by defining the tf.estimator.Estimator and specifying the training and evaluation specifications: | feature_columns = [tf1.feature_column.numeric_column("x", shape=[28, 28])]
classifier = tf1.estimator.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[256, 32],
optimizer=tf1.train.AdamOptimizer(0.001),
n_classes=10,
dropout=0.2
)
train_input_fn = tf1.estimator.inputs.numpy_input_fn(
x={"x": x_train},
y=y_train.astype(np.int32),
num_epochs=10,
batch_size=50,
shuffle=True,
)
test_input_fn = tf1.estimator.inputs.numpy_input_fn(
x={"x": x_test},
y=y_test.astype(np.int32),
num_epochs=10,
shuffle=False
)
train_spec = tf1.estimator.TrainSpec(input_fn=train_input_fn, max_steps=10)
eval_spec = tf1.estimator.EvalSpec(input_fn=test_input_fn,
steps=10,
throttle_secs=0) | site/en/guide/migrate/evaluator.ipynb | tensorflow/docs | apache-2.0 |
Then, train and evaluate the model. Here, the evaluation runs interleaved with training because the notebook is limited to a local run, so the estimator alternates between training and evaluation. However, if the estimator is used in a distributed setting, the evaluator will run as a dedicated evaluator task. For more information, check the migration guide on distributed training. | tf1.estimator.train_and_evaluate(estimator=classifier,
train_spec=train_spec,
eval_spec=eval_spec) | site/en/guide/migrate/evaluator.ipynb | tensorflow/docs | apache-2.0 |
TensorFlow 2: Evaluating a Keras model
In TensorFlow 2, if you use the Keras Model.fit API for training, you can evaluate the model with tf.keras.utils.SidecarEvaluator. You can also visualize the evaluation metrics in TensorBoard, which is not shown in this guide.
To help demonstrate this, let's first start by defining and training the model: | def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10)
])
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model = create_model()
model.compile(optimizer='adam',
loss=loss,
metrics=['accuracy'],
steps_per_execution=10,
run_eagerly=True)
log_dir = tempfile.mkdtemp()
model_checkpoint = tf.keras.callbacks.ModelCheckpoint(
filepath=os.path.join(log_dir, 'ckpt-{epoch}'),
save_weights_only=True)
model.fit(x=x_train,
y=y_train,
epochs=1,
callbacks=[model_checkpoint]) | site/en/guide/migrate/evaluator.ipynb | tensorflow/docs | apache-2.0 |
Then, evaluate the model using tf.keras.utils.SidecarEvaluator. In real training, it's recommended to use a separate job to conduct the evaluation to free up worker resources for training. | data = tf.data.Dataset.from_tensor_slices((x_test, y_test))
data = data.batch(64)
tf.keras.utils.SidecarEvaluator(
model=model,
data=data,
checkpoint_dir=log_dir,
max_evaluations=1
).start() | site/en/guide/migrate/evaluator.ipynb | tensorflow/docs | apache-2.0 |
$$
\pot_\cur = \sum_\prev \wcur \sigout_\prev
$$
$$
\sigout_\cur = \activfunc(\pot_\cur)
$$
$$
\weights = \begin{pmatrix}
\weight_{11} & \cdots & \weight_{1m} \\
\vdots & \ddots & \vdots \\
\weight_{n1} & \cdots & \weight_{nm}
\end{pmatrix}
$$
Miscellaneous
The multilayer perceptron (MLP) can approximate any continuous function with arbitrary precision, depending on the number of neurons in the hidden layer.
Weight initialization: usually small random values
TODO: what is the difference between:
* feedback network
* recurrent network
Objective function (or error function)
Objective function: $\errfunc \left( \weights \left( \learnit \right) \right)$
$\learnit$: current learning iteration $(1, 2, ...)$
Typically, the objective function (error function) is the sum of the squared errors of the output neurons.
$$
\errfunc = \frac12 \sum_{\cur \in \Omega} \left[ \sigout_\cur - \sigoutdes_\cur \right]^2
$$
$\Omega$: the set of output neurons
Is the $\frac12$ just there to simplify the derivative computations? | %matplotlib inline
import nnfigs
# https://github.com/jeremiedecock/neural-network-figures.git
import nnfigs.core as nnfig
import matplotlib.pyplot as plt
fig, ax = nnfig.init_figure(size_x=8, size_y=4)
nnfig.draw_synapse(ax, (0, -6), (10, 0))
nnfig.draw_synapse(ax, (0, -2), (10, 0))
nnfig.draw_synapse(ax, (0, 2), (10, 0))
nnfig.draw_synapse(ax, (0, 6), (10, 0))
nnfig.draw_synapse(ax, (0, -6), (10, -4))
nnfig.draw_synapse(ax, (0, -2), (10, -4))
nnfig.draw_synapse(ax, (0, 2), (10, -4))
nnfig.draw_synapse(ax, (0, 6), (10, -4))
nnfig.draw_synapse(ax, (0, -6), (10, 4))
nnfig.draw_synapse(ax, (0, -2), (10, 4))
nnfig.draw_synapse(ax, (0, 2), (10, 4))
nnfig.draw_synapse(ax, (0, 6), (10, 4))
nnfig.draw_synapse(ax, (10, -4), (12, -4))
nnfig.draw_synapse(ax, (10, 0), (12, 0))
nnfig.draw_synapse(ax, (10, 4), (12, 4))
nnfig.draw_neuron(ax, (0, -6), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, -2), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, 2), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, 6), 0.5, empty=True)
nnfig.draw_neuron(ax, (10, -4), 1, ag_func="sum", tr_func="sigmoid")
nnfig.draw_neuron(ax, (10, 0), 1, ag_func="sum", tr_func="sigmoid")
nnfig.draw_neuron(ax, (10, 4), 1, ag_func="sum", tr_func="sigmoid")
plt.text(x=0, y=7.5, s=tex(STR_PREV), fontsize=14)
plt.text(x=10, y=7.5, s=tex(STR_CUR), fontsize=14)
plt.text(x=0, y=0, s=r"$\vdots$", fontsize=14)
plt.text(x=9.7, y=-6.1, s=r"$\vdots$", fontsize=14)
plt.text(x=9.7, y=5.8, s=r"$\vdots$", fontsize=14)
plt.text(x=12.5, y=4, s=tex(STR_SIGOUT + "_1"), fontsize=14)
plt.text(x=12.5, y=0, s=tex(STR_SIGOUT + "_2"), fontsize=14)
plt.text(x=12.5, y=-4, s=tex(STR_SIGOUT + "_3"), fontsize=14)
plt.text(x=16, y=4, s=tex(STR_ERRFUNC + "_1 = " + STR_SIGOUT + "_1 - " + STR_SIGOUT_DES + "_1"), fontsize=14)
plt.text(x=16, y=0, s=tex(STR_ERRFUNC + "_2 = " + STR_SIGOUT + "_2 - " + STR_SIGOUT_DES + "_2"), fontsize=14)
plt.text(x=16, y=-4, s=tex(STR_ERRFUNC + "_3 = " + STR_SIGOUT + "_3 - " + STR_SIGOUT_DES + "_3"), fontsize=14)
plt.text(x=16, y=-8, s=tex(STR_ERRFUNC + " = 1/2 ( " + STR_ERRFUNC + "^2_1 + " + STR_ERRFUNC + "^2_2 + " + STR_ERRFUNC + "^2_3 + \dots )"), fontsize=14)
plt.show() | ai_ml_multilayer_perceptron_fr.ipynb | jdhp-docs/python-notebooks | mit |
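As a quick numerical check of the error function defined above (a sketch with arbitrary example values):

```python
import numpy as np

# Error function E = 1/2 * sum over output neurons of (o - o_desired)^2,
# evaluated on arbitrary example values.
o = np.array([0.8, 0.3, 0.1])      # actual outputs
o_des = np.array([1.0, 0.0, 0.0])  # desired outputs
e = o - o_des                      # per-neuron errors
E = 0.5 * np.sum(e**2)
print(E)  # 0.5 * (0.04 + 0.09 + 0.01) = 0.07
```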
Learning
Updating the weights
$$
\weights(\learnit + 1) = \weights(\learnit) - \learnrate \nabla \errfunc \left( \weights(\learnit) \right)
$$
$- \learnrate \nabla \errfunc \left( \weights(\learnit) \right)$: descends in the direction opposite to the gradient (steepest slope)
where $\nabla \errfunc \left( \weights(\learnit) \right)$ is the gradient of the objective function at the point $\weights$
$\learnrate > 0$: the learning rate (step size)
$$
\begin{align}
\delta_{\wcur} & = \wcur(\learnit + 1) - \wcur(\learnit) \\
& = - \learnrate \frac{\partial \errfunc}{\partial \wcur}
\end{align}
$$
$$
\Leftrightarrow \wcur(\learnit + 1) = \wcur(\learnit) - \learnrate \frac{\partial \errfunc}{\partial \wcur}
$$
Each presentation of the full set of examples = one learning cycle (or epoch)
Stopping criterion: stop when the value of the objective function stabilizes (or when the problem is solved to the desired precision)
"usually there is only one local minimum" (proof???)
"otherwise, the simplest approach is to restart the training several times with different initial weights and to keep the best matrix $\weights$ (the one that minimizes $\errfunc$)" | fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(4, 4))
x = np.arange(10, 30, 0.1)
y = (x - 20)**2 + 2
ax.set_xlabel(r"Weights $" + STR_WEIGHTS + "$", fontsize=14)
ax.set_ylabel(r"Objective function $" + STR_ERRFUNC + "$", fontsize=14)
# See http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.tick_params
ax.tick_params(axis='both', # changes apply to the x and y axis
which='both', # both major and minor ticks are affected
bottom='on', # ticks along the bottom edge are on
top='off', # ticks along the top edge are off
left='on', # ticks along the left edge are on
right='off', # ticks along the right edge are off
labelbottom='off', # labels along the bottom edge are off
               labelleft='off')   # labels along the left edge are off
ax.set_xlim(left=10, right=25)
ax.set_ylim(bottom=0, top=5)
ax.plot(x, y); | ai_ml_multilayer_perceptron_fr.ipynb | jdhp-docs/python-notebooks | mit |
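The update rule above can be illustrated on the one-dimensional objective plotted here, $\errfunc(w) = (w - 20)^2 + 2$, whose gradient is $2(w - 20)$. This is a sketch; the learning rate and starting point are arbitrary.

```python
# Gradient descent on the 1D objective from the plot above:
# E(w) = (w - 20)^2 + 2, whose gradient is E'(w) = 2 * (w - 20).
def gradient_descent(w0, eta=0.1, n_iterations=100):
    w = w0
    for _ in range(n_iterations):
        grad = 2.0 * (w - 20.0)   # nabla E(w)
        w = w - eta * grad        # w(t+1) = w(t) - eta * nabla E(w(t))
    return w

w_star = gradient_descent(w0=10.0)
print(w_star)  # converges towards the minimum at w = 20
```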
Incremental (or online) learning:
the weights $\weights$ are adjusted after the presentation of each single example
("this is not a true gradient descent").
It is better at avoiding local minima, especially if the examples are
shuffled at the start of each iteration
Batch learning:
TODO...
Is the objective function $\errfunc$ a multivariate function,
or an aggregation of the errors of each example?
TODO: delta rule / generalized delta rule
Gradient backpropagation
Gradient backpropagation:
a method to compute the gradient of the objective function $\errfunc$ efficiently.
Intuition:
Gradient backpropagation is only one method among others for solving the weight optimization problem; it could just as well be solved with evolutionary algorithms, for instance.
In fact, the strength of gradient backpropagation (and what explains its fame) is that it formulates the weight optimization problem with a particularly efficient analytical expression that cleverly eliminates a large number of redundant computations (somewhat in the spirit of dynamic programming): when the weights are optimized by gradient descent, some terms (the error signals $\errsig$) appear many times in the full analytical expression of the gradient. Gradient backpropagation ensures that these terms are computed only once.
Note that the problem could also be solved by a gradient descent in which the gradient $\frac{\partial \errfunc}{\partial \wcur(\learnit)}$ is computed by numerical approximation (e.g. finite differences), but that would be much slower and much less efficient...
Principle:
the weights are updated using the error signals $\errsig$.
$$
\wcur(\learnit + 1) = \wcur(\learnit) \underbrace{- \learnrate \frac{\partial \errfunc}{\partial \wcur(\learnit)}}_{\delta_\prevcur}
$$
$$
\begin{align}
\delta_\prevcur & = - \learnrate \frac{\partial \errfunc}{\partial \wcur(\learnit)} \\
& = - \learnrate \errsig_\cur \sigout_\prev
\end{align}
$$
In batch learning, the error is computed for each example; the individual contributions to the weight updates are then summed
Supervised learning works better with linear output neurons (activation function $\activfunc$ = identity) "because the error signals propagate better".
Binary input data should be chosen in $\{-1,1\}$ rather than $\{0,1\}$, since a zero signal does not contribute to learning.
Vocabulary:
- marginal error: TODO
Error signals $\errsig_\cur$ for the output neurons $(\cur \in \Omega)$
$$
\errsig_\cur = \activfunc'(\pot_\cur)[\sigout_\cur - \sigoutdes_\cur]
$$
Error signals $\errsig_\cur$ for the hidden neurons $(\cur \not\in \Omega)$
$$
\errsig_\cur = \activfunc'(\pot_\cur) \sum_\next \weight_\curnext \errsig_\next
$$ | %matplotlib inline
import nnfigs
# https://github.com/jeremiedecock/neural-network-figures.git
import nnfigs.core as nnfig
import matplotlib.pyplot as plt
fig, ax = nnfig.init_figure(size_x=8, size_y=4)
nnfig.draw_synapse(ax, (0, -2), (10, 0))
nnfig.draw_synapse(ax, (0, 2), (10, 0), label=tex(STR_WEIGHT + "_{" + STR_NEXT + STR_CUR + "}"), label_position=0.5, fontsize=14)
nnfig.draw_synapse(ax, (10, 0), (12, 0))
nnfig.draw_neuron(ax, (0, -2), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, 2), 0.5, empty=True)
plt.text(x=0, y=3.5, s=tex(STR_CUR), fontsize=14)
plt.text(x=10, y=3.5, s=tex(STR_NEXT), fontsize=14)
plt.text(x=0, y=-0.2, s=r"$\vdots$", fontsize=14)
nnfig.draw_neuron(ax, (10, 0), 1, ag_func="sum", tr_func="sigmoid")
plt.show() | ai_ml_multilayer_perceptron_fr.ipynb | jdhp-docs/python-notebooks | mit |
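The two error-signal formulas above can be sketched numerically, assuming a tanh activation so that $\activfunc'(\pot) = 1 - \tanh^2(\pot)$. All values below are arbitrary illustrations.

```python
import numpy as np

# Derivative of the tanh activation: f'(p) = 1 - tanh(p)^2
def fprime(p):
    return 1.0 - np.tanh(p)**2

# Output neuron: delta_o = f'(p_o) * (o_o - o_desired)
p_o, o_des = 0.5, 1.0
o_o = np.tanh(p_o)
delta_o = fprime(p_o) * (o_o - o_des)

# Hidden neuron: delta_h = f'(p_h) * sum over next-layer neurons of w * delta
p_h = 0.2
w_next = np.array([0.3, -0.1])          # weights towards the next layer
delta_next = np.array([delta_o, 0.05])  # error signals of the next layer
delta_h = fprime(p_h) * np.sum(w_next * delta_next)
```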
More detail: computing $\errsig_\cur$
In the following example, we only consider the weights $\weight_1$, $\weight_2$, $\weight_3$, $\weight_4$ and $\weight_5$, to simplify the demonstration. | %matplotlib inline
import nnfigs
# https://github.com/jeremiedecock/neural-network-figures.git
import nnfigs.core as nnfig
import matplotlib.pyplot as plt
fig, ax = nnfig.init_figure(size_x=8, size_y=4)
HSPACE = 6
VSPACE = 4
# Synapse #####################################
# Layer 1-2
nnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, VSPACE), label=tex(STR_WEIGHT + "_1"), label_position=0.4)
nnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, VSPACE), color="lightgray")
nnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, -VSPACE), color="lightgray")
nnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, -VSPACE), color="lightgray")
# Layer 2-3
nnfig.draw_synapse(ax, (HSPACE, VSPACE), (2*HSPACE, VSPACE), label=tex(STR_WEIGHT + "_2"), label_position=0.4)
nnfig.draw_synapse(ax, (HSPACE, -VSPACE), (2*HSPACE, VSPACE), color="lightgray")
nnfig.draw_synapse(ax, (HSPACE, VSPACE), (2*HSPACE, -VSPACE), label=tex(STR_WEIGHT + "_3"), label_position=0.4)
nnfig.draw_synapse(ax, (HSPACE, -VSPACE), (2*HSPACE, -VSPACE), color="lightgray")
# Layer 3-4
nnfig.draw_synapse(ax, (2*HSPACE, VSPACE), (3*HSPACE, 0), label=tex(STR_WEIGHT + "_4"), label_position=0.4)
nnfig.draw_synapse(ax, (2*HSPACE, -VSPACE), (3*HSPACE, 0), label=tex(STR_WEIGHT + "_5"), label_position=0.4, label_offset_y=-0.8)
# Neuron ######################################
# Layer 1 (input)
nnfig.draw_neuron(ax, (0, VSPACE), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, -VSPACE), 0.5, empty=True, line_color="lightgray")
# Layer 2
nnfig.draw_neuron(ax, (HSPACE, VSPACE), 1, ag_func="sum", tr_func="sigmoid")
nnfig.draw_neuron(ax, (HSPACE, -VSPACE), 1, ag_func="sum", tr_func="sigmoid", line_color="lightgray")
# Layer 3
nnfig.draw_neuron(ax, (2*HSPACE, VSPACE), 1, ag_func="sum", tr_func="sigmoid")
nnfig.draw_neuron(ax, (2*HSPACE, -VSPACE), 1, ag_func="sum", tr_func="sigmoid")
# Layer 4
nnfig.draw_neuron(ax, (3*HSPACE, 0), 1, ag_func="sum", tr_func="sigmoid")
# Text ########################################
# Layer 1 (input)
plt.text(x=0.5, y=VSPACE+1, s=tex(STR_SIGOUT + "_i"), fontsize=12)
# Layer 2
plt.text(x=HSPACE-1.25, y=VSPACE+1.5, s=tex(STR_POT + "_1"), fontsize=12)
plt.text(x=HSPACE+0.4, y=VSPACE+1.5, s=tex(STR_SIGOUT + "_1"), fontsize=12)
# Layer 3
plt.text(x=2*HSPACE-1.25, y=VSPACE+1.5, s=tex(STR_POT + "_2"), fontsize=12)
plt.text(x=2*HSPACE+0.4, y=VSPACE+1.5, s=tex(STR_SIGOUT + "_2"), fontsize=12)
plt.text(x=2*HSPACE-1.25, y=-VSPACE-1.8, s=tex(STR_POT + "_3"), fontsize=12)
plt.text(x=2*HSPACE+0.4, y=-VSPACE-1.8, s=tex(STR_SIGOUT + "_3"), fontsize=12)
# Layer 4
plt.text(x=3*HSPACE-1.25, y=1.5, s=tex(STR_POT + "_o"), fontsize=12)
plt.text(x=3*HSPACE+0.4, y=1.5, s=tex(STR_SIGOUT + "_o"), fontsize=12)
plt.text(x=3*HSPACE+2, y=-0.3,
s=tex(STR_ERRFUNC + " = (" + STR_SIGOUT + "_o - " + STR_SIGOUT_DES + "_o)^2/2"),
fontsize=12)
plt.show() | ai_ml_multilayer_perceptron_fr.ipynb | jdhp-docs/python-notebooks | mit |
Caution: $\weight_1$ influences $\pot_2$ and $\pot_3$, in addition to $\pot_1$ and $\pot_o$.
Computing $\frac{\partial \errfunc}{\partial \weight_4}$
reminder:
$$
\begin{align}
\errfunc &= \frac12 \left( \sigout_o - \sigoutdes_o \right)^2 \tag{1} \\
\sigout_o &= \activfunc(\pot_o) \tag{2} \\
\pot_o &= \sigout_2 \weight_4 + \sigout_3 \weight_5 \tag{3}
\end{align}
$$
that is:
$$
\errfunc = \frac12 \left( \activfunc \left( \sigout_2 \weight_4 + \sigout_3 \weight_5 \right) - \sigoutdes_o \right)^2
$$
hence, applying the chain rule for composed functions, we get:
$$
\frac{\partial \errfunc}{\partial \weight_4} =
\frac{\partial \pot_o}{\partial \weight_4}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
$$
from (1), (2) and (3) we deduce:
$$
\begin{align}
\frac{\partial \pot_o}{\partial \weight_4} &= \sigout_2 \\
\frac{\partial \sigout_o}{\partial \pot_o} &= \activfunc'(\pot_o) \\
\frac{\partial \errfunc}{\partial \sigout_o} &= \sigout_o - \sigoutdes_o
\end{align}
$$
the error signal therefore reads:
$$
\begin{align}
\errsig_o &=
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o} \\
&= \activfunc'(\pot_o) [\sigout_o - \sigoutdes_o]
\end{align}
$$
Computing $\frac{\partial \errfunc}{\partial \weight_5}$
$$
\frac{\partial \errfunc}{\partial \weight_5} =
\frac{\partial \pot_o}{\partial \weight_5}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
$$
with:
$$
\begin{align}
\frac{\partial \pot_o}{\partial \weight_5} &= \sigout_3 \\
\frac{\partial \sigout_o}{\partial \pot_o} &= \activfunc'(\pot_o) \\
\frac{\partial \errfunc}{\partial \sigout_o} &= \sigout_o - \sigoutdes_o \\
\errsig_o &=
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o} \\
&= \activfunc'(\pot_o) [\sigout_o - \sigoutdes_o]
\end{align}
$$
Computing $\frac{\partial \errfunc}{\partial \weight_2}$
$$
\frac{\partial \errfunc}{\partial \weight_2} =
\frac{\partial \pot_2}{\partial \weight_2}
%
\underbrace{
\frac{\partial \sigout_2}{\partial \pot_2}
\frac{\partial \pot_o}{\partial \sigout_2}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
}_{\errsig_2}
$$
with:
$$
\begin{align}
\frac{\partial \pot_2}{\partial \weight_2} &= \sigout_1 \\
\frac{\partial \sigout_2}{\partial \pot_2} &= \activfunc'(\pot_2) \\
\frac{\partial \pot_o}{\partial \sigout_2} &= \weight_4 \\
\errsig_2 &=
\frac{\partial \sigout_2}{\partial \pot_2}
\frac{\partial \pot_o}{\partial \sigout_2}
\errsig_o \\
&= \activfunc'(\pot_2) \weight_4 \errsig_o
\end{align}
$$
Computing $\frac{\partial \errfunc}{\partial \weight_3}$
$$
\frac{\partial \errfunc}{\partial \weight_3} =
\frac{\partial \pot_3}{\partial \weight_3}
%
\underbrace{
\frac{\partial \sigout_3}{\partial \pot_3}
\frac{\partial \pot_o}{\partial \sigout_3}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
}_{\errsig_3}
$$
with:
$$
\begin{align}
\frac{\partial \pot_3}{\partial \weight_3} &= \sigout_1 \\
\frac{\partial \sigout_3}{\partial \pot_3} &= \activfunc'(\pot_3) \\
\frac{\partial \pot_o}{\partial \sigout_3} &= \weight_5 \\
\errsig_3 &=
\frac{\partial \sigout_3}{\partial \pot_3}
\frac{\partial \pot_o}{\partial \sigout_3}
\errsig_o \\
&= \activfunc'(\pot_3) \weight_5 \errsig_o
\end{align}
$$
Computing $\frac{\partial \errfunc}{\partial \weight_1}$
$$
\frac{\partial \errfunc}{\partial \weight_1} =
\frac{\partial \pot_1}{\partial \weight_1}
%
\underbrace{
\frac{\partial \sigout_1}{\partial \pot_1}
\left(
\frac{\partial \pot_2}{\partial \sigout_1} % err?
\underbrace{
\frac{\partial \sigout_2}{\partial \pot_2}
\frac{\partial \pot_o}{\partial \sigout_2}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
}_{\errsig_2}
+
\frac{\partial \pot_3}{\partial \sigout_1} % err?
\underbrace{
\frac{\partial \sigout_3}{\partial \pot_3}
\frac{\partial \pot_o}{\partial \sigout_3}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
}_{\errsig_3}
\right)
}_{\errsig_1}
$$
with:
$$
\begin{align}
\frac{\partial \pot_1}{\partial \weight_1} &= \sigout_i \\
\frac{\partial \sigout_1}{\partial \pot_1} &= \activfunc'(\pot_1) \\
\frac{\partial \pot_2}{\partial \sigout_1} &= \weight_2 \\
\frac{\partial \pot_3}{\partial \sigout_1} &= \weight_3 \\
\errsig_1 &=
\frac{\partial \sigout_1}{\partial \pot_1}
\left(
\frac{\partial \pot_2}{\partial \sigout_1}
\errsig_2
+
\frac{\partial \pot_3}{\partial \sigout_1}
\errsig_3
\right) \\
&=
\activfunc'(\pot_1) \left( \weight_2 \errsig_2 + \weight_3 \errsig_3 \right)
\end{align}
$$
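A simple way to sanity-check this kind of derivation is to compare the analytic gradient $\frac{\partial \errfunc}{\partial \weight_4} = \sigout_2 \, \errsig_o$ against a finite-difference estimate. This is a sketch assuming a tanh activation and arbitrary values.

```python
import numpy as np

f = np.tanh
df = lambda p: 1.0 - np.tanh(p)**2   # f'(p) for tanh

# Arbitrary values for the small network of the derivation.
o2, o3, w4, w5, o_des = 0.6, -0.2, 0.4, 0.1, 1.0

def E(w4_value):
    p_o = o2 * w4_value + o3 * w5       # (3)
    o_o = f(p_o)                        # (2)
    return 0.5 * (o_o - o_des)**2       # (1)

# Analytic gradient: dE/dw4 = o2 * delta_o, with delta_o = f'(p_o) (o_o - o_des)
p_o = o2 * w4 + o3 * w5
delta_o = df(p_o) * (f(p_o) - o_des)
analytic = o2 * delta_o

# Finite-difference (central) estimate
h = 1e-6
numeric = (E(w4 + h) - E(w4 - h)) / (2 * h)
print(abs(analytic - numeric))  # should be very close to 0
```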
Activation functions: sigmoid ("S"-shaped) functions
The sigmoid ("S"-shaped) function is defined by:
$$f(x) = \frac{1}{1 + e^{-x}}$$
for every real $x$.
It can be generalized to any function of the form:
$$f(x) = \frac{1}{1 + e^{-\lambda x}}$$ | def sigmoid(x, _lambda=1.):
y = 1. / (1. + np.exp(-_lambda * x))
return y
%matplotlib inline
x = np.linspace(-5, 5, 300)
y1 = sigmoid(x, 1.)
y2 = sigmoid(x, 5.)
y3 = sigmoid(x, 0.5)
plt.plot(x, y1, label=r"$\lambda=1$")
plt.plot(x, y2, label=r"$\lambda=5$")
plt.plot(x, y3, label=r"$\lambda=0.5$")
plt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')
plt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')
plt.legend()
plt.title("Sigmoid function")
plt.axis([-5, 5, -0.5, 2]); | ai_ml_multilayer_perceptron_fr.ipynb | jdhp-docs/python-notebooks | mit |
Derivative:
$$
f'(x) = \frac{\lambda e^{-\lambda x}}{(1+e^{-\lambda x})^{2}}
$$
which can also be written
$$
\frac{\mathrm{d} y}{\mathrm{d} x} = \lambda y (1-y)
$$
where $y$ ranges from 0 to 1. | def d_sigmoid(x, _lambda=1.):
e = np.exp(-_lambda * x)
y = _lambda * e / np.power(1 + e, 2)
return y
%matplotlib inline
x = np.linspace(-5, 5, 300)
y1 = d_sigmoid(x, 1.)
y2 = d_sigmoid(x, 5.)
y3 = d_sigmoid(x, 0.5)
plt.plot(x, y1, label=r"$\lambda=1$")
plt.plot(x, y2, label=r"$\lambda=5$")
plt.plot(x, y3, label=r"$\lambda=0.5$")
plt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')
plt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')
plt.legend()
plt.title("Derivative of the sigmoid function")
plt.axis([-5, 5, -0.5, 2]); | ai_ml_multilayer_perceptron_fr.ipynb | jdhp-docs/python-notebooks | mit |
Hyperbolic tangent | def tanh(x):
y = np.tanh(x)
return y
x = np.linspace(-5, 5, 300)
y = tanh(x)
plt.plot(x, y)
plt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')
plt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')
plt.title("Hyperbolic tangent function")
plt.axis([-5, 5, -2, 2]); | ai_ml_multilayer_perceptron_fr.ipynb | jdhp-docs/python-notebooks | mit |
Derivative:
$$
\tanh' = \frac{1}{\cosh^{2}} = 1-\tanh^{2}
$$ | def d_tanh(x):
y = 1. - np.power(np.tanh(x), 2)
return y
x = np.linspace(-5, 5, 300)
y = d_tanh(x)
plt.plot(x, y)
plt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')
plt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')
plt.title("Derivative of the hyperbolic tangent")
plt.axis([-5, 5, -2, 2]); | ai_ml_multilayer_perceptron_fr.ipynb | jdhp-docs/python-notebooks | mit |
Logistic function
Functions of the form
$$
f(t) = K \frac{1}{1+ae^{-rt}}
$$
where $K$ and $r$ are positive reals and $a$ is an arbitrary real.
Sigmoid functions are a special case of logistic functions, with $a > 0$.
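A minimal sketch of the generalized logistic function; the parameter names follow the formula above, and the default values are chosen so that it reduces to the sigmoid.

```python
import numpy as np

def logistic(t, K=1., a=1., r=1.):
    """Logistic function f(t) = K / (1 + a * exp(-r * t))."""
    return K / (1. + a * np.exp(-r * t))

# With K=1 and a=1, the logistic function reduces to the sigmoid above
# (r playing the role of lambda).
print(logistic(0.))        # 0.5
print(logistic(0., K=2.))  # 1.0
```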
Python implementation | # Define the activation function and its derivative
activation_function = tanh
d_activation_function = d_tanh
def init_weights(num_input_cells, num_output_cells, num_cell_per_hidden_layer, num_hidden_layers=1):
"""
The returned `weights` object is a list of weight matrices,
where weight matrix at index $i$ represents the weights between
layer $i$ and layer $i+1$.
Numpy array shapes for e.g. num_input_cells=2, num_output_cells=2,
num_cell_per_hidden_layer=3 (without taking account bias):
- in: (2,)
- in+bias: (3,)
- w[0]: (3,3)
- w[0]+bias: (3,4)
- w[1]: (3,2)
- w[1]+bias: (4,2)
- out: (2,)
"""
    # TODO:
    # - should the weights wij be positive?
    # - is a normal distribution more appropriate than a uniform one?
    # - what sigma value is recommended?
W = []
# Weights between the input layer and the first hidden layer
W.append(np.random.uniform(low=0., high=1., size=(num_input_cells + 1, num_cell_per_hidden_layer + 1)))
# Weights between hidden layers (if there are more than one hidden layer)
for layer in range(num_hidden_layers - 1):
W.append(np.random.uniform(low=0., high=1., size=(num_cell_per_hidden_layer + 1, num_cell_per_hidden_layer + 1)))
# Weights between the last hidden layer and the output layer
W.append(np.random.uniform(low=0., high=1., size=(num_cell_per_hidden_layer + 1, num_output_cells)))
return W
def evaluate_network(weights, input_signal): # TODO: find a better name
# Add the bias on the input layer
input_signal = np.concatenate([input_signal, [-1]])
assert input_signal.ndim == 1
assert input_signal.shape[0] == weights[0].shape[0]
# Compute the output of the first hidden layer
p = np.dot(input_signal, weights[0])
output_hidden_layer = activation_function(p)
# Compute the output of the intermediate hidden layers
# TODO: check this
num_layers = len(weights)
for n in range(num_layers - 2):
p = np.dot(output_hidden_layer, weights[n + 1])
output_hidden_layer = activation_function(p)
# Compute the output of the output layer
p = np.dot(output_hidden_layer, weights[-1])
output_signal = activation_function(p)
return output_signal
def compute_gradient():
# TODO
pass
weights = init_weights(num_input_cells=2, num_output_cells=2, num_cell_per_hidden_layer=3, num_hidden_layers=1)
print(weights)
#print(weights[0].shape)
#print(weights[1].shape)
input_signal = np.array([.1, .2])
input_signal
evaluate_network(weights, input_signal) | ai_ml_multilayer_perceptron_fr.ipynb | jdhp-docs/python-notebooks | mit |
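The compute_gradient function above is still a TODO. A slow but simple way to fill it is a finite-difference approximation of $\frac{\partial \errfunc}{\partial \wcur}$ for each weight; this is a sketch (not the backpropagation version), written to be independent of the notebook's globals by taking the forward function as a parameter.

```python
import numpy as np

def numerical_gradient(weights, input_signal, desired_output, forward, h=1e-5):
    """Central finite-difference approximation of dE/dw for every weight.

    `forward(weights, input_signal)` is assumed to return the network output
    (like `evaluate_network` above). Slow (two forward passes per weight),
    but useful to check a backpropagation implementation.
    """
    def error(ws):
        output = forward(ws, input_signal)
        return 0.5 * np.sum((output - desired_output)**2)

    gradients = []
    for W in weights:
        G = np.zeros_like(W)
        for idx in np.ndindex(*W.shape):
            old = W[idx]
            W[idx] = old + h
            e_plus = error(weights)
            W[idx] = old - h
            e_minus = error(weights)
            W[idx] = old                      # restore the weight
            G[idx] = (e_plus - e_minus) / (2 * h)
        gradients.append(G)
    return gradients
```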
ProPublica Campaign Finance API
https://propublica.github.io/campaign-finance-api-docs/#candidates | # set key
key="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# set base url
base_url="https://api.propublica.org/campaign-finance/v1/"
# set headers
headers = {'X-API-Key': key}
# set url parameters
cycle = "2014/"
method = "candidates/"
file_format = ".json"
# create a list of FEC IDs from http://www.fec.gov/data/DataCatalog.do to run the API request on more than one ID
fec_id_list = []
with open('fecid2014.txt') as file:
for line in file:
fec_id_list.append(line.strip())
# make request, build list of results for each FEC ID
data = []
for fec_id in fec_id_list:
r = requests.get(base_url+cycle+method+fec_id+file_format, headers=headers)
candidate = r.json()['results']
data.append(candidate)
time.sleep(3)
print(data)
# format data for export
data = [v for sublist in data for v in sublist]
data_keys = data[0].keys()
# export to csv
import csv
with open('ppcampfin.csv', 'w') as file:
dict_writer = csv.DictWriter(file, data_keys)
dict_writer.writeheader()
dict_writer.writerows(data) | 01_collect-data.ipynb | mathias-gibson/ps239t-final-project | mit |
ProPublica Congress API - list of all members
https://propublica.github.io/congress-api-docs/?shell#lists-of-members | # set key
key="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# set base url
base_url="https://api.propublica.org/congress/v1/"
# set url parameters
congress = "114/" #102-114 for House, 80-114 for Senate
chamber = "senate" #house or senate
method="/members"
file_format = ".json"
#set headers
headers = {'X-API-Key': key}
# make request
r = requests.get(base_url+congress+chamber+method+file_format, headers=headers)
# parse data for component nested dictionaries
data=(r.json())
bio_keys = data['results'][0]['members'][0]
bio_list = data['results'][0]['members']
# export to csv
import csv
with open('bio.csv', 'w') as file:
dict_writer = csv.DictWriter(file, bio_keys)
dict_writer.writeheader()
dict_writer.writerows(bio_list) | 01_collect-data.ipynb | mathias-gibson/ps239t-final-project | mit |
Create a grid with random elevation, set boundary conditions, and initialize components. | mg = RasterModelGrid((40, 40), 100)
z = mg.add_zeros('topographic__elevation', at='node')
z += np.random.rand(z.size)
outlet_id = int(mg.number_of_node_columns * 0.5)
mg.set_watershed_boundary_condition_outlet_id(outlet_id, z)
mg.at_node['topographic__elevation'][outlet_id] = 0
fr = FlowAccumulator(mg)
sp = FastscapeEroder(mg, K_sp=3e-5, m_sp=0.5, n_sp=1) | notebooks/tutorials/plotting/animate-landlab-output.ipynb | amandersillinois/landlab | mit |
Set model time and uplift parameters. | simulation_duration = 1e6
dt = 1000
n_timesteps = int(simulation_duration // dt) + 1
timesteps = np.linspace(0, simulation_duration, n_timesteps)
uplift_rate = 0.001
uplift_per_timestep = uplift_rate * dt | notebooks/tutorials/plotting/animate-landlab-output.ipynb | amandersillinois/landlab | mit |
Phase 1: Animate elevation change using imshow_grid
We first prepare the animation movie file. The model is run and the animation frames are captured together. | # Create a matplotlib figure for the animation.
fig, ax = plt.subplots(1, 1)
# Initiate an animation writer using the matplotlib module, `animation`.
# Set up to animate 6 frames per second (fps)
writer = animation.FFMpegWriter(fps=6)
# Setup the movie file.
writer.setup(fig, 'first_phase.mp4')
for t in timesteps:
# Uplift and erode.
z[mg.core_nodes] += uplift_per_timestep
fr.run_one_step()
sp.run_one_step(dt)
# Update the figure every 50,000 years.
if t % 5e4 == 0:
imshow_grid(mg, z, colorbar_label='elevation (m)')
plt.title('{:.0f} kiloyears'.format(t * 1e-3))
# Capture the state of `fig`.
writer.grab_frame()
# Remove the colorbar and clear the axis to reset the
# figure for the next animation timestep.
plt.gci().colorbar.remove()
ax.cla()
plt.close() | notebooks/tutorials/plotting/animate-landlab-output.ipynb | amandersillinois/landlab | mit |
Finish the animation
The writer.finish method completes the processing of the movie and then saves it. | writer.finish() | notebooks/tutorials/plotting/animate-landlab-output.ipynb | amandersillinois/landlab | mit |
This code loads the saved mp4 and presents it in a Jupyter Notebook. | HTML("""<div align="middle"> <video width="80%" controls loop>
<source src="first_phase.mp4" type="video/mp4"> </video></div>""") | notebooks/tutorials/plotting/animate-landlab-output.ipynb | amandersillinois/landlab | mit |
Phase 2: Animate multiple visualizations of elevation change over time
In the second model phase, we will create an animation similar to the one above, although with the following differences:
* The uplift rate is greater.
* The animation file format is gif.
* The figure has two subplots.
* The data of one of the subplots is updated rather than recreating the plot from scratch for each frame.
* The animation frame rate (fps) is lower.
Increase uplift rate prior to running the second phase of the model. | increased_uplift_per_timestep = 10 * uplift_per_timestep | notebooks/tutorials/plotting/animate-landlab-output.ipynb | amandersillinois/landlab | mit |
Run the second phase of the model
Here we layout the figure with a left and right subplot.
* The left subplot will be an animation of the grid similar to phase 1. We will recreate the image of this subplot for each animation frame.
* The right subplot will be a line plot of the mean elevation over time. We will layout the subplot elements (labels, limits) before running the model, and then extend the plot line at each animation frame.
axes[0] and axes[1] refer to the left and right subplot, respectively.
A gif formatted movie is created in this model phase using the software, ImageMagick. | # Create a matplotlib figure for the animation.
fig2, axes = plt.subplots(1, 2, figsize=(9, 3))
fig2.subplots_adjust(top=0.85, bottom=0.25, wspace=0.4)
# Layout right subplot.
time = 0
line, = axes[1].plot(time, z.mean(), 'k')
axes[1].set_title('mean elevation over time')
axes[1].set_xlim([0, 1000])
axes[1].set_ylim([0, 1000])
axes[1].set_xlabel('time (kyr)')
axes[1].set_ylabel('elevation (m)')
# Initiate a writer and set up a movie file.
writer = animation.ImageMagickWriter(fps=2)
writer.setup(fig2, 'second_phase.gif')
for t in timesteps:
# Uplift and erode.
z[mg.core_nodes] += increased_uplift_per_timestep
fr.run_one_step()
sp.run_one_step(dt)
# Update the figure every 50,000 years.
if t % 5e4 == 0:
fig2.sca(axes[0])
fig2.suptitle('{:.0f} kiloyears'.format(t * 1e-3))
# Plot the left subplot.
axes[0].set_title('topography')
imshow_grid(mg, z, colorbar_label='elevation (m)')
colorbar = plt.gci().colorbar
# Update the right subplot.
line.set_xdata(np.append(line.get_xdata(), t * 1e-3))
line.set_ydata(np.append(line.get_ydata(), z.mean()))
# Capture the state of `fig2`.
writer.grab_frame()
# Reset the figure for the next animation time step.
plt.cla()
colorbar.remove()
writer.finish()
plt.close() | notebooks/tutorials/plotting/animate-landlab-output.ipynb | amandersillinois/landlab | mit |
This code loads the saved gif and presents it in a Jupyter Notebook. | Image(filename='second_phase.gif') | notebooks/tutorials/plotting/animate-landlab-output.ipynb | amandersillinois/landlab | mit |
Build up sensor to pvoutput model | from datetime import datetime,timedelta, time
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from data_helper_functions import *
from IPython.display import display
pd.options.display.max_columns = 999
%matplotlib inline
#iterate over datetimes:
mytime = datetime(2014, 4, 1, 13)
times = make_time(mytime)
# Now that we can call data up over any datetime and we have a list of interested datetimes,
# we can finally construct an X matrix and y vector for regression.
sensor_filefolder = 'data/sensor_data/colorado6months/'
pvoutput_filefolder = 'data/pvoutput/pvoutput6months/'
X = []
y = []
for desired_datetime in times:
try: #something wrong with y on last day
desired_date = (desired_datetime - timedelta(hours=6)).date() #make sure correct date
desired_date = datetime.combine(desired_date, time.min) #get into datetime format
sensor_filename = find_file_from_date(desired_date, sensor_filefolder)
df_sensor = return_sensor_data(sensor_filename, sensor_filefolder).ix[:,-15:-1]
df_sensor[df_sensor.index == desired_datetime]
pvoutput_filename = find_file_from_date(desired_date, pvoutput_filefolder)
df_pvoutput = return_pvoutput_data(pvoutput_filename, pvoutput_filefolder)
y.append(df_pvoutput[df_pvoutput.index == desired_datetime].values[0][0])
X.append(df_sensor[df_sensor.index == desired_datetime].values[0])
except:
pass
X = np.array(X)
y = np.array(y)
print X.shape
print y.shape | .ipynb_checkpoints/all-datasets-together-checkpoint.ipynb | scottlittle/solar-sensors | apache-2.0 |
...finally ready to model!
Random Forest | from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=99)
from sklearn.ensemble import RandomForestRegressor
rfr = RandomForestRegressor(oob_score = True)
rfr.fit(X_train,y_train)
y_pred = rfr.predict(X_test)
rfr.score(X_test,y_test)
df_sensor.columns.values.shape
sorted_mask = np.argsort(rfr.feature_importances_)
for i in zip(df_sensor.columns.values,rfr.feature_importances_[sorted_mask])[::-1]:
print i | .ipynb_checkpoints/all-datasets-together-checkpoint.ipynb | scottlittle/solar-sensors | apache-2.0 |
Linear model | #now do a linear model and compare:
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(X_train,y_train)
lr.score(X_test,y_test)
sorted_mask = np.argsort(lr.coef_)
for i in zip(df_sensor.columns.values,lr.coef_[sorted_mask])[::-1]:
print i
df_sensor.ix[:,-15:-1].head() #selects photometer and AOD,
# useful in next iteration of using sensor data to fit | .ipynb_checkpoints/all-datasets-together-checkpoint.ipynb | scottlittle/solar-sensors | apache-2.0 |
When only keeping the photometer data, the random forest and the linear model perform pretty similarly. When I added all of the sensor instruments to the fit, rfr scored 0.87 and lr scored negative!
Also, I threw away the mysterious "Research 2" sensor, that was probably just a solar panel! I asked NREL what it is, so we'll see. If it turns out to be a solar panel, then I can do some feature engineering with the sensor data by simulating a solar panel!
Neural Net Exploration | import pandas as pd
import numpy as np
from sklearn.preprocessing import scale
from lasagne import layers
from lasagne.nonlinearities import softmax, rectify, sigmoid, linear, very_leaky_rectify, tanh
from lasagne.updates import nesterov_momentum, adagrad, momentum
from nolearn.lasagne import NeuralNet
import theano
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler
y = y.astype('float32')
x = X.astype('float32')
scaler = StandardScaler()
scaled_x = scaler.fit_transform(x)
x_train, x_test, y_train, y_test = train_test_split(scaled_x, y, test_size = 0.2, random_state = 12)
nn_regression = NeuralNet(layers=[('input', layers.InputLayer),
# ('hidden1', layers.DenseLayer),
# ('hidden2', layers.DenseLayer),
('output', layers.DenseLayer)
],
# Input Layer
input_shape=(None, x.shape[1]),
# hidden Layer
# hidden1_num_units=512,
# hidden1_nonlinearity=softmax,
# hidden Layer
# hidden2_num_units=128,
# hidden2_nonlinearity=linear,
# Output Layer
output_num_units=1,
output_nonlinearity=very_leaky_rectify,
# Optimization
update=nesterov_momentum,
update_learning_rate=0.03,#0.02
update_momentum=0.8,#0.8
max_epochs=600, #was 100
# Others
#eval_size=0.2,
regression=True,
verbose=0,
)
nn_regression.fit(x_train, y_train)
y_pred = nn_regression.predict(x_test)
nn_regression.score(x_test, y_test)
val = 11
print y_pred[val][0]
print y_test[val]
plt.plot(y_pred,'ro')
plt.plot(y_test,'go') | .ipynb_checkpoints/all-datasets-together-checkpoint.ipynb | scottlittle/solar-sensors | apache-2.0 |
Extra Trees! | from sklearn.ensemble import ExtraTreesRegressor
etr = ExtraTreesRegressor(oob_score=True, bootstrap=True,
n_jobs=-1, n_estimators=1000) #nj_obs uses all cores!
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=99)
etr.fit(X_train, y_train)
print etr.score(X_test,y_test)
print etr.oob_score_
y_pred = etr.predict(X_test)
from random import randint
val = randint(0,y_test.shape[0])
print y_pred[val]
print y_test[val]
print X.shape
print y.shape | .ipynb_checkpoints/all-datasets-together-checkpoint.ipynb | scottlittle/solar-sensors | apache-2.0 |
Save this thing and try it out on the simulated sensors! | from sklearn.externals import joblib
joblib.dump(etr, 'data/sensor-to-power-model/sensor-to-power-model.pkl')
np.savez_compressed('data/y.npz',y=y) #save y | .ipynb_checkpoints/all-datasets-together-checkpoint.ipynb | scottlittle/solar-sensors | apache-2.0 |
I have cloned the $\delta$a$\delta$i repository into '/home/claudius/Downloads/dadi' and have compiled the code. Now I need to add that directory to the PYTHONPATH variable: | sys.path.insert(0, '/home/claudius/Downloads/dadi')
sys.path | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
Now, I should be able to import $\delta$a$\delta$i | import dadi
dir(dadi)
import pylab
%matplotlib inline
x = pylab.linspace(0, 4*pylab.pi, 1000)
pylab.plot(x, pylab.sin(x), '-r')
%%sh
# this allows me to execute a shell command
ls | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
I have turned the 1D folded SFS's from realSFS into $\delta$d$\delta$i format by hand according to the description in section 3.1 of the manual. I have left out the masking line from the input file. | fs_ery = dadi.Spectrum.from_file('ERY.FOLDED.sfs.dadi_format')
fs_ery | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
$\delta$a$\delta$i is detecting that the spectrum is folded (as given in the input file), but it is also automatically masking the 0th and 18th count category. This is not a good behaviour. | # number of segregating sites
fs_ery.data[1:].sum() | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
Single population statistics
$\pi$ | fs_ery.pi() | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
I have next added a masking line to the input file, setting it to '1' for the first position, i. e. the 0-count category. | fs_ery = dadi.Spectrum.from_file('ERY.FOLDED.sfs.dadi_format', mask_corners=False) | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
$\delta$a$\delta$i is issuing the following message when executing the above command:
WARNING:Spectrum_mod:Creating Spectrum with data_folded = True, but mask is not True for all entries which are nonsensical for a folded Spectrum. | fs_ery | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
I do not understand this warning from $\delta$a$\delta$i. The 18-count category is sensical for a folded spectrum with even sample size, so should not be masked. Anyway, I do not understand why $\delta$a$\delta$i is so reluctant to keep all positions, including the non-variable one. | fs_ery.pi() | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
The function that returns $\pi$ produces the same output with or without the last count category masked?! I think that is because even if the last count class (966.62...) is masked, it is still included in the calculation of $\pi$. However, there is no obvious unmasking in the pi function. Strange!
There are (at least) two formulas that allow the calculation of $\pi$ from a folded sample allele frequency spectrum. One is given in Wakeley2009, p.16, equation (1.4):
$$
\pi = \frac{1}{n \choose 2} \sum_{i=1}^{n/2} i(n-i)\eta_{i}
$$
Here, $n$ is the number of sequences and $\eta_{i}$ is the SNP count in the i'th minor sample allele frequency class.
The other formula is on p. 45 in Gillespie "Population Genetics - A concise guide":
$$
\hat{\pi} = \frac{n}{n-1} \sum_{i=1}^{S_{n}} 2 \hat{p_{i}}(1-\hat{p_{i}})
$$
This is the formula that $\delta$a$\delta$i's pi function uses, with the modification that it multiplies each $\hat{p_{i}}$ by the count in the i'th class of the SFS, i. e. the sum is not over all SNP's but over all SNP frequency classes. | # Calcualting pi with the formula from Wakeley2009
n = 36 # 36 sequences sampled from 18 diploid individuals
pi_Wakeley = (sum( [i*(n-i)*fs_ery[i] for i in range(1, n/2+1)] ) * 2.0 / (n*(n-1)))/pylab.sum(fs_ery.data)
# note fs_ery.data gets the whole fs_ery list, including masked entries
pi_Wakeley | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
This is the value of $\pi_{site}$ that I calculated previously and included in the first draft of the thesis. | fs_ery.mask
fs_ery.data # gets all data, including the masked one
# Calculating pi with the formula from Gillespie:
n = 18
p = pylab.arange(0, n+1)/float(n)
p
# Calculating pi with the formula from Gillespie:
n / (n-1.0) * 2 * pylab.sum(fs_ery * p*(1-p)) | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
This is the same as the output of dadi's pi function on the same SFS. | # the sample size (n) that dadi stores in this spectrum object and uses as n in the pi function
fs_ery.sample_sizes[0]
# what is the total number of sites in the spectrum
pylab.sum(fs_ery.data) | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
So, 1.6 million sites went into the ery spectrum. | # pi per site
n / (n-1.0) * 2 * pylab.sum(fs_ery * p*(1-p)) / pylab.sum(fs_ery.data) | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
Apart from the incorrect small sample size correction by $\delta$a$\delta$i in case of folded spectra ($n$ refers to sampled sequences, not individuals), Gillespie's formula leads to a much higher estimate of $\pi_{site}$ than Wakeley's. Why is that? | # with correct small sample size correction
2 * n / (2* n-1.0) * 2 * pylab.sum(fs_ery * p*(1-p)) / pylab.sum(fs_ery.data)
# Calculating pi with the formula from Gillespie:
n = 18
p = pylab.arange(0, n+1)/float(n)
p = p/2 # with a folded spectrum, we are summing over minor allele freqs only
pi_Gillespie = 2*n / (2*n-1.0) * 2 * pylab.sum(fs_ery * p*(1-p)) / pylab.sum(fs_ery.data)
pi_Gillespie
pi_Wakeley - pi_Gillespie | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
As can be seen from the insignificant difference (must be due to numerical inaccuracies) between the $\pi_{Wakeley}$ and the $\pi_{Gillespie}$ estimates, they are equivalent with the calculation for folded spectra given above as well as the correct small sample size correction. Beware: $\delta$a$\delta$i does not handle folded spectra correctly.
It should be relatively easy to fix the pi function to work correctly with folded spectra. Care should be taken to also correctly handle uneven sample sizes. | fs_ery.folded | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
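A fix could implement Wakeley's equation (1.4) directly, summing only over minor allele frequency classes. The sketch below is plain numpy, not dadi's API, and the function name is hypothetical; it handles even and odd sample sizes alike because the sum stops at n // 2:

```python
import numpy as np

def pi_folded(eta, n):
    """Mean pairwise differences from a folded SFS.

    eta : array of SNP counts; eta[i] = number of sites whose minor
          allele is present in i copies (eta[0] is ignored)
    n   : number of sampled gene copies (2 * diploid individuals)
    Follows Wakeley (2009) eq. 1.4: summing only up to n // 2
    avoids double counting the folded frequency classes.
    """
    i = np.arange(1, n // 2 + 1)
    return np.sum(i * (n - i) * eta[i]) * 2.0 / (n * (n - 1))

# one site with a singleton among n = 4 copies: 1 * 3 = 3 of the
# 6 possible pairs differ, so pi for that site is 0.5
pi = pi_folded(np.array([0.0, 1.0, 0.0]), 4)
```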
I think for now it would be best to import unfolded spectra from realSFS and fold them if necessary in dadi. | fs_par = dadi.Spectrum.from_file('PAR.FOLDED.sfs.dadi_format')
pylab.plot(fs_ery, 'r', label='ery')
pylab.plot(fs_par, 'g', label='par')
pylab.legend() | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
ML estimate of $\theta$ from 1D folded spectrum
I am trying to fit eq. 4.21 of Wakeley2009 to the oberseved 1D folded spectra.
$$
E[\eta_i] = \theta \frac{\frac{1}{i} + \frac{1}{n-i}}{1+\delta_{i,n-i}} \qquad 1 \le i \le \big[n/2\big]
$$
Each frequency class, $\eta_i$, provides an estimate of $\theta$. However, I would like to find the value of $\theta$ that minimizes the deviation of the above equation from all observed counts $\eta_i$.
I am following the example given here: https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#example-of-solving-a-fitting-problem
$$
\frac{\partial E[\eta_i]}{\partial \theta} = \frac{\frac{1}{i} + \frac{1}{n-i}}{1+\delta_{i,n-i}} \qquad 1 \le i \le \big[n/2\big]
$$
I have just one parameter to optimize. | from scipy.optimize import least_squares
def model(theta, eta, n):
"""
theta: scaled population mutation rate parameter [scalar]
eta: the folded 1D spectrum, including 0-count cat. [list]
n: number of sampled gene copies, i. e. 2*num_ind [scalar]
returns a numpy array
"""
i = pylab.arange(1, eta.size, dtype=float) # float dtype avoids integer division in 1/i
delta = pylab.where(i == n-i, 1, 0)
return theta * (1/i + 1/(n-i)) / (1 + delta) # parentheses follow the model equation
?pylab.where
# test
i = pylab.arange(1, 19)
n = 36
print i == n-i
#
print pylab.where(i == n-i, 1, 0)
# get a theta estimate from pi:
theta = pi_Wakeley * fs_ery.data.sum()
print theta
#
print len(fs_ery)
#
model(theta, fs_ery, 36)
def fun(theta, eta, n):
"""
return residuals between model and data
"""
return model(theta, eta, n) - eta[1:]
def jac(theta, eta, n, test=False):
"""
creates a Jacobian matrix
"""
J = pylab.empty((eta.size-1, theta.size))
i = pylab.arange(1, eta.size, dtype=float)
delta = pylab.where(i == n-i, 1, 0)
num = 1/i + 1/(n-i)
den = 1 + delta
if test:
print i
print num
print den
J[:,0] = num / den
return J
# test
jac(theta, fs_ery, 36, test=True)
# starting value
theta0 = theta # pi_Wakeley from above
# sum over unmasked entries, i. e. without 0-count category, i. e. returns number of variable sites
fs_ery.sum()
# optimize
res = least_squares(fun, x0=theta0, jac=jac, bounds=(0,fs_ery.sum()),
kwargs={'eta': fs_ery, 'n': 36}, verbose=1)
res.success
?least_squares
print res.x
print theta
pylab.rcParams['figure.figsize'] = [12.0, 8.0]
import matplotlib.pyplot as plt
plt.rcParams['font.size'] = 14.0
i = range(1, len(fs_ery))
eta_model = model(res.x, eta=fs_ery, n=36) # get predicted values with optimal theta
plt.plot(i, fs_ery[1:], "bo", label="data from ery") # plot observed spectrum
ymax = max( fs_ery[1:].max(), eta_model.max() )
plt.axis([0, 19, 0, ymax*1.1]) # set axis range
plt.xlabel("minor allele frequency (i)")
plt.ylabel(r'$\eta_i$', fontsize='large', rotation='horizontal')
plt.title("folded SFS of ery")
plt.plot(i, eta_model, "go-",
label="\nneutral model"
+ "\n"
+ r'$\theta_{opt} = $' + str(round(res.x, 1))
) # plot model prediction with optimal theta
plt.legend() | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
The counts in each frequency class should be Poisson distributed with rate equal to $E[\eta_i]$ as given above. The lowest frequency class has the highest rate and therefore also the highest variance | #?plt.ylabel
#print plt.rcParams
fs_ery[1:].max()
#?pylab
os.getcwd()
%%sh
ls | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
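Under that Poisson assumption, rough 95% intervals for each frequency class follow from the fact that a Poisson variable has variance equal to its mean. A numpy-only sketch using the normal approximation, with made-up expected counts rather than the fitted values from above:

```python
import numpy as np

# hypothetical expected counts E[eta_i] for three frequency classes
expected = np.array([9000.0, 4500.0, 3000.0])

# normal approximation to the Poisson: mean mu, standard deviation sqrt(mu)
lo = expected - 1.96 * np.sqrt(expected)
hi = expected + 1.96 * np.sqrt(expected)
widths = hi - lo
# the lowest frequency class (largest rate) also has the widest interval
```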
The following function will take the file name of a file containing the flat 1D folded frequency spectrum of one population and plots it together with the best fitting neutral expectation. | def plot_folded_sfs(filename, n, pop = ''):
# read in spectrum from file
data = open(filename, 'r')
sfs = pylab.array( data.readline().split(), dtype=float )
data.close() # should close connection to file
#return sfs
# get starting value for theta from Watterson's theta
S = sfs[1:].sum()
T_total = sum([1.0/i for i in range(1, n)]) # one half of the expected total length of the genealogy
theta0 = S / T_total # see eq. 4.7 in Wakeley2009
# optimize
res = least_squares(fun, x0=theta0, jac=jac, bounds=(0, sfs.sum()),
kwargs={'eta': sfs, 'n': n}, verbose=1)
#print "Optimal theta per site is {0:.4f}".format(res.x[0]/sfs.sum())
#print res.x[0]/sfs.sum()
#return theta0, res
# plot
plt.rcParams['font.size'] = 14.0
i = range(1, len(sfs))
eta_model = model(res.x, eta=sfs, n=n) # get predicted values with optimal theta
plt.plot(i, sfs[1:], "rs", label="data of " + pop) # plot observed spectrum
ymax = max( sfs[1:].max(), eta_model.max() )
plt.axis([0, 19, 0, ymax*1.1]) # set axis range
plt.xlabel("minor allele frequency (i)")
plt.ylabel(r'$\eta_i$', fontsize='large', rotation='horizontal')
plt.title("folded SFS")
plt.text(5, 10000,
r"Optimal neutral $\theta$ per site is {0:.4f}".format(res.x[0]/sfs.sum()))
plt.plot(i, eta_model, "go-",
label="\nneutral model"
+ "\n"
+ r'$\theta_{opt} = $' + str(round(res.x, 1))
) # plot model prediction with optimal theta
plt.legend()
plot_folded_sfs('PAR.FOLDED.sfs', n=36, pop='par')
plot_folded_sfs('ERY.FOLDED.sfs', n=36, pop='ery') | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
Univariate function minimizers or 1D scalar minimisation
Since I only have one value to optimize, I can use a slightly simpler approach than used above: | from scipy.optimize import minimize_scalar
?minimize_scalar
# define cost function
def f(theta, eta, n):
"""
return sum of squared deviations between model and data
"""
return sum( (model(theta, eta, n) - eta[1:])**2 ) # see above for definition of the 'model' function | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
It would be interesting to know whether the cost function is convex or not. | theta = pylab.arange(0, fs_ery.data[1:].sum()) # specify range of theta
cost = [f(t, fs_ery.data, 36) for t in theta]
plt.plot(theta, cost, 'b-', label='ery')
plt.xlabel(r'$\theta$')
plt.ylabel('cost')
plt.title("cost function for ery")
plt.legend(loc='best')
?plt.legend | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
Within the specified bounds (the observed $\theta$, i. e. derived from the data, cannot lie outside these bounds), the cost function is convex. This is therefore an easy optimisation problem. See here for more details. | res = minimize_scalar(f, bounds = (0, fs_ery.data[1:].sum()), method = 'bounded', args = (fs_ery.data, 36))
res
# number of segregating sites
fs_par.data[1:].sum()
res = minimize_scalar(f, bounds = (0, fs_par.data[1:].sum()), method = 'bounded', args = (fs_par.data, 36))
res | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
The fitted values of $\theta$ are similar to the ones obtained above with the least_squares function. The estimates for ery deviate more than for par. | from sympy import *
x0 , x1 = symbols('x0 x1')
init_printing(use_unicode=True)
diff(0.5*(1-x0)**2 + (x1-x0**2)**2, x0)
diff(0.5*(1-x0)**2 + (x1-x0**2)**2, x1) | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
Wow! Sympy is a replacement for Mathematica. There is also Sage, which may include even more functionality. | from scipy.optimize import curve_fit | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
Curve_fit is another function that can be used for optimization. | ?curve_fit
def model(i, theta):
"""
i: independent variable, here minor SNP frequency classes
theta: scaled population mutation rate parameter [scalar]
returns a numpy array
"""
n = 2 * len(i) # number of sampled gene copies; len(i) is the number of minor allele frequency classes
delta = pylab.where(i == n-i, 1, 0)
return theta * (1.0/i + 1.0/(n-i)) / (1 + delta) # parentheses follow the model equation
i = pylab.arange(1, fs_ery.size)
popt, pcov = curve_fit(model, i, fs_ery.data[1:])
# optimal theta
print popt
perr = pylab.sqrt(pcov)
perr
print str(int(popt[0] - 1.96*perr[0])) + ' < ' + str(int(popt[0])) + ' < ' + str(int(popt[0] + 1.96*perr[0]))
popt, pcov = curve_fit(model, i, fs_par.data[1:])
perr = pylab.sqrt(pcov)
print str(int(popt[0] - 1.96*perr[0])) + ' < ' + str(int(popt[0])) + ' < ' + str(int(popt[0] + 1.96*perr[0])) | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
I am not sure whether these standard errors (perr) are correct. It may be that it is assumed that errors are normally distributed, which they are not exactly in this case. They should be close to Poisson distributed (see Fu1995), which should be fairly similar to normal with such high expected values as here.
If the standard errors are correct, then the large overlap of the 95% confidence intervals would indicate that the data do not provide significant support for a difference in $\theta$ between par and ery.
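Assuming the errors are roughly normal, the overlap argument can be made explicit with a two-sample z statistic for the difference of two independent estimates. A sketch with placeholder numbers, not the fitted values from above:

```python
import math

def z_diff(est1, se1, est2, se2):
    """z statistic for the difference between two independent estimates."""
    return (est1 - est2) / math.sqrt(se1 ** 2 + se2 ** 2)

# hypothetical theta estimates with their standard errors
z = z_diff(7000.0, 800.0, 6500.0, 900.0)
significant = abs(z) > 1.96  # 5% two-sided threshold
```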
Parametric bootstrap from the observed SFS | %pwd
% ll
! cat ERY.FOLDED.sfs.dadi_format
fs_ery = dadi.Spectrum.from_file('ERY.FOLDED.sfs.dadi_format', mask_corners=False)
fs_ery
fs_ery.pop_ids = ['ery']
# get a Poisson sample from the observed spectrum
fs_ery_param_boot = fs_ery.sample()
fs_ery_param_boot
fs_ery_param_boot.data
%psource fs_ery.sample | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
There must be a way to get more than one bootstrap sample per call. | fs_ery_param_boot = pylab.array([fs_ery.sample() for i in range(100)])
# get the first 3 boostrap samples from the doubleton class
fs_ery_param_boot[:3, 2] | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
It would be good to get the 5% and 95% quantiles from the bootstrap samples of each frequency class and add those intervals to the plot of the observed frequency spectrum and the fitted neutral spectrum. This would require finding a quantile function and finding out how to add lines to a plot with matplotlib.
It would also be good to use the predicted counts from the neutral model above with the fitted $\theta$ as parameters for the bootstrap with sample() and add 95% confidence intervals to the predicted neutral SFS. I have done this in R instead (see /data3/claudius/Big_Data/ANGSD/SFS/SFS.Rmd)
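The quantile part is numpy.percentile taken over the replicate axis; matplotlib's fill_between could then shade the band. A minimal sketch with simulated Poisson replicates standing in for the dadi bootstrap samples (the expected counts are made up):

```python
import numpy as np

rng = np.random.RandomState(1)

# 100 simulated parametric bootstrap replicates of an 18-class folded SFS
expected = np.linspace(9000.0, 500.0, 18)
boot = rng.poisson(expected, size=(100, 18))

# 5% and 95% quantiles for each frequency class
lo = np.percentile(boot, 5, axis=0)
hi = np.percentile(boot, 95, axis=0)
# e.g. plt.fill_between(range(1, 19), lo, hi, alpha=0.3) would shade the band
```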
Using unfolded spectra
I edited the 2D SFS created for estimating $F_{ST}$ by realSFS. I have convinced myself that realSFS outputs a flattened 2D matrix as expected by $\delta$a$\delta$i's Spectrum.from_file function (see section 3.1 of the manual with my comments). Note, that in the manual, "samples" stands for number of allele copies, so that the correct specification of dimensions for this 2D unfolded SFS of 18 diploid individuals in each of 2 populations is 37 x 37. | # read in the flattened 2D SFS
EryPar_unfolded_2dsfs = dadi.Spectrum.from_file('EryPar.unfolded.2dsfs.dadi_format', mask_corners=True)
# check dimension
len(EryPar_unfolded_2dsfs[0,])
EryPar_unfolded_2dsfs.sample_sizes
# add population labels
EryPar_unfolded_2dsfs.pop_ids = ["ery", "par"]
EryPar_unfolded_2dsfs.pop_ids | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
Marginalizing
$\delta$a$\delta$i offers a function to get the marginal spectra from multidimensional spectra. Note, that this marginalisation is nothing fancy. In R it would be taking either the rowSums or the colSums of the matrix. | # marginalise over par to get 1D SFS for ery
fs_ery = EryPar_unfolded_2dsfs.marginalize([1])
# note the argument is an array with dimensions, one can marginalise over more than one dimension at the same time,
# but that is only interesting for 3-dimensional spectra, which I don't have here
fs_ery
# marginalise over ery to get 1D SFS for par
fs_par = EryPar_unfolded_2dsfs.marginalize([0])
fs_par | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
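In plain numpy these marginals are just axis sums of the 2D array, as the rowSums/colSums analogy below suggests (dadi's marginalize additionally handles the mask and folding). A toy sketch:

```python
import numpy as np

# toy 3x3 joint "spectrum": rows = pop0 allele counts, cols = pop1
sfs2d = np.array([[0., 2., 1.],
                  [3., 5., 2.],
                  [1., 4., 0.]])

fs_pop0 = sfs2d.sum(axis=1)  # marginal for pop0: row sums
fs_pop1 = sfs2d.sum(axis=0)  # marginal for pop1: column sums
# both marginals conserve the total number of sites
```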
Note, that these marginalised 1D SFS's are not identical to the 1D SFS estimated directly with realSFS. This is because, for the estimation of the 2D SFS, realSFS has only taken sites that had data from at least 9 individuals in each population (see assembly.sh, lines 1423 onwards).
The SFS's of par and ery had conspicuous shape differences. It would therefore be good to plot them to see, whether the above commands have done the correct thing. | # plot 1D spectra for each population
pylab.plot(fs_par, 'g', label="par")
pylab.plot(fs_ery, 'r', label="ery")
pylab.legend() | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
These marginal unfolded spectra look similar in shape to the 1D folded spectra of each subspecies (see above). | fs_ery.pi() / pylab.sum(fs_ery.data)
fs_ery.data
n = 36 # 36 sequences sampled from 18 diploid individuals
pi_Wakeley = (sum( [i*(n-i)*fs_ery[i] for i in range(1, n)] ) * 2.0 / (n*(n-1)))
pi_Wakeley = pi_Wakeley / pylab.sum(fs_ery.data)
pi_Wakeley | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
$\delta$a$\delta$i's pi function seems to calculate the correct value of $\pi$ for this unfolded spectrum. However, it is worrying that $\pi$ from this marginal spectrum is about 20 times larger than the one calculated from the directly estimated 1D folded spectrum (see above the $\pi$ calculated from the folded 1D spectrum). | fs_par.pi() / pylab.sum(fs_par.data)
pylab.sum(fs_par.data)
pylab.sum(EryPar_unfolded_2dsfs.data) | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
<font color="red">The sum over the marginalised 1D spectra should be the same as the sum over the 2D spectrum!</font> | # from dadi's marginalise function:
fs_ery.data
sfs2d = EryPar_unfolded_2dsfs.copy()
# this should get the marginal spectrum for ery
ery_mar = [pylab.sum(sfs2d.data[i]) for i in range(0, len(sfs2d))]
ery_mar
# this should get the marginal spectrum for ery and then take the sum over it
sum([pylab.sum(sfs2d.data[i]) for i in range(0, len(sfs2d))])
# look what happens if I include masking
sum([pylab.sum(sfs2d[i]) for i in range(0, len(sfs2d))])
fs_ery.data - ery_mar | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
So, during the marginalisation the masking of data in the fixed categories (0, 36) is the problem, producing incorrectly marginalised counts in those masked categories. This is shown in the following: | sfs2d[0]
pylab.sum(sfs2d[0])
# from dadi's marginalise function:
fs_ery.data
# dividing by the correct number of sites to get pi per site:
fs_ery.pi() / pylab.sum(sfs2d.data) | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
This is very close to the estimate of $\pi$ derived from the folded 1D spectrum of ery! (see above) | fs_par.pi() / pylab.sum(sfs2d.data) | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
This is also nicely close to the estimate of $\pi_{site}$ of par from its folded 1D spectrum.
Tajima's D | fs_ery.Watterson_theta() / pylab.sum(sfs2d.data)
fs_ery.Tajima_D()
fs_par.Tajima_D() | Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb | claudiuskerth/PhDthesis | mit |
Now, I am calculating Tajima's D from the ery marginal spectrum by hand in order to check whether $\delta$a$\delta$i is doing the right thing. | n = 36
pi_Wakeley = (sum( [i*(n-i)*fs_ery.data[i] for i in range(1, n+1)] )
* 2.0 / (n*(n-1)))
#/ pylab.sum(sfs2d.data)
pi_Wakeley
# number of segregating sites
# this sums over all unmasked positions in the array
pylab.sum(fs_ery)
fs_ery.S()
S = pylab.sum(fs_ery)
theta_Watterson = S / pylab.sum(1.0 / (pylab.arange(1, n)))
theta_Watterson
# normalizing constant, see page 45 in Gillespie
a1 = pylab.sum(1.0 / pylab.arange(1, n))
#print a1
a2 = pylab.sum(1.0 / pylab.arange(1, n)**2.0)
#print a2
b1 = (n+1.0)/(3.0*(n-1))
#print b1
b2 = 2.0*(n**2 + n + 3)/(9.0*n*(n-1))
#print b2
c1 = b1 - (1.0/a1)
#print c1
c2 = b2 - (n+2.0)/(a1*n) + a2/a1**2
#print c2
C = ((c1/a1)*S + (c2/(a1**2.0 + a2))*S*(S-1))
C = C**(1/2.0)
ery_Tajimas_D = (pi_Wakeley - theta_Watterson) / C
print '{0:.6f}'.format(ery_Tajimas_D)
ery_Tajimas_D - fs_ery.Tajima_D()
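The by-hand calculation above can be collected into one self-contained function (numpy only; a sketch, not dadi's implementation). A handy sanity check: for an unfolded spectrum with $\zeta_i \propto 1/i$, the neutral expectation, $\pi$ equals $\theta_W$ and D is exactly 0.

```python
import numpy as np

def tajimas_d(sfs):
    """Tajima's D from an unfolded 1D SFS (index = derived allele count)."""
    n = len(sfs) - 1                           # haploid sample size
    i = np.arange(1, n)                        # segregating classes 1..n-1
    S = float(np.sum(sfs[1:n]))                # number of segregating sites
    pi = np.sum(i * (n - i) * sfs[1:n]) * 2.0 / (n * (n - 1))
    a1 = np.sum(1.0 / i)
    a2 = np.sum(1.0 / i**2)
    b1 = (n + 1.0) / (3.0 * (n - 1))
    b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2.0) / (a1 * n) + a2 / a1**2
    C = np.sqrt((c1 / a1) * S + (c2 / (a1**2 + a2)) * S * (S - 1))
    return (pi - S / a1) / C

# neutral-shaped spectrum: zeta_i = theta/i  =>  pi == theta_W  =>  D == 0
neutral = np.array([0.0] + [60.0 / i for i in range(1, 36)] + [0.0])
print(round(tajimas_d(neutral), 6))
```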
$\delta$a$\delta$i seems to do the right thing. Note that the estimate of Tajima's D from this marginal spectrum of ery is slightly different from the estimate derived from the folded 1D spectrum of ery (see /data3/claudius/Big_Data/ANGSD/SFS/SFS.Rmd). The folded 1D spectrum resulted in a Tajima's D estimate of $\sim$0.05, i. e. a difference of almost 0.1. Again, the 2D spectrum is based only on those sites for which there were at least 9 individuals with data in both populations, whereas the 1D folded spectrum of ery included all sites for which there were 9 ery individuals with data (see line 1571 onwards in assembly.sh).

fs_par.Tajima_D()
My estimate from the folded 1D spectrum of par was -0.6142268 (see /data3/claudius/Big_Data/ANGSD/SFS/SFS.Rmd).
Multi-population statistics

EryPar_unfolded_2dsfs.S()
The 2D spectrum contains counts from 60k sites that are variable in par or ery or both.

EryPar_unfolded_2dsfs.Fst()
This estimate of $F_{ST}$ according to Weir and Cockerham (1984) is well below the estimate of $\sim$0.3 from ANGSD according to Bhatia/Hudson (2013). Note, however, that this estimate showed a positive bias of around 0.025 in 100 permutations of population labels of individuals. Taking the positive bias into account, both estimates of $F_{ST}$ are quite similar.
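The permutation comparison can be made explicit with a generic one-sided p-value helper. The `null` values below are simulated with the $\sim$0.025 bias mentioned above, and 0.13 is a made-up observed $F_{ST}$; with dadi one would instead collect `sfs.scramble_pop_ids().Fst()` a hundred times.

```python
import numpy as np

def permutation_pvalue(observed, null_draws):
    """One-sided p-value: how often a label-scrambled Fst >= the observed one."""
    null_draws = np.asarray(null_draws)
    # +1 in numerator and denominator so the p-value is never exactly 0
    return (1 + np.sum(null_draws >= observed)) / (1.0 + len(null_draws))

rng = np.random.default_rng(1)
null = rng.normal(loc=0.025, scale=0.005, size=100)  # illustrative null draws
p = permutation_pvalue(0.13, null)
print(p)
```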
The following function scramble_pop_ids should generate a 2D SFS with counts as if individuals were assigned to populations randomly. Theoretically, the $F_{ST}$ calculated from this SFS should be 0.

%psource EryPar_unfolded_2dsfs.scramble_pop_ids
# plot the scrambled 2D SFS
dadi.Plotting.plot_single_2d_sfs(EryPar_unfolded_2dsfs.scramble_pop_ids(), vmin=1)
So, this is how the 2D SFS would look if ery and par were not genetically differentiated.

# get Fst for scrambled SFS
EryPar_unfolded_2dsfs.scramble_pop_ids().Fst()
The $F_{ST}$ from the scrambled SFS is much lower than the $F_{ST}$ of the observed SFS. That should mean that there is significant population structure. However, the $F_{ST}$ from the scrambled SFS is not 0. I don't know why that is.

# folding
EryPar_folded_2dsfs = EryPar_unfolded_2dsfs.fold()
EryPar_folded_2dsfs
EryPar_folded_2dsfs.mask
Plotting

dadi.Plotting.plot_single_2d_sfs(EryPar_unfolded_2dsfs, vmin=1)
dadi.Plotting.plot_single_2d_sfs(EryPar_folded_2dsfs, vmin=1)
The folded 2D spectrum is not a minor allele frequency spectrum as are the 1D folded spectra of ery and par. This is because an allele that is minor in one population can be the major allele in the other. What is not counted are the alleles that are major in both populations, i. e. the upper right corner.
For the 2D spectrum to make sense it is crucial that allele frequencies are polarised the same way in both populations, either with an outgroup sequence or arbitrarily with respect to the reference sequence (as I did here).
How to fold a 1D spectrum

# unfolded spectrum from marginalisation of 2D unfolded spectrum
fs_ery
len(fs_ery)
fs_ery.fold()
Let's use the formula (1.2) from Wakeley2009 to fold the 1D spectrum manually:
$$
\eta_{i} = \frac{\zeta_{i} + \zeta_{n-i}}{1 + \delta_{i, n-i}} \qquad 1 \le i \le [n/2]
$$
$n$ is the number of gene copies sampled, i. e. the haploid sample size. $[n/2]$ is the largest integer less than or equal to $n/2$ (to handle uneven sample sizes). $\zeta_{i}$ are the unfolded frequencies and $\delta_{i, n-i}$ is Kronecker's $\delta$, which is 1 if $i = n-i$ and zero otherwise (to avoid counting the unfolded $n/2$ frequency class twice with even sample sizes).

fs_ery_folded = fs_ery.copy() # make a copy of the UNfolded spectrum
n = len(fs_ery)-1
for i in range(len(fs_ery)):
    fs_ery_folded[i] += fs_ery[n-i]
    if i == n/2.0:
        fs_ery_folded[i] /= 2
fs_ery_folded[0:19]
isinstance(fs_ery_folded, pylab.ndarray)
mask = [True]
mask.extend([False] * 18)
mask.extend([True] * 18)
print mask
print sum(mask)
mask = [True] * 37
for i in range(len(mask)):
    if i > 0 and i < 19:
        mask[i] = False
print mask
print sum(mask)
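The same mask can be built in one vectorised numpy expression, equivalent to the loop above:

```python
import numpy as np

i = np.arange(37)
mask = (i == 0) | (i > 18)   # mask class 0 and the major-allele classes 19..36
print(mask.sum())            # 19 entries masked, 18 kept
```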
Here is how to flatten a list of lists with a list comprehension:

mask = [[True], [False] * 18, [True] * 18]
print mask
print [elem for a in mask for elem in a]
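An equivalent flatten using the standard library, for comparison:

```python
from itertools import chain

mask = [[True], [False] * 18, [True] * 18]
flat = list(chain.from_iterable(mask))
print(len(flat), sum(flat))   # 37 entries, 19 of them True
```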
Set new mask for the folded spectrum:

fs_ery_folded.mask = mask
fs_ery_folded.folded = True
fs_ery_folded - fs_ery.fold()
The fold() function works correctly for 1D spectra, at least. How about 2D spectra?
$$
\eta_{i,j} = \frac{\zeta_{i,j} + \zeta_{n-i, m-j}}{1 + \delta_{i, n-i; j, m-j}}
\qquad 1 \le i+j \le \Big[\frac{n+m}{2}\Big]
$$

EryPar_unfolded_2dsfs.sample_sizes
EryPar_unfolded_2dsfs._total_per_entry()
# copy the unfolded 2D spectrum for folding
import copy
sfs2d_folded = copy.deepcopy(EryPar_unfolded_2dsfs)
n = len(sfs2d_folded)-1
m = len(sfs2d_folded[0])-1
# read from the pristine spectrum while writing into the copy; reading from
# sfs2d_folded itself would add already-updated cells in a second time
for i in range(n+1):
    for j in range(m+1):
        sfs2d_folded[i,j] = EryPar_unfolded_2dsfs[i,j] + EryPar_unfolded_2dsfs[n-i, m-j]
        # entries with i+j == (n+m)/2 fold onto other entries in the same
        # anti-diagonal, so each SNP there would be counted twice; halve them
        if i + j == (n+m)/2.0:
            sfs2d_folded[i,j] /= 2
mask = sfs2d_folded._total_per_entry() > (n+m)/2
mask
sfs2d_folded.mask = mask
sfs2d_folded.folded = True  # note: the attribute is 'folded', not 'fold'
dadi.Plotting.plot_single_2d_sfs(sfs2d_folded, vmin=1)
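On a toy 3x3 spectrum the same fold can be checked for count conservation: every SNP should end up exactly once in the kept lower-left triangle (pure numpy; the values are made up):

```python
import numpy as np

n = m = 2                               # two toy samples of 2 gene copies each
unfolded = np.arange(9.0).reshape(3, 3)
folded = np.zeros_like(unfolded)
for i in range(n + 1):
    for j in range(m + 1):
        folded[i, j] = unfolded[i, j] + unfolded[n - i, m - j]
        if i + j == (n + m) / 2.0:      # anti-diagonal entries fold onto each other
            folded[i, j] /= 2

keep = np.add.outer(np.arange(n + 1), np.arange(m + 1)) <= (n + m) / 2.0
print(folded[keep].sum(), unfolded.sum())   # both 36.0
```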
I am going to go through every step in the fold function of dadi:

# copy the unfolded 2D spectrum for folding
import copy
sfs2d_unfolded = copy.deepcopy(EryPar_unfolded_2dsfs)
total_samples = pylab.sum(sfs2d_unfolded.sample_sizes)
total_samples
total_per_entry = dadi.Spectrum(sfs2d_unfolded._total_per_entry(), pop_ids=['ery', 'par'])
#total_per_entry.pop_ids = ['ery', 'par']
dadi.Plotting.plot_single_2d_sfs(total_per_entry, vmin=1)
total_per_entry = sfs2d_unfolded._total_per_entry()
total_per_entry
where_folded_out = total_per_entry > total_samples/2
where_folded_out
original_mask = sfs2d_unfolded.mask
original_mask
pylab.logical_or([True, False, True], [False, False, True])
# get the number of elements along each axis
sfs2d_unfolded.shape
[slice(None, None, -1) for i in sfs2d_unfolded.shape]
matrix = pylab.array([
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]
])
reverse_slice = [slice(None, None, -1) for i in matrix.shape]
reverse_slice
matrix[tuple(reverse_slice)]  # newer NumPy requires the slices as a tuple
matrix[::-1,::-1]
With this variable-length list of slice objects, one can generalise the reversal to arrays of any dimension.

final_mask = pylab.logical_or(original_mask, dadi.Numerics.reverse_array(original_mask))
final_mask
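A minimal sketch of what `dadi.Numerics.reverse_array` presumably does with that list of slices (an assumption based on the usage above; on modern NumPy the slices must be passed as a tuple):

```python
import numpy as np

def reverse_array(arr):
    """Reverse an array along every axis, whatever its dimensionality."""
    return arr[tuple(slice(None, None, -1) for _ in arr.shape)]

matrix = np.arange(12).reshape(3, 4)
print((reverse_array(matrix) == matrix[::-1, ::-1]).all())   # True
```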
Here, folding doesn't mask new cells.

?pylab.where
pylab.where(matrix < 6, matrix, 0)
# this takes the part of the spectrum that is non-sensical if the derived allele is not known
# and sets the rest to 0
print pylab.where(where_folded_out, sfs2d_unfolded, 0)
# let's plot the bit of the spectrum that we are going to fold onto the rest:
dadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(pylab.where(where_folded_out, sfs2d_unfolded, 0)), vmin=1)
# now let's reverse this 2D array, i. e. last row first and last element of each row first:
_reversed = dadi.Numerics.reverse_array(pylab.where(where_folded_out, sfs2d_unfolded, 0))
_reversed
dadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(_reversed), vmin=1)
The transformation we have applied to the upper-right triangular 2D array above should be identical to projecting it across a vertical center line (creating an upper-left triangular matrix) and then projecting it across a horizontal center line (creating the final lower-left triangular matrix). Note that this is not the same as mirroring the upper-right triangular 2D array across the 36-36 diagonal!

# This shall now be added to the original unfolded 2D spectrum.
sfs2d_folded = pylab.ma.masked_array(sfs2d_unfolded.data + _reversed)
dadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(sfs2d_folded), vmin=1)
sfs2d_folded.data
sfs2d_folded.data[where_folded_out] = 0
sfs2d_folded.data
dadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(sfs2d_folded), vmin=1)
sfs2d_folded.shape
where_ambiguous = (total_per_entry == total_samples/2.0)
where_ambiguous
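The earlier note — that this reversal equals two successive mirror projections and is not a mirroring across the main diagonal — is easy to verify on a small array:

```python
import numpy as np

a = np.arange(9).reshape(3, 3)
rev = a[::-1, ::-1]

# reversing both axes == flip up-down, then flip left-right
print((rev == np.fliplr(np.flipud(a))).all())   # True
# ... and it is NOT the mirror across the main diagonal (the transpose)
print(np.array_equal(rev, a.T))                  # False
```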
SNP's with joint frequencies in the True cells are counted twice at the moment, due to the folding and the fact that the sample sizes are even.

# this extracts the diagonal values from the UNfolded spectrum and sets the rest to 0
ambiguous = pylab.where(where_ambiguous, sfs2d_unfolded, 0)
dadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(ambiguous), vmin=1)
These are the values in the diagonal before folding.

reversed_ambiguous = dadi.Numerics.reverse_array(ambiguous)
dadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(reversed_ambiguous), vmin=1)
These are the values that got added to the diagonal during folding. Comparing with the previous plot, one can see, for instance, that the value in the (0, 36) class got added to the value in the (36, 0) class and vice versa. The two frequency classes are equivalent, since it is arbitrary which allele we call minor in the total sample (of 72 gene copies). These SNP's are therefore counted twice.

a = -1.0*ambiguous + 0.5*ambiguous + 0.5*reversed_ambiguous
b = -0.5*ambiguous + 0.5*reversed_ambiguous
a == b
sfs2d_folded += -0.5*ambiguous + 0.5*reversed_ambiguous
final_mask = pylab.logical_or(final_mask, where_folded_out)
final_mask
sfs2d_folded = dadi.Spectrum(sfs2d_folded, mask=final_mask, data_folded=True, pop_ids=['ery', 'par'])
pylab.rcParams['figure.figsize'] = [12.0, 8.0]
dadi.Plotting.plot_single_2d_sfs(sfs2d_folded, vmin=1)