You can apply fancier functions than .sum(), e.g. let's compute the variance of each group: | a.reshape(4, 3).var(axis=-1) | taking_numpy_in_stride/Taking NumPy In Stride - Student Version.ipynb | jaimefrio/pydatabcn2017 | unlicense |
Exercise 6
Your turn to do a fancier reshaping: we will compute the average of a 2D array over non-overlapping rectangular patches:
Choose two small numbers m and n, e.g. 3 and 4.
Create a 2D array, with number of rows a multiple of one of those numbers, and number of columns a multiple of the other, e.g. 15 x 24.
Reshape and aggregate to create a 2D array holding the sums over non-overlapping m x n tiles, e.g. a 5 x 6 array.
Hint: .sum() can take a tuple of integers as axis=, so you can do the whole thing in a single reshape from 2D to 4D, then aggregate back to 2D. If you find this confusing, doing two aggregations will also work. | # Your code goes here | taking_numpy_in_stride/Taking NumPy In Stride - Student Version.ipynb | jaimefrio/pydatabcn2017 | unlicense |
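One possible solution sketch for this exercise (the sizes and variable names are illustrative, using m = 3, n = 4 on a 15 x 24 array):

```python
import numpy as np

m, n = 3, 4
a = np.arange(15 * 24).reshape(15, 24)

# Split each axis into (number of tiles, tile size), giving a 4D array,
# then aggregate over the two tile-size axes in a single call.
tiled = a.reshape(15 // m, m, 24 // n, n)
patch_means = tiled.mean(axis=(1, 3))
print(patch_means.shape)  # (5, 6)
```

Replacing .mean with .sum gives the tile sums instead; the reshape is the same either way.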
Rearranging dimensions
Once we have a multidimensional array, rearranging the order of its dimensions is as simple as rearranging its .shape and .strides attributes. You could do this with np.ndarray, but it would be a pain. NumPy has a bunch of functions for doing that, but they are all watered-down versions of np.transpose, which takes a tuple with the desired permutation of the array dimensions.
Exercise 7
Write a function roll_axis_to_end that takes an array and an axis, and makes that axis the last dimension of the array.
For extra credit, rewrite your function using np.ndarray. | # Your code goes here | taking_numpy_in_stride/Taking NumPy In Stride - Student Version.ipynb | jaimefrio/pydatabcn2017 | unlicense |
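One way to write it with np.transpose (a sketch; roll_axis_to_end is the name the exercise asks for):

```python
import numpy as np

def roll_axis_to_end(arr, axis):
    # Keep every other axis in its original order and append
    # the chosen axis at the end of the permutation.
    axes = [i for i in range(arr.ndim) if i != axis] + [axis]
    return np.transpose(arr, axes)

a = np.empty((2, 3, 4))
print(roll_axis_to_end(a, 0).shape)  # (3, 4, 2)
```

np.moveaxis(arr, axis, -1) achieves the same result in one call.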
Playing with strides
For the rest of the workshop we are going to do some fancy tricks with strides to create interesting views of an existing array.
Exercise 8
Create a function to extract the diagonal of a 2-D array, using the np.ndarray constructor. | # Your code goes here | taking_numpy_in_stride/Taking NumPy In Stride - Student Version.ipynb | jaimefrio/pydatabcn2017 | unlicense |
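A sketch using the np.ndarray constructor (assuming a C-contiguous input; corner cases such as negative strides are ignored here):

```python
import numpy as np

def diagonal_view(a):
    # Stepping one row plus one column at a time walks the diagonal,
    # so the single stride is the sum of the two original strides.
    n = min(a.shape)
    return np.ndarray(shape=(n,), dtype=a.dtype, buffer=a,
                      strides=(a.strides[0] + a.strides[1],))

a = np.arange(12).reshape(3, 4)
print(diagonal_view(a))  # the diagonal: 0, 5, 10
```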
Exercise 9
Something very interesting happens when we set a stride to zero. Give that idea some thought and then:
Create two functions, stacked_column_vector and stacked_row_vector, that take a 1D array (the vector) and an integer n, and create a 2D view of the array that stacks n copies of the vector, either as columns or rows of the view.
Use these functions to create an outer_product function that takes two 1D vectors and computes their outer product. | # Your code goes here | taking_numpy_in_stride/Taking NumPy In Stride - Student Version.ipynb | jaimefrio/pydatabcn2017 | unlicense |
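Here is one way the zero-stride idea can play out (a sketch; the function names follow the exercise statement):

```python
import numpy as np

def stacked_row_vector(v, n):
    # A zero stride along axis 0 means every row re-reads the same memory.
    return np.ndarray(shape=(n, len(v)), dtype=v.dtype, buffer=v,
                      strides=(0, v.strides[0]))

def stacked_column_vector(v, n):
    return np.ndarray(shape=(len(v), n), dtype=v.dtype, buffer=v,
                      strides=(v.strides[0], 0))

def outer_product(u, v):
    # Both views already share a shape, so a plain elementwise
    # multiplication produces the outer product.
    return stacked_column_vector(u, len(v)) * stacked_row_vector(v, len(u))

u, v = np.arange(3), np.arange(4)
print(outer_product(u, v))
```

Neither stacked view allocates new memory; only the final multiplication does.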
Exercise 10
In the last exercise we used zero strides to reuse an item more than once in the resulting view. Let's try to build on that idea:
Write a function that takes a 1D array and a window integer value, and creates a 2D view of the array, with each row being a view through a sliding window of size window into the original array.
Hint: There are len(array) - window + 1 such "views through a window".
Another hint: Here's a small example expected run:
>>> sliding_window(np.arange(4), 2)
[[0, 1],
[1, 2],
[2, 3]] | # Your code goes here | taking_numpy_in_stride/Taking NumPy In Stride - Student Version.ipynb | jaimefrio/pydatabcn2017 | unlicense |
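A sketch of one solution, again via the np.ndarray constructor (there is no bounds checking, so a window larger than the array would read garbage):

```python
import numpy as np

def sliding_window(a, window):
    n = len(a) - window + 1
    # Consecutive rows start one element apart, so neighbouring
    # windows share items -- this is a view, not a copy.
    return np.ndarray(shape=(n, window), dtype=a.dtype, buffer=a,
                      strides=(a.strides[0], a.strides[0]))

print(sliding_window(np.arange(4), 2))
# [[0 1]
#  [1 2]
#  [2 3]]
```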
Parting pro tip
NumPy's worst kept secret is the existence of a mostly undocumented, mostly hidden, as_strided function, that makes creating views with funny strides much easier (and also much more dangerous!) than using np.ndarray. Here's the available documentation: | from numpy.lib.stride_tricks import as_strided
np.info(as_strided) | taking_numpy_in_stride/Taking NumPy In Stride - Student Version.ipynb | jaimefrio/pydatabcn2017 | unlicense |
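With as_strided, the sliding-window view from Exercise 10 collapses to a couple of lines (the same caveat applies: nothing stops you from striding out of bounds):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.arange(4)
window = 2
view = as_strided(a, shape=(len(a) - window + 1, window),
                  strides=(a.strides[0], a.strides[0]))
print(view.tolist())  # [[0, 1], [1, 2], [2, 3]]
```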
We're going to be building a model that recognizes these digits as 5, 0, and 4.
Imports and input data
We'll proceed in steps, beginning with importing and inspecting the MNIST data. This doesn't have anything to do with TensorFlow in particular -- we're just downloading the data archive. | import os
from six.moves.urllib.request import urlretrieve
SOURCE_URL = 'http://yann.lecun.com/exdb/mnist/'
WORK_DIRECTORY = "/tmp/mnist-data"
def maybe_download(filename):
"""A helper to download the data files if not present."""
if not os.path.exists(WORK_DIRECTORY):
os.mkdir(WORK_DIRECTORY)
filepath = os.path.join(WORK_DIRECTORY, filename)
if not os.path.exists(filepath):
filepath, _ = urlretrieve(SOURCE_URL + filename, filepath)
statinfo = os.stat(filepath)
print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')
else:
print('Already downloaded', filename)
return filepath
train_data_filename = maybe_download('train-images-idx3-ubyte.gz')
train_labels_filename = maybe_download('train-labels-idx1-ubyte.gz')
test_data_filename = maybe_download('t10k-images-idx3-ubyte.gz')
test_labels_filename = maybe_download('t10k-labels-idx1-ubyte.gz') | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
Working with the images
Now we have the files, but the format requires a bit of pre-processing before we can work with it. The data is gzipped, requiring us to decompress it. And, each of the images is grayscale-encoded with values from [0, 255]; we'll normalize these to [-0.5, 0.5].
Let's try to unpack the data using the documented format:
[offset] [type] [value] [description]
0000 32 bit integer 0x00000803(2051) magic number
0004 32 bit integer 60000 number of images
0008 32 bit integer 28 number of rows
0012 32 bit integer 28 number of columns
0016 unsigned byte ?? pixel
0017 unsigned byte ?? pixel
........
xxxx unsigned byte ?? pixel
Pixels are organized row-wise. Pixel values are 0 to 255. 0 means background (white), 255 means foreground (black).
We'll start by reading the first image from the test data as a sanity check. | import gzip, binascii, struct, numpy
import matplotlib.pyplot as plt
with gzip.open(test_data_filename) as f:
# Print the header fields.
for field in ['magic number', 'image count', 'rows', 'columns']:
# struct.unpack reads the binary data provided by f.read.
# The format string '>i' decodes a big-endian integer, which
# is the encoding of the data.
print(field, struct.unpack('>i', f.read(4))[0])
# Read the first 28x28 set of pixel values.
# Each pixel is one byte, [0, 255], a uint8.
buf = f.read(28 * 28)
image = numpy.frombuffer(buf, dtype=numpy.uint8)
# Print the first few values of image.
print('First 10 pixels:', image[:10]) | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
The first 10 pixels are all 0 values. Not very interesting, but also unsurprising. We'd expect most of the pixel values to be the background color, 0.
We could print all 28 * 28 values, but what we really need to do to make sure we're reading our data properly is look at an image. | %matplotlib inline
# We'll show the image and its pixel value histogram side-by-side.
_, (ax1, ax2) = plt.subplots(1, 2)
# To interpret the values as a 28x28 image, we need to reshape
# the numpy array, which is one dimensional.
ax1.imshow(image.reshape(28, 28), cmap=plt.cm.Greys);
ax2.hist(image, bins=20, range=[0,255]); | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
The large number of 0 values corresponds to the background of the image, another large mass of values at 255 is black, and there is a mix of grayscale transition values in between.
Both the image and histogram look sensible. But, it's good practice when training image models to normalize values to be centered around 0.
We'll do that next. The normalization code is fairly short, and it may be tempting to assume we haven't made mistakes, but we'll double-check by looking at the rendered input and histogram again. Malformed inputs are a surprisingly common source of errors when developing new models. | # Let's convert the uint8 image to 32 bit floats and rescale
# the values to be centered around 0, between [-0.5, 0.5].
#
# We again plot the image and histogram to check that we
# haven't mangled the data.
scaled = image.astype(numpy.float32)
scaled = (scaled - (255 / 2.0)) / 255
_, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(scaled.reshape(28, 28), cmap=plt.cm.Greys);
ax2.hist(scaled, bins=20, range=[-0.5, 0.5]); | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
Great -- we've retained the correct image data while properly rescaling to the range [-0.5, 0.5].
Reading the labels
Let's next unpack the test label data. The format here is similar: a magic number followed by a count followed by the labels as uint8 values. In more detail:
[offset] [type] [value] [description]
0000 32 bit integer 0x00000801(2049) magic number (MSB first)
0004 32 bit integer 10000 number of items
0008 unsigned byte ?? label
0009 unsigned byte ?? label
........
xxxx unsigned byte ?? label
As with the image data, let's read the first test set value to sanity check our input path. We'll expect a 7. | with gzip.open(test_labels_filename) as f:
# Print the header fields.
for field in ['magic number', 'label count']:
print(field, struct.unpack('>i', f.read(4))[0])
print('First label:', struct.unpack('B', f.read(1))[0]) | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
Indeed, the first label of the test set is 7.
Forming the training, testing, and validation data sets
Now that we understand how to read a single element, we can read a much larger set that we'll use for training, testing, and validation.
Image data
The code below is a generalization of our prototyping above that reads the entire test and training data set. | IMAGE_SIZE = 28
PIXEL_DEPTH = 255
def extract_data(filename, num_images):
"""Extract the images into a 4D tensor [image index, y, x, channels].
For MNIST data, the number of channels is always 1.
Values are rescaled from [0, 255] down to [-0.5, 0.5].
"""
print('Extracting', filename)
with gzip.open(filename) as bytestream:
# Skip the magic number and dimensions; we know these values.
bytestream.read(16)
buf = bytestream.read(IMAGE_SIZE * IMAGE_SIZE * num_images)
data = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.float32)
data = (data - (PIXEL_DEPTH / 2.0)) / PIXEL_DEPTH
data = data.reshape(num_images, IMAGE_SIZE, IMAGE_SIZE, 1)
return data
train_data = extract_data(train_data_filename, 60000)
test_data = extract_data(test_data_filename, 10000) | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
A crucial difference here is how we reshape the array of pixel values. Instead of one image that's 28x28, we now have a set of 60,000 images, each one being 28x28. We also include a number of channels, which for grayscale images as we have here is 1.
Let's make sure we've got the reshaping parameters right by inspecting the dimensions and the first two images. (Again, mangled input is a very common source of errors.) | print('Training data shape', train_data.shape)
_, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(train_data[0].reshape(28, 28), cmap=plt.cm.Greys);
ax2.imshow(train_data[1].reshape(28, 28), cmap=plt.cm.Greys); | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
Looks good. Now we know how to index our full set of training and test images.
Label data
Let's move on to loading the full set of labels. As is typical in classification problems, we'll convert our input labels into a 1-hot encoding over a length 10 vector corresponding to 10 digits. The vector [0, 1, 0, 0, 0, 0, 0, 0, 0, 0], for example, would correspond to the digit 1. | NUM_LABELS = 10
def extract_labels(filename, num_images):
"""Extract the labels into a 1-hot matrix [image index, label index]."""
print('Extracting', filename)
with gzip.open(filename) as bytestream:
# Skip the magic number and count; we know these values.
bytestream.read(8)
buf = bytestream.read(1 * num_images)
labels = numpy.frombuffer(buf, dtype=numpy.uint8)
# Convert to dense 1-hot representation.
return (numpy.arange(NUM_LABELS) == labels[:, None]).astype(numpy.float32)
train_labels = extract_labels(train_labels_filename, 60000)
test_labels = extract_labels(test_labels_filename, 10000) | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
As with our image data, we'll double-check that our 1-hot encoding of the first few values matches our expectations. | print('Training labels shape', train_labels.shape)
print('First label vector', train_labels[0])
print('Second label vector', train_labels[1]) | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
The 1-hot encoding looks reasonable.
Segmenting data into training, test, and validation
The final step in preparing our data is to split it into three sets: training, test, and validation. This isn't the format of the original data set, so we'll take a small slice of the training data and treat that as our validation set. | VALIDATION_SIZE = 5000
validation_data = train_data[:VALIDATION_SIZE, :, :, :]
validation_labels = train_labels[:VALIDATION_SIZE]
train_data = train_data[VALIDATION_SIZE:, :, :, :]
train_labels = train_labels[VALIDATION_SIZE:]
train_size = train_labels.shape[0]
print('Validation shape', validation_data.shape)
print('Train size', train_size) | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
Defining the model
Now that we've prepared our data, we're ready to define our model.
The comments describe the architecture, which is fairly typical of models that process image data. The raw input passes through several convolution and max pooling layers with rectified linear activations before several fully connected layers and a softmax loss for predicting the output class. During training, we use dropout.
We'll separate our model definition into three steps:
Defining the variables that will hold the trainable weights.
Defining the basic model graph structure described above. And,
Stamping out several copies of the model graph for training, testing, and validation.
We'll start with the variables. | import tensorflow as tf
# We'll bundle groups of examples during training for efficiency.
# This defines the size of the batch.
BATCH_SIZE = 60
# We have only one channel in our grayscale images.
NUM_CHANNELS = 1
# The random seed that defines initialization.
SEED = 42
# This is where training samples and labels are fed to the graph.
# These placeholder nodes will be fed a batch of training data at each
# training step, which we'll write once we define the graph structure.
train_data_node = tf.placeholder(
tf.float32,
shape=(BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS))
train_labels_node = tf.placeholder(tf.float32,
shape=(BATCH_SIZE, NUM_LABELS))
# For the validation and test data, we'll just hold the entire dataset in
# one constant node.
validation_data_node = tf.constant(validation_data)
test_data_node = tf.constant(test_data)
# The variables below hold all the trainable weights. For each, the
# parameter defines how the variables will be initialized.
conv1_weights = tf.Variable(
tf.truncated_normal([5, 5, NUM_CHANNELS, 32], # 5x5 filter, depth 32.
stddev=0.1,
seed=SEED))
conv1_biases = tf.Variable(tf.zeros([32]))
conv2_weights = tf.Variable(
tf.truncated_normal([5, 5, 32, 64],
stddev=0.1,
seed=SEED))
conv2_biases = tf.Variable(tf.constant(0.1, shape=[64]))
fc1_weights = tf.Variable( # fully connected, depth 512.
tf.truncated_normal([IMAGE_SIZE // 4 * IMAGE_SIZE // 4 * 64, 512],
stddev=0.1,
seed=SEED))
fc1_biases = tf.Variable(tf.constant(0.1, shape=[512]))
fc2_weights = tf.Variable(
tf.truncated_normal([512, NUM_LABELS],
stddev=0.1,
seed=SEED))
fc2_biases = tf.Variable(tf.constant(0.1, shape=[NUM_LABELS]))
print('Done') | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
Now that we've defined the variables to be trained, we're ready to wire them together into a TensorFlow graph.
We'll define a helper to do this, model, which will return copies of the graph suitable for training and testing. Note the train argument, which controls whether or not dropout is used in the hidden layer. (We want to use dropout only during training.) | def model(data, train=False):
"""The Model definition."""
# 2D convolution, with 'SAME' padding (i.e. the output feature map has
# the same size as the input). Note that {strides} is a 4D array whose
# shape matches the data layout: [image index, y, x, depth].
conv = tf.nn.conv2d(data,
conv1_weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Bias and rectified linear non-linearity.
relu = tf.nn.relu(tf.nn.bias_add(conv, conv1_biases))
# Max pooling. The kernel size spec ksize also follows the layout of
# the data. Here we have a pooling window of 2, and a stride of 2.
pool = tf.nn.max_pool(relu,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
conv = tf.nn.conv2d(pool,
conv2_weights,
strides=[1, 1, 1, 1],
padding='SAME')
relu = tf.nn.relu(tf.nn.bias_add(conv, conv2_biases))
pool = tf.nn.max_pool(relu,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Reshape the feature map cuboid into a 2D matrix to feed it to the
# fully connected layers.
pool_shape = pool.get_shape().as_list()
reshape = tf.reshape(
pool,
[pool_shape[0], pool_shape[1] * pool_shape[2] * pool_shape[3]])
# Fully connected layer. Note that the '+' operation automatically
# broadcasts the biases.
hidden = tf.nn.relu(tf.matmul(reshape, fc1_weights) + fc1_biases)
# Add a 50% dropout during training only. Dropout also scales
# activations such that no rescaling is needed at evaluation time.
if train:
hidden = tf.nn.dropout(hidden, 0.5, seed=SEED)
return tf.matmul(hidden, fc2_weights) + fc2_biases
print('Done') | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
Having defined the basic structure of the graph, we're ready to stamp out multiple copies for training, testing, and validation.
Here, we'll do some customizations depending on which graph we're constructing. train_prediction holds the training graph, for which we use cross-entropy loss and weight regularization. We'll adjust the learning rate during training -- that's handled by the exponential_decay operation, which is itself an argument to the MomentumOptimizer that performs the actual training.
The validation and test graphs are much simpler to generate -- we need only create copies of the model with the validation and test inputs and a softmax classifier as the output. | # Training computation: logits + cross-entropy loss.
logits = model(train_data_node, True)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
labels=train_labels_node, logits=logits))
# L2 regularization for the fully connected parameters.
regularizers = (tf.nn.l2_loss(fc1_weights) + tf.nn.l2_loss(fc1_biases) +
tf.nn.l2_loss(fc2_weights) + tf.nn.l2_loss(fc2_biases))
# Add the regularization term to the loss.
loss += 5e-4 * regularizers
# Optimizer: set up a variable that's incremented once per batch and
# controls the learning rate decay.
batch = tf.Variable(0)
# Decay once per epoch, using an exponential schedule starting at 0.01.
learning_rate = tf.train.exponential_decay(
0.01, # Base learning rate.
batch * BATCH_SIZE, # Current index into the dataset.
train_size, # Decay step.
0.95, # Decay rate.
staircase=True)
# Use simple momentum for the optimization.
optimizer = tf.train.MomentumOptimizer(learning_rate,
0.9).minimize(loss,
global_step=batch)
# Predictions for the minibatch, validation set and test set.
train_prediction = tf.nn.softmax(logits)
# We'll compute them only once in a while by calling their {eval()} method.
validation_prediction = tf.nn.softmax(model(validation_data_node))
test_prediction = tf.nn.softmax(model(test_data_node))
print('Done') | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
Training and visualizing results
Now that we have the training, test, and validation graphs, we're ready to actually go through the training loop and periodically evaluate loss and error.
All of these operations take place in the context of a session. In Python, we'd write something like:
with tf.Session() as s:
...training / test / evaluation loop...
But, here, we'll want to keep the session open so we can poke at values as we work out the details of training. The TensorFlow API includes a function for this, InteractiveSession.
We'll start by creating a session and initializing the variables we defined above. | # Create a new interactive session that we'll use in
# subsequent code cells.
s = tf.InteractiveSession()
# Use our newly created session as the default for
# subsequent operations.
s.as_default()
# Initialize all the variables we defined above.
tf.global_variables_initializer().run() | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
Now we're ready to perform operations on the graph. Let's start with one round of training. We're going to organize our training steps into batches for efficiency; i.e., training using a small set of examples at each step rather than a single example. | BATCH_SIZE = 60
# Grab the first BATCH_SIZE examples and labels.
batch_data = train_data[:BATCH_SIZE, :, :, :]
batch_labels = train_labels[:BATCH_SIZE]
# This dictionary maps the batch data (as a numpy array) to the
# node in the graph it should be fed to.
feed_dict = {train_data_node: batch_data,
train_labels_node: batch_labels}
# Run the graph and fetch some of the nodes.
_, l, lr, predictions = s.run(
[optimizer, loss, learning_rate, train_prediction],
feed_dict=feed_dict)
print('Done') | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
Let's take a look at the predictions. How did we do? Recall that the output will be probabilities over the possible classes, so let's look at those probabilities. | print(predictions[0]) | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
As expected without training, the predictions are all noise. Let's write a scoring function that picks the class with the maximum probability and compares with the example's label. We'll start by converting the probability vectors returned by the softmax into predictions we can match against the labels. | # The highest probability in the first entry.
print('First prediction', numpy.argmax(predictions[0]))
# But, predictions is actually a list of BATCH_SIZE probability vectors.
print(predictions.shape)
# So, we'll take the highest probability for each vector.
print('All predictions', numpy.argmax(predictions, 1)) | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
Next, we can do the same thing for our labels -- using argmax to convert our 1-hot encoding into a digit class. | print('Batch labels', numpy.argmax(batch_labels, 1)) | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
Now we can compare the predicted and label classes to compute the error rate and confusion matrix for this batch. | correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(batch_labels, 1))
total = predictions.shape[0]
print(float(correct) / float(total))
confusions = numpy.zeros([10, 10], numpy.float32)
bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(batch_labels, 1))
for predicted, actual in bundled:
confusions[predicted, actual] += 1
plt.grid(False)
plt.xticks(numpy.arange(NUM_LABELS))
plt.yticks(numpy.arange(NUM_LABELS))
plt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest'); | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
Now let's wrap this up into our scoring function. | def error_rate(predictions, labels):
"""Return the error rate and confusions."""
correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(labels, 1))
total = predictions.shape[0]
error = 100.0 - (100 * float(correct) / float(total))
confusions = numpy.zeros([10, 10], numpy.float32)
bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(labels, 1))
for predicted, actual in bundled:
confusions[predicted, actual] += 1
return error, confusions
print('Done') | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
We'll need to train for some time to actually see useful predicted values. Let's define a loop that will go through our data. We'll print the loss and error periodically.
Here, we want to iterate over the entire data set rather than just the first batch, so we'll need to slice the data to that end.
(One pass through our training set will take some time on a CPU, so be patient if you are executing this notebook.) | # Train for one pass (epoch) over the training set.
steps = train_size // BATCH_SIZE
for step in range(steps):
# Compute the offset of the current minibatch in the data.
# Note that we could use better randomization across epochs.
offset = (step * BATCH_SIZE) % (train_size - BATCH_SIZE)
batch_data = train_data[offset:(offset + BATCH_SIZE), :, :, :]
batch_labels = train_labels[offset:(offset + BATCH_SIZE)]
# This dictionary maps the batch data (as a numpy array) to the
# node in the graph it should be fed to.
feed_dict = {train_data_node: batch_data,
train_labels_node: batch_labels}
# Run the graph and fetch some of the nodes.
_, l, lr, predictions = s.run(
[optimizer, loss, learning_rate, train_prediction],
feed_dict=feed_dict)
# Print out the loss periodically.
if step % 100 == 0:
error, _ = error_rate(predictions, batch_labels)
print('Step %d of %d' % (step, steps))
print('Mini-batch loss: %.5f Error: %.5f Learning rate: %.5f' % (l, error, lr))
print('Validation error: %.1f%%' % error_rate(
validation_prediction.eval(), validation_labels)[0])
| tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
The error seems to have gone down. Let's evaluate the results using the test set.
To help identify rare mispredictions, we'll include the raw count of each (prediction, label) pair in the confusion matrix. | test_error, confusions = error_rate(test_prediction.eval(), test_labels)
print('Test error: %.1f%%' % test_error)
plt.xlabel('Actual')
plt.ylabel('Predicted')
plt.grid(False)
plt.xticks(numpy.arange(NUM_LABELS))
plt.yticks(numpy.arange(NUM_LABELS))
plt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest');
for i, cas in enumerate(confusions):
for j, count in enumerate(cas):
if count > 0:
xoff = .07 * len(str(count))
plt.text(j-xoff, i+.2, int(count), fontsize=9, color='white') | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
We can see here that we're mostly accurate, with some errors you might expect, e.g., '9' is often confused with '4'.
Let's do another sanity check to make sure this matches roughly the distribution of our test set, e.g., it seems like we have fewer '5' values. | plt.xticks(numpy.arange(NUM_LABELS))
plt.hist(numpy.argmax(test_labels, 1)); | tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb | HKUST-SING/tensorflow | apache-2.0 |
Setting Up S3 Access Using Boto
We'll use boto to access the S3 bucket. Below, we'll set the bucket ID and create a resource to access it.
Note that although the bucket is public, boto requires the presence of an AWS access key and secret key to use an S3 resource. To request data anonymously, we'll use a low-level client instead. | era5_bucket = 'era5-pds'
# AWS access / secret keys required
# s3 = boto3.resource('s3')
# bucket = s3.Bucket(era5_bucket)
# No AWS keys required
client = boto3.client('s3', config=botocore.client.Config(signature_version=botocore.UNSIGNED)) | aws/era5-s3-via-boto.ipynb | planet-os/notebooks | mit |
ERA5 Data Structure on S3
The ERA5 data is chunked into distinct NetCDF files per variable, each containing a month of hourly data. These files are organized in the S3 bucket by year, month, and variable name.
The data is structured as follows:
/{year}/{month}/main.nc
/data/{var1}.nc
/{var2}.nc
/{....}.nc
/{varN}.nc
where year is expressed as four digits (e.g. YYYY) and month as two digits (e.g. MM). Individual data variables (var1 through varN) use names corresponding to CF standard names convention plus any applicable additional info, such as vertical coordinate.
For example, the full file path for air temperature for January 2008 is:
/2008/01/data/air_temperature_at_2_metres.nc
Note that due to the nature of the ERA5 forecast timing, which is run twice daily at 06:00 and 18:00 UTC, the monthly data file begins with data from 07:00 on the first of the month and continues through 06:00 of the following month. We'll see this in the coordinate values of a data file we download later in the notebook.
Granule variable structure and metadata attributes are stored in main.nc. This file contains coordinate and auxiliary variable data. This file is also annotated using NetCDF CF metadata conventions.
We can use the paginate method to list the top level key prefixes in the bucket, which corresponds to the available years of ERA5 data. | paginator = client.get_paginator('list_objects')
result = paginator.paginate(Bucket=era5_bucket, Delimiter='/')
for prefix in result.search('CommonPrefixes'):
print(prefix.get('Prefix')) | aws/era5-s3-via-boto.ipynb | planet-os/notebooks | mit |
Let's take a look at the objects available for a specific month using boto's list_objects_v2 method. | keys = []
date = datetime.date(2018,1,1) # update to desired date
prefix = date.strftime('%Y/%m/')
response = client.list_objects_v2(Bucket=era5_bucket, Prefix=prefix)
response_meta = response.get('ResponseMetadata')
if response_meta.get('HTTPStatusCode') == 200:
contents = response.get('Contents')
if contents == None:
print("No objects are available for %s" % date.strftime('%B, %Y'))
else:
for obj in contents:
keys.append(obj.get('Key'))
print("There are %s objects available for %s\n--" % (len(keys), date.strftime('%B, %Y')))
for k in keys:
print(k)
else:
print("There was an error with your request.") | aws/era5-s3-via-boto.ipynb | planet-os/notebooks | mit |
Downloading Files
Let's download main.nc file for that month and use xarray to inspect the metadata relating to the data files. | metadata_file = 'main.nc'
metadata_key = prefix + metadata_file
client.download_file(era5_bucket, metadata_key, metadata_file)
ds_meta = xr.open_dataset('main.nc', decode_times=False)
ds_meta.info() | aws/era5-s3-via-boto.ipynb | planet-os/notebooks | mit |
Now let's acquire data for a single variable over the course of a month. Let's download air temperature for August of 2017 and open the NetCDF file using xarray.
Note that the cell below may take some time to execute, depending on your connection speed. Most of the variable files are roughly 1 GB in size. | # select date and variable of interest
date = datetime.date(2017,8,1)
var = 'air_temperature_at_2_metres'
# file path patterns for remote S3 objects and corresponding local file
s3_data_ptrn = '{year}/{month}/data/{var}.nc'
data_file_ptrn = '{year}{month}_{var}.nc'
year = date.strftime('%Y')
month = date.strftime('%m')
s3_data_key = s3_data_ptrn.format(year=year, month=month, var=var)
data_file = data_file_ptrn.format(year=year, month=month, var=var)
if not os.path.isfile(data_file): # check if file already exists
print("Downloading %s from S3..." % s3_data_key)
client.download_file(era5_bucket, s3_data_key, data_file)
ds = xr.open_dataset(data_file)
ds.info | aws/era5-s3-via-boto.ipynb | planet-os/notebooks | mit |
The ds.info output above shows us that there are three dimensions to the data: lat, lon, and time0; and one data variable: air_temperature_at_2_metres. Let's inspect the coordinate values to see what they look like... | ds.coords.values() | aws/era5-s3-via-boto.ipynb | planet-os/notebooks | mit |
In the coordinate values, we can see that longitude is expressed as degrees east, ranging from 0 to 359.718 degrees. Latitude is expressed as degrees north, ranging from -89.784874 to 89.784874. And finally the time0 coordinate, ranging from 2017-08-01T07:00:00Z to 2017-09-01T06:00:00Z.
As mentioned above, because of the forecast run timing, the first forecast run of the month produces data beginning at 07:00, while the last extends through September 1 at 06:00.
Temperature at Specific Locations
Let's create a list of various locations and plot their temperature values during the month. Note that the longitude values of the coordinates below are not given in degrees east, but rather as a mix of eastward and westward values. The data's longitude coordinate is degrees east, so we'll convert these location coordinates accordingly to match the data. | # location coordinates
locs = [
{'name': 'santa_monica', 'lon': -118.496245, 'lat': 34.010341},
{'name': 'tallinn', 'lon': 24.753574, 'lat': 59.436962},
{'name': 'honolulu', 'lon': -157.835938, 'lat': 21.290014},
{'name': 'cape_town', 'lon': 18.423300, 'lat': -33.918861},
{'name': 'dubai', 'lon': 55.316666, 'lat': 25.266666},
]
# convert westward longitudes to degrees east
for l in locs:
if l['lon'] < 0:
l['lon'] = 360 + l['lon']
locs
ds_locs = xr.Dataset()
# iterate through the locations and create a dataset
# containing the temperature values for each location
for l in locs:
name = l['name']
lon = l['lon']
lat = l['lat']
var_name = name
ds2 = ds.sel(lon=lon, lat=lat, method='nearest')
lon_attr = '%s_lon' % name
lat_attr = '%s_lat' % name
ds2.attrs[lon_attr] = ds2.lon.values.tolist()
ds2.attrs[lat_attr] = ds2.lat.values.tolist()
ds2 = ds2.rename({var : var_name}).drop(('lat', 'lon'))
ds_locs = xr.merge([ds_locs, ds2])
ds_locs.data_vars | aws/era5-s3-via-boto.ipynb | planet-os/notebooks | mit |
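The westward-to-eastward conversion in the loop above can equivalently be done with the modulo operator, which also handles longitudes outside [-180, 180]. A minimal sketch (the helper name `to_degrees_east` is ours, not part of the notebook):

```python
def to_degrees_east(lon):
    """Map any longitude to the [0, 360) degrees-east convention used by the data."""
    return lon % 360

# Santa Monica's longitude, for example, wraps around to ~241.5 degrees east
assert abs(to_degrees_east(-118.496245) - 241.503755) < 1e-6
# eastward longitudes pass through unchanged
assert to_degrees_east(24.753574) == 24.753574
```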
Convert Units and Create a Dataframe
Temperature data in the ERA5 dataset is in kelvin. Let's convert it to something more meaningful. I've chosen Fahrenheit because, as a U.S. citizen (and stubborn metric holdout), Celsius still feels foreign to me ;-)
While we're at it, let's also convert the dataset to a pandas dataframe and use the describe method to display some statistics about the data. | def kelvin_to_celcius(t):
return t - 273.15
def kelvin_to_fahrenheit(t):
return t * 9/5 - 459.67
ds_locs_f = ds_locs.apply(kelvin_to_fahrenheit)
df_f = ds_locs_f.to_dataframe()
df_f.describe() | aws/era5-s3-via-boto.ipynb | planet-os/notebooks | mit |
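A quick sanity check on the conversion formulas used above (spelled `celsius` here): the freezing and boiling points of water should map to the familiar values.

```python
def kelvin_to_celsius(t):
    return t - 273.15

def kelvin_to_fahrenheit(t):
    return t * 9/5 - 459.67

# freezing point of water: 273.15 K -> 0 degC / 32 degF
assert abs(kelvin_to_celsius(273.15)) < 1e-9
assert abs(kelvin_to_fahrenheit(273.15) - 32.0) < 1e-9
# boiling point: 373.15 K -> 100 degC / 212 degF
assert abs(kelvin_to_celsius(373.15) - 100.0) < 1e-9
assert abs(kelvin_to_fahrenheit(373.15) - 212.0) < 1e-9
```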
Show Me Some Charts!
Finally, let's plot the temperature data for each of the locations over the period. The first plot displays the hourly temperature for each location over the month.
The second plot is a box plot. A box plot is a method for graphically depicting groups of numerical data through their quartiles. The box extends from the Q1 to Q3 quartile values of the data, with a line at the median (Q2). The whiskers extend from the edges of the box to show the range of the data; their position is set by default to 1.5 * IQR (IQR = Q3 - Q1) beyond the edges of the box. Outlier points are those past the ends of the whiskers. | # readability please
plt.rcParams.update({'font.size': 16})
ax = df_f.plot(figsize=(18, 10), title="ERA5 Air Temperature at 2 Meters", grid=1)
ax.set(xlabel='Date', ylabel='Air Temperature (deg F)')
plt.show()
ax = df_f.plot.box(figsize=(18, 10))
ax.set(xlabel='Location', ylabel='Air Temperature (deg F)')
plt.show() | aws/era5-s3-via-boto.ipynb | planet-os/notebooks | mit |
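The whisker rule just described can also be computed directly with numpy; a small sketch on toy data (the function name `box_plot_stats` is ours):

```python
import numpy as np

def box_plot_stats(values):
    """Quartiles and whisker limits: whiskers sit 1.5 * IQR beyond the box edges."""
    q1, q2, q3 = np.percentile(values, [25, 50, 75])
    iqr = q3 - q1
    return {"q1": q1, "median": q2, "q3": q3,
            "lo_whisker": q1 - 1.5 * iqr, "hi_whisker": q3 + 1.5 * iqr}

stats = box_plot_stats(np.arange(1, 101))  # the integers 1..100
assert abs(stats["median"] - 50.5) < 1e-9
assert abs(stats["q1"] - 25.75) < 1e-9
assert abs(stats["q3"] - 75.25) < 1e-9
```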
Special characters
To include special characters in a string (such as line breaks or tabs), Python uses escape sequences like \n for a newline or \t for a tab: | s = "Эта строка\nсостоит из двух строк"
print(s)
s = "А в этой\tстроке\nиспользуются\tсимволы табуляции"
print(s) | crash-course/strings.ipynb | citxx/sis-python | mit |
The same syntax is used to place quotation marks inside a string: | s1 = "Это \"строка\" с кавычками."
s2 = 'И "это" тоже.'
s3 = 'С одинарными Кавычками \'\' всё работает также.'
print(s1, s2, s3)
print("Если надо задать обратный слэш \\, то его надо просто удвоить: '\\\\'") | crash-course/strings.ipynb | citxx/sis-python | mit |
String operations
Concatenation
Strings can be added together, in which case they are simply appended to one another; the formal term for this is concatenation. | greeting = "Привет"
exclamation = "!!!"
print(greeting + exclamation) | crash-course/strings.ipynb | citxx/sis-python | mit |
Repetition
A string can be multiplied by an integer to repeat it the required number of times. | print("I will write in Python with style!\n" * 10)
print(3 * "Really\n") | crash-course/strings.ipynb | citxx/sis-python | mit |
Indexing
You can get the character at a given position just as in C++ or Pascal. Indexing starts at 0. | s = "Это моя строка"
print(s[0], s[1], s[2]) | crash-course/strings.ipynb | citxx/sis-python | mit |
However, you cannot change an individual character: strings are immutable, which lets some Python features be implemented more logically and efficiently. | s = "Вы не можете изменить символы этой строки"
s[0] = "Т" | crash-course/strings.ipynb | citxx/sis-python | mit |
The error reads: TypeError: 'str' object does not support item assignment
Negative indices are also allowed; in that case positions are counted from the end. | s = "Строка"
print(s[-1], "=", s[5])
print(s[-2], "=", s[4])
print(s[-3], "=", s[3])
print(s[-4], "=", s[2])
print(s[-5], "=", s[1])
print(s[-6], "=", s[0]) | crash-course/strings.ipynb | citxx/sis-python | mit |
String length | s = "Для получения длины используется функция len"
print(len(s)) | crash-course/strings.ipynb | citxx/sis-python | mit |
Checking for a substring
You can test whether a string contains (or lacks) a substring or character with the in and not in operators. | vowels = "аеёиоуыэюя"
c = "ы"
if c in vowels:
print(c, "- гласная")
else:
print(c, "- согласная")
s = "Python - лучший из неторопливых языков :)"
print("Python" in s)
print("C++" in s) | crash-course/strings.ipynb | citxx/sis-python | mit |
Character encoding
In computer memory, every character is stored as a number. The mapping between characters and numbers is called an encoding.
The simplest encoding for Latin letters, digits, and commonly used symbols is ASCII. It defines codes (numbers) for 128 characters, and these coincide with the first 128 code points Python uses to represent text.
Getting the code of a character | # The code of any character can be obtained with the ord function
print(ord("a"))
# You can rely on the codes for digits, lowercase Latin letters, and uppercase Latin letters being consecutive.
print("Цифры:", ord("0"), ord("1"), ord("2"), ord("3"), "...", ord("8"), ord("9"))
print("Маленькие буквы:", ord("a"), ord("b"), ord("c"), ord("d"), "...", ord("y"), ord("z"))
print("Большие буквы:", ord("A"), ord("B"), ord("C"), ord("D"), "...", ord("Y"), ord("Z"))
# For example, this is how to get a letter's index in the alphabet
c = "g"
print(ord(c) - ord('a')) | crash-course/strings.ipynb | citxx/sis-python | mit |
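The same trick can be wrapped in a reusable helper; a tiny sketch (the name `letter_index` is ours):

```python
def letter_index(c):
    """0-based position of a lowercase Latin letter in the alphabet."""
    return ord(c) - ord('a')

assert letter_index('a') == 0
assert letter_index('g') == 6
assert letter_index('z') == 25
```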
Getting a character from its code | # The chr function returns the character with a given code
print(chr(100)) | crash-course/strings.ipynb | citxx/sis-python | mit |
ASCII | # This code prints the entire ASCII table
for code in range(128):
print('chr(' + str(code) + ') =', repr(chr(code))) | crash-course/strings.ipynb | citxx/sis-python | mit |
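A quick check, in the same spirit as the table above: `chr` and `ord` are inverses of each other over the whole ASCII range, which also gives a common idiom for converting a digit character to its numeric value (the helper name `digit_value` is ours).

```python
# chr and ord are inverses of each other for every ASCII code
for code in range(128):
    assert ord(chr(code)) == code

# a common use: converting a digit character to its numeric value
def digit_value(c):
    return ord(c) - ord('0')

assert digit_value('0') == 0
assert digit_value('7') == 7
```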
Read the data and drop the parts we don't need | if __name__ == '__main__':
# parser = set_arguments()
# cmd_args = parser.parse_args()
print('{} START'.format(time.strftime(TIME_FORMAT)))
fd = codecs.open(DEFAULT_FIN, 'r', 'utf-8')
fw = codecs.open( DEFAULT_FOUT, 'w', 'utf-8')
reg = re.compile('〖(.*)〗')
start_flag = False
for line in fd:
line = line.strip()
if not line or '《全唐诗》' in line or '<http' in line or '□' in line:
continue
elif '〖' in line and '〗' in line:
if start_flag:
fw.write('\n')
start_flag = True
g = reg.search(line)
if g:
fw.write(g.group(1))
fw.write('\n')
else:
# noisy data
print(line)
else:
line = reg_noisy.sub('', line)
line = reg_note.sub('', line)
line = line.replace(' .', '')
fw.write(line)
fd.close()
fw.close()
print('{} STOP'.format(time.strftime(TIME_FORMAT))) | PrepareData.ipynb | seth2000/chinesepoem | mit |
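The title-extraction logic above can be isolated and tested on its own; a small sketch with made-up example lines (`extract_title` is our name for the helper):

```python
import re

reg_title = re.compile('〖(.*)〗')  # poem titles are wrapped in 〖...〗 in the corpus

def extract_title(line):
    g = reg_title.search(line)
    return g.group(1) if g else None

assert extract_title('〖静夜思〗') == '静夜思'          # a title line
assert extract_title('床前明月光,疑是地上霜。') is None  # a body line has no title
```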
Word segmentation experiment
DEFAULT_FOUT = os.path.join(DATA_FOLDER, 'poem.txt')
thu1 = thulac.thulac(seg_only=True)  # segmentation only, no part-of-speech tagging
text = thu1.cut("我爱北京天安门", text=True)  # segment a single sentence
print(text)
thu1 = thulac.thulac(seg_only=True)  # segmentation only, no part-of-speech tagging
thu1.cut_f(DEFAULT_FOUT, outp)  # segment the contents of the input file and write the result to the output path | print('{} START'.format(time.strftime(TIME_FORMAT)))
import thulac
DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt')
fd = codecs.open(DEFAULT_FOUT, 'r', 'utf-8')
fw = codecs.open(DEFAULT_Segment, 'w', 'utf-8')
thu1 = thulac.thulac(seg_only=True) # segmentation only, no part-of-speech tagging
for line in fd:
#print(line)
fw.write(thu1.cut(line, text=True))
fw.write('\n')
fd.close()
fw.close()
print('{} STOP'.format(time.strftime(TIME_FORMAT)))
print('{} START'.format(time.strftime(TIME_FORMAT)))
from gensim.models import word2vec
#DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt')
DEFAULT_Word2Vec = os.path.join(DATA_FOLDER, 'Word2Vec150.bin')
sentences = word2vec.Text8Corpus(DEFAULT_Segment)
model = word2vec.Word2Vec(sentences, size=150)
#DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt')
model.save(DEFAULT_Word2Vec)
print('{} STOP'.format(time.strftime(TIME_FORMAT)))
model[u'男']
DEFAULT_FIN = os.path.join(DATA_FOLDER, '唐诗语料库.txt')
DEFAULT_FOUT = os.path.join(DATA_FOLDER, 'poem.txt')
DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt')
def GetFirstNline(filePath, linesNumber):
fd = codecs.open(filePath, 'r', 'utf-8')
for i in range(linesNumber):
print(fd.readline())
fd.close()
GetFirstNline(DEFAULT_Segment, 3)
GetFirstNline(DEFAULT_FOUT, 3) | PrepareData.ipynb | seth2000/chinesepoem | mit |
Word segmentation was not very successful, so we switch to individual Chinese characters instead of words, keeping the punctuation. | print('{} START'.format(time.strftime(TIME_FORMAT)))
DEFAULT_FOUT = os.path.join(DATA_FOLDER, 'poem.txt')
DEFAULT_charSegment = os.path.join(DATA_FOLDER, 'Charactersegment.txt')
fd = codecs.open(DEFAULT_FOUT, 'r', 'utf-8')
fw = codecs.open(DEFAULT_charSegment, 'w', 'utf-8')
start_flag = False
for line in fd:
if len(line) > 0:
for c in line:
if c != '\n':
fw.write(c)
fw.write(' ')
fw.write('\n')
fd.close()
fw.close()
print('{} STOP'.format(time.strftime(TIME_FORMAT)))
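The character-spacing loop above is essentially `str.join`; a near-equivalent one-liner (without the trailing space the loop emits — `space_characters` is our name):

```python
def space_characters(line):
    """Put a space between every character of a line, dropping the trailing newline."""
    return ' '.join(line.rstrip('\n'))

assert space_characters('床前明月光\n') == '床 前 明 月 光'
assert space_characters('abc') == 'a b c'
```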
GetFirstNline(DEFAULT_charSegment, 3)
print('{} START'.format(time.strftime(TIME_FORMAT)))
from gensim.models import word2vec
#DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt')
DEFAULT_Char2Vec = os.path.join(DATA_FOLDER, 'Char2Vec100.bin')
fd = codecs.open(DEFAULT_charSegment, 'r', 'utf-8')
sentences = fd.readlines()
fd.close()
model = word2vec.Word2Vec(sentences, size=100)
#DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt')
model.save(DEFAULT_Char2Vec)
print('{} STOP'.format(time.strftime(TIME_FORMAT)))
model[u'男']
print('{} START'.format(time.strftime(TIME_FORMAT)))
from gensim.models import word2vec
DEFAULT_charSegment = os.path.join(DATA_FOLDER, 'Charactersegment.txt')
DEFAULT_Char2Vec50 = os.path.join(DATA_FOLDER, 'Char2Vec50.bin')
fd = codecs.open(DEFAULT_charSegment, 'r', 'utf-8')
sentences = fd.readlines()
fd.close()
model = word2vec.Word2Vec(sentences, size=50)
#DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt')
model.save(DEFAULT_Char2Vec50)
print('{} STOP'.format(time.strftime(TIME_FORMAT)))
model.wv.most_similar([u'好']) | PrepareData.ipynb | seth2000/chinesepoem | mit |
Convert Chinese characters to pinyin | from pypinyin import pinyin
Importing some modules (libraries) and giving them short names such as np and plt. You will find that most users stick to these conventional aliases. | import numpy as np
import matplotlib.pyplot as plt | notebooks/04-plotting.ipynb | teuben/astr288p | mit |
It might be tempting to import a module in a blank namespace, to make for "more readable code" like the following example:
from math import *
s2 = sqrt(2)
but the danger of this is that importing multiple modules into the blank namespace can silently shadow names, and it obscures which module each function came from. So it is safer to stick to a plain import, which keeps the module namespace (or a shorter alias):
import math
s2 = math.sqrt(2)
Line plot
The array $x$ will contain numbers from 0 to 9.5 in steps of 0.5. We then compute two arrays $y$ and $z$ as follows:
$$
y = {1\over{10}}{x^2}
$$
and
$$
z = 3\sqrt{x}
$$ | x = 0.5*np.arange(20)
y = x*x*0.1
z = np.sqrt(x)*3
plt.plot(x,y,'o-',label='y')
plt.plot(x,z,'*--',label='z')
plt.title("$x^2$ and $\sqrt{x}$")
#plt.legend(loc='best')
plt.legend()
plt.xlabel('X axis')
plt.ylabel('Y axis')
#plt.xscale('log')
#plt.yscale('log')
#plt.savefig('sample1.png')
| notebooks/04-plotting.ipynb | teuben/astr288p | mit |
Scatter plot | plt.scatter(x,y,s=40.0,c='r',label='y')
plt.scatter(x,z,s=20.0,c='g',label='z')
plt.legend(loc='best')
plt.show() | notebooks/04-plotting.ipynb | teuben/astr288p | mit |
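A quick numerical check of the two curves defined above: at $x = 4$ we expect $y = 4^2/10 = 1.6$ and $z = 3\sqrt{4} = 6$.

```python
import numpy as np

x = 0.5 * np.arange(20)   # 0, 0.5, ..., 9.5
y = 0.1 * x**2            # y = x^2 / 10
z = 3 * np.sqrt(x)        # z = 3 * sqrt(x)

i = int(np.where(x == 4.0)[0][0])  # index of x = 4 (multiples of 0.5 are exact floats)
assert abs(y[i] - 1.6) < 1e-9
assert abs(z[i] - 6.0) < 1e-9
```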
Multi-panel plots | fig = plt.figure()
fig1 = fig.add_subplot(121)
fig1.scatter(x,z,s=20.0,c='g',label='z')
fig2 = fig.add_subplot(122)
fig2.scatter(x,y,s=40.0,c='r',label='y'); | notebooks/04-plotting.ipynb | teuben/astr288p | mit |
Histogram | n = 100000
mean = 4.0
disp = 2.0
bins = 32
g = np.random.normal(mean,disp,n)
p = np.random.poisson(mean,n)
gh=plt.hist(g,bins)
ph=plt.hist(p,bins)
plt.hist([g,p],bins) | notebooks/04-plotting.ipynb | teuben/astr288p | mit |
The simplest systems to study in oscillations are the mass-spring system and the simple pendulum.
<div>
<img style="float: left; margin: 0px 0px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/7/76/Pendulum.jpg" width="150px" height="50px" />
<img style="float: right; margin: 15px 15px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/ko/9/9f/Mass_spring.png" width="200px" height="100px" />
</div>
\begin{align}
\frac{d^2 x}{dt^2} + \omega_{0}^2 x &= 0, \quad \omega_{0} = \sqrt{\frac{k}{m}}\notag\\
\frac{d^2 \theta}{dt^2} + \omega_{0}^{2}\, \theta &= 0, \quad\mbox{where}\quad \omega_{0}^2 = \frac{g}{l}
\end{align}
Mass-spring system
The solution of the mass-spring system follows from Newton's second law. If the mass stays constant and we consider only the $x$ direction, then
\begin{equation}
F = m \frac{d^2x}{dt^2}.
\end{equation}
What is the force? Hooke's law!
\begin{equation}
F = -k x, \quad k > 0.
\end{equation}
Note that the force opposes the displacement and its magnitude is proportional to it; $k$ is the elastic (restoring) constant of the spring.
A model of the mass-spring system is therefore described by the following differential equation:
\begin{equation}
\frac{d^2x}{dt^2} + \frac{k}{m}x = 0,
\end{equation}
whose solution can be written as
\begin{equation}
x(t) = A \cos(\omega_{0} t) + B \sin(\omega_{0} t)
\end{equation}
and its first derivative (the velocity) is
\begin{equation}
\frac{dx(t)}{dt} = \omega_{0}[- A \sin(\omega_{0} t) + B\cos(\omega_{0}t)]
\end{equation}
<font color=red> See on the board what it means for this to be a solution of the differential equation.</font>
What do the plots of $x$ vs $t$ and $\frac{dx}{dt}$ vs $t$ look like?
This command makes the plots appear inside this environment. | %matplotlib inline
_This is the library with all the commands for making plots._ | import matplotlib.pyplot as plt
import matplotlib as mpl
label_size = 14
mpl.rcParams['xtick.labelsize'] = label_size
mpl.rcParams['ytick.labelsize'] = label_size | Modulo1/.ipynb_checkpoints/Clase4_OsciladorArmonico-checkpoint.ipynb | EricChiquitoG/Simulacion2017 | mit |
And this is the library with all the math functions we need. | import numpy as np
# Definición de funciones a graficar
A, B, w0 = .5, .1, .5 # Parámetros
t = np.linspace(0, 50, 100) # Creamos vector de tiempo de 0 a 50 con 100 puntos
x = A*np.cos(w0*t)+B*np.sin(w0*t) # Función de posición
dx = w0*(-A*np.sin(w0*t)+B*np.cos(w0*t)) # Función de velocidad
# Gráfico
plt.figure(figsize = (7, 4)) # Ventana de gráfica con tamaño
plt.plot(t, x, '-', lw = 1, ms = 4,
label = '$x(t)$') # Explicación
plt.plot(t, dx, 'ro-', lw = 1, ms = 4,
label = r'$\dot{x(t)}$')
plt.xlabel('$t$', fontsize = 20) # Etiqueta eje x
plt.show()
# Colores, etiquetas y otros formatos
plt.figure(figsize = (7, 4))
plt.scatter(t, x, lw = 0, c = 'red',
label = '$x(t)$') # Gráfica con puntos
plt.plot(t, x, 'r-', lw = 1) # Grafica normal
plt.scatter(t, dx, lw = 0, c = 'b',
label = r'$\frac{dx}{dt}$') # Con la r, los backslash se tratan como un literal, no como un escape
plt.plot(t, dx, 'b-', lw = 1)
plt.xlabel('$t$', fontsize = 20)
plt.legend(loc = 'best') # Leyenda con las etiquetas de las gráficas
plt.show() | Modulo1/.ipynb_checkpoints/Clase4_OsciladorArmonico-checkpoint.ipynb | EricChiquitoG/Simulacion2017 | mit |
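A quick check that the arrays being plotted satisfy the expected initial values, $x(0) = A$ and $\dot{x}(0) = \omega_0 B$:

```python
import numpy as np

A, B, w0 = 0.5, 0.1, 0.5
t = np.linspace(0, 50, 100)
x = A * np.cos(w0 * t) + B * np.sin(w0 * t)
dx = w0 * (-A * np.sin(w0 * t) + B * np.cos(w0 * t))

assert abs(x[0] - A) < 1e-12        # x(0) = A
assert abs(dx[0] - w0 * B) < 1e-12  # x'(0) = w0 * B
```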
Y si consideramos un conjunto de frecuencias de oscilación, entonces | frecuencias = np.array([.1, .2 , .5, .6]) # Vector de diferentes frecuencias
plt.figure(figsize = (7, 4)) # Ventana de gráfica con tamaño
# Graficamos para cada frecuencia
for w0 in frecuencias:
x = A*np.cos(w0*t)+B*np.sin(w0*t)
plt.plot(t, x, '*-')
plt.xlabel('$t$', fontsize = 16) # Etiqueta eje x
plt.ylabel('$x(t)$', fontsize = 16) # Etiqueta eje y
plt.title('Oscilaciones', fontsize = 16) # Título de la gráfica
plt.show() | Modulo1/.ipynb_checkpoints/Clase4_OsciladorArmonico-checkpoint.ipynb | EricChiquitoG/Simulacion2017 | mit |
Estos colores, son el default de matplotlib, sin embargo existe otra librería dedicada, entre otras cosas, a la presentación de gráficos. | import seaborn as sns
sns.set(style='ticks', palette='Set2')
frecuencias = np.array([.1, .2 , .5, .6])
plt.figure(figsize = (7, 4))
for w0 in frecuencias:
x = A*np.cos(w0*t)+B*np.sin(w0*t)
plt.plot(t, x, 'o-',
label = '$\omega_0 = %s$'%w0) # Etiqueta cada gráfica con frecuencia correspondiente (conversion float a string)
plt.xlabel('$t$', fontsize = 16)
plt.ylabel('$x(t)$', fontsize = 16)
plt.title('Oscilaciones', fontsize = 16)
plt.legend(loc='center left', bbox_to_anchor=(1.05, 0.5), prop={'size': 14})
plt.show() | Modulo1/.ipynb_checkpoints/Clase4_OsciladorArmonico-checkpoint.ipynb | EricChiquitoG/Simulacion2017 | mit |
Si queremos tener manipular un poco mas las cosas, hacemos uso de lo siguiente: | from ipywidgets import *
def masa_resorte(t = 0):
A, B, w0 = .5, .1, .5 # Parámetros
x = A*np.cos(w0*t)+B*np.sin(w0*t) # Función de posición
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(x, [0], 'ko', ms = 10)
ax.set_xlim(xmin = -0.6, xmax = .6)
ax.axvline(x=0, color = 'r')
ax.axhline(y=0, color = 'grey', lw = 1)
fig.canvas.draw()
interact(masa_resorte, t = (0, 50,.01)); | Modulo1/.ipynb_checkpoints/Clase4_OsciladorArmonico-checkpoint.ipynb | EricChiquitoG/Simulacion2017 | mit |
The option above will generally be slow, so it is advisable to use interact_manual instead. | def masa_resorte(t = 0):
A, B, w0 = .5, .1, .5 # Parámetros
x = A*np.cos(w0*t)+B*np.sin(w0*t) # Función de posición
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(x, [0], 'ko', ms = 10)
ax.set_xlim(xmin = -0.6, xmax = .6)
ax.axvline(x=0, color = 'r')
ax.axhline(y=0, color = 'grey', lw = 1)
fig.canvas.draw()
interact_manual(masa_resorte, t = (0, 50,.01)); | Modulo1/.ipynb_checkpoints/Clase4_OsciladorArmonico-checkpoint.ipynb | EricChiquitoG/Simulacion2017 | mit |
Simple pendulum
Now, if we turn our attention to the motion of a simple pendulum (small oscillations), the differential equation to solve has the same form:
\begin{equation}
\frac{d^2 \theta}{dt^2} + \omega_{0}^{2}\, \theta = 0, \quad\mbox{where}\quad \omega_{0}^2 = \frac{g}{l}.
\end{equation}
The most obvious difference is how $\omega_{0}$ is defined. This means that
\begin{equation}
\theta(t) = A\cos(\omega_{0} t) + B\sin(\omega_{0}t)
\end{equation}
If we plot the equation above we find behavior very similar to what was already discussed, so instead we will now look at the motion in the $xy$ plane. That is,
\begin{align}
x &= l \sin(\theta), \quad
y = -l \cos(\theta)
\end{align} | # We can define a function that returns theta for given parameters and time
def theta_t(a, b, g, l, t):
omega_0 = np.sqrt(g/l)
return a * np.cos(omega_0 * t) + b * np.sin(omega_0 * t)
# Hacemos un gráfico interactivo del péndulo
def pendulo_simple(t = 0):
fig = plt.figure(figsize = (5,5))
ax = fig.add_subplot(1, 1, 1)
x = 2 * np.sin(theta_t(.4, .6, 9.8, 2, t))
y = - 2 * np.cos(theta_t(.4, .6, 9.8, 2, t))
ax.plot(x, y, 'ko', ms = 10)
ax.plot([0], [0], 'rD')
ax.plot([0, x ], [0, y], 'k-', lw = 1)
ax.set_xlim(xmin = -2.2, xmax = 2.2)
ax.set_ylim(ymin = -2.2, ymax = .2)
fig.canvas.draw()
interact_manual(pendulo_simple, t = (0, 10,.01)); | Modulo1/.ipynb_checkpoints/Clase4_OsciladorArmonico-checkpoint.ipynb | EricChiquitoG/Simulacion2017 | mit |
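The angle-to-Cartesian mapping used in the code above (pivot at the origin, $y$ measured upward, so the bob sits at $y = -l\cos\theta$) can be isolated and checked; a small sketch (the helper name `pendulum_xy` is ours):

```python
import numpy as np

def pendulum_xy(theta, l=2.0):
    """Bob position for angle theta: pivot at the origin, y measured upward."""
    return l * np.sin(theta), -l * np.cos(theta)

x, y = pendulum_xy(0.0)
assert abs(x) < 1e-12 and abs(y + 2.0) < 1e-12   # hangs straight down at theta = 0
x, y = pendulum_xy(np.pi / 2)
assert abs(x - 2.0) < 1e-12 and abs(y) < 1e-12   # horizontal at theta = pi/2
```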
Initial conditions
What actually has to be solved is
\begin{equation}
\theta(t) = \theta(0) \cos(\omega_{0} t) + \frac{\dot{\theta}(0)}{\omega_{0}} \sin(\omega_{0} t)
\end{equation}
Activity. Modify the previous program to incorporate the initial conditions. | # Solution:
def theta_t(theta_0, dtheta_0, g, l, t):
    omega_0 = np.sqrt(g/l)
    return theta_0 * np.cos(omega_0 * t) + (dtheta_0 / omega_0) * np.sin(omega_0 * t)
def pendulo_simple(t = 0):
    fig = plt.figure(figsize = (5,5))
    ax = fig.add_subplot(1, 1, 1)
    x = 2 * np.sin(theta_t(.4, .6, 9.8, 2, t))
    y = - 2 * np.cos(theta_t(.4, .6, 9.8, 2, t))
ax.plot(x, y, 'ko', ms = 10)
ax.plot([0], [0], 'rD')
ax.plot([0, x ], [0, y], 'k-', lw = 1)
ax.set_xlim(xmin = -2.2, xmax = 2.2)
ax.set_ylim(ymin = -2.2, ymax = .2)
fig.canvas.draw()
interact_manual(pendulo_simple, t = (0, 10,.01)); | Modulo1/.ipynb_checkpoints/Clase4_OsciladorArmonico-checkpoint.ipynb | EricChiquitoG/Simulacion2017 | mit |
Phase plane $(x, \frac{dx}{dt})$
The position and velocity of the mass-spring system are written as:
\begin{align}
x(t) &= x(0) \cos(\omega_{0} t) + \frac{\dot{x}(0)}{\omega_{0}} \sin(\omega_{0} t)\\
\dot{x}(t) &= -\omega_{0}x(0) \sin(\omega_{0} t) + \dot{x}(0)\cos(\omega_{0}t)
\end{align} | k = 3  # elastic constant [N/m]
m = 1 # [kg]
omega_0 = np.sqrt(k/m)
x_0 = .5
dx_0 = .1
t = np.linspace(0, 50, 300)
x_t = x_0 *np.cos(omega_0 *t) + (dx_0/omega_0) * np.sin(omega_0 *t)
dx_t = -omega_0 * x_0 * np.sin(omega_0 * t) + dx_0 * np.cos(omega_0 * t)
plt.figure(figsize = (7, 4))
plt.plot(t, x_t, label = '$x(t)$', lw = 1)
plt.plot(t, dx_t, label = '$\dot{x}(t)$', lw = 1)
#plt.plot(t, dx_t/omega_0, label = '$\dot{x}(t)$', lw = 1) # Mostrar que al escalar, la amplitud queda igual
plt.legend(loc='center left', bbox_to_anchor=(1.01, 0.5), prop={'size': 14})
plt.xlabel('$t$', fontsize = 18)
plt.show()
plt.figure(figsize = (5, 5))
plt.plot(x_t, dx_t/omega_0, 'ro', ms = 2)
plt.xlabel('$x(t)$', fontsize = 18)
plt.ylabel('$\dot{x}(t)/\omega_0$', fontsize = 18)
plt.show()
plt.figure(figsize = (5, 5))
plt.scatter(x_t, dx_t/omega_0, cmap = 'viridis', c = dx_t, s = 8, lw = 0)
plt.xlabel('$x(t)$', fontsize = 18)
plt.ylabel('$\dot{x}(t)/\omega_0$', fontsize = 18)
plt.show() | Modulo1/.ipynb_checkpoints/Clase4_OsciladorArmonico-checkpoint.ipynb | EricChiquitoG/Simulacion2017 | mit |
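The phase-plane curve is a circle because $x^2 + (\dot{x}/\omega_0)^2$ is conserved along the trajectory (it is proportional to the total energy). A quick numerical confirmation with the same parameters as above:

```python
import numpy as np

k, m = 3.0, 1.0
omega_0 = np.sqrt(k / m)
x_0, dx_0 = 0.5, 0.1
t = np.linspace(0, 50, 300)
x_t = x_0 * np.cos(omega_0 * t) + (dx_0 / omega_0) * np.sin(omega_0 * t)
dx_t = -omega_0 * x_0 * np.sin(omega_0 * t) + dx_0 * np.cos(omega_0 * t)

radius_sq = x_t**2 + (dx_t / omega_0)**2   # squared radius in the scaled phase plane
assert np.allclose(radius_sq, x_0**2 + (dx_0 / omega_0)**2)  # constant in time
```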
Multiple initial conditions | k = 3  # elastic constant [N/m]
m = 1 # [kg]
omega_0 = np.sqrt(k/m)
t = np.linspace(0, 50, 50)
x_0s = np.array([.7, .5, .25, .1])
dx_0s = np.array([.2, .1, .05, .01])
cmaps = np.array(['viridis', 'inferno', 'magma', 'plasma'])
plt.figure(figsize = (6, 6))
for indx, x_0 in enumerate(x_0s):
x_t = x_0 *np.cos(omega_0 *t) + (dx_0s[indx]/omega_0) * np.sin(omega_0 *t)
dx_t = -omega_0 * x_0 * np.sin(omega_0 * t) + dx_0s[indx] * np.cos(omega_0 * t)
plt.scatter(x_t, dx_t/omega_0, cmap = cmaps[indx],
c = dx_t, s = 10,
lw = 0)
plt.xlabel('$x(t)$', fontsize = 18)
plt.ylabel('$\dot{x}(t)/\omega_0$', fontsize = 18)
#plt.legend(loc='center left', bbox_to_anchor=(1.05, 0.5)) | Modulo1/.ipynb_checkpoints/Clase4_OsciladorArmonico-checkpoint.ipynb | EricChiquitoG/Simulacion2017 | mit |
Problem Statement: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head.
<img src="images/field_kiank.png" style="width:600px;height:350px;">
<caption><center> <u> Figure 1 </u>: Football field<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>
They give you the following 2D dataset from France's past 10 games. | train_X, train_Y, test_X, test_Y = load_2D_dataset() | deep-learnining-specialization/2. improving deep neural networks/week1/Regularization.ipynb | diegocavalca/Studies | cc0-1.0 |
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
Your goal: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.
Analysis of the dataset: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well.
You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem.
1 - Non-regularized model
You will use the following neural network (already implemented for you below). This model can be used:
- in regularization mode -- by setting the lambd input to a non-zero value. We use "lambd" instead of "lambda" because "lambda" is a reserved keyword in Python.
- in dropout mode -- by setting the keep_prob to a value less than one
You will first try the model without any regularization. Then, you will implement:
- L2 regularization -- functions: "compute_cost_with_regularization()" and "backward_propagation_with_regularization()"
- Dropout -- functions: "forward_propagation_with_dropout()" and "backward_propagation_with_dropout()"
In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model. | def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters | deep-learnining-specialization/2. improving deep neural networks/week1/Regularization.ipynb | diegocavalca/Studies | cc0-1.0 |
Let's train the model without any regularization, and observe the accuracy on the train/test sets. | parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters) | deep-learnining-specialization/2. improving deep neural networks/week1/Regularization.ipynb | diegocavalca/Studies | cc0-1.0 |
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the baseline model (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model. | plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) | deep-learnining-specialization/2. improving deep neural networks/week1/Regularization.ipynb | diegocavalca/Studies | cc0-1.0 |
The non-regularized model is obviously overfitting the training set: it is fitting the noisy points! Let's now look at two techniques for reducing overfitting.
2 - L2 Regularization
The standard way to avoid overfitting is called L2 regularization. It consists of appropriately modifying your cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{L}\right) + (1-y^{(i)})\log\left(1- a^{L}\right) \large{)} \tag{1}$$
To:
$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{L}\right) + (1-y^{(i)})\log\left(1- a^{L}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
Exercise: Implement compute_cost_with_regularization() which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :
python
np.sum(np.square(Wl))
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $. | # GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = (lambd/(2*m))*(np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3)))
### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1))) | deep-learnining-specialization/2. improving deep neural networks/week1/Regularization.ipynb | diegocavalca/Studies | cc0-1.0 |
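As a quick standalone sanity check (separate from the graded function above), the L2 term generalizes naturally to any number of layers. The helper below, `l2_term`, is illustrative and assumes the weights are stored under keys "W1", "W2", ... as in this notebook's parameters dictionary:

```python
import numpy as np

def l2_term(parameters, lambd, m):
    """Compute (lambd / (2m)) * sum of squared entries over every weight matrix.

    Illustrative helper; assumes weights are stored as "W1", "W2", ...
    and biases as "b1", "b2", ... (biases are not regularized).
    """
    squared = sum(np.sum(np.square(W))
                  for key, W in parameters.items() if key.startswith("W"))
    return (lambd / (2 * m)) * squared

# Tiny check with hand-computable numbers:
params = {"W1": np.array([[1.0, 2.0]]), "b1": np.zeros((1, 1)),
          "W2": np.array([[3.0]])}
# squared sum: 1 + 4 + 9 = 14; with lambd=2, m=7 -> (2 / 14) * 14 = 2
print(l2_term(params, lambd=2.0, m=7))  # 2.0
```

Note that the bias vectors are skipped on purpose: L2 regularization is conventionally applied to weights only.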
Expected Output:
<table>
<tr>
<td>
**cost**
</td>
<td>
1.78648594516
</td>
</tr>
</table>
Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.
Exercise: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$). | # GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + (lambd/m)*W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd/m)*W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + (lambd/m)*W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"])) | deep-learnining-specialization/2. improving deep neural networks/week1/Regularization.ipynb | diegocavalca/Studies | cc0-1.0 |
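Since the cross-entropy part of the gradient is unchanged, only the new term $\frac{\lambda}{m} W$ needs verifying. A small standalone check (not part of the assignment) compares the analytic gradient of the L2 cost against centered finite differences:

```python
import numpy as np

np.random.seed(0)
m, lambd = 5, 0.7
W = np.random.randn(3, 2)

def l2_cost(W):
    # The regularization term for a single weight matrix
    return (lambd / (2 * m)) * np.sum(np.square(W))

# Analytic gradient used in backward_propagation_with_regularization:
grad_analytic = (lambd / m) * W

# Centered finite differences, one entry at a time:
eps = 1e-6
grad_numeric = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp, Wm = W.copy(), W.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        grad_numeric[i, j] = (l2_cost(Wp) - l2_cost(Wm)) / (2 * eps)

print(np.max(np.abs(grad_analytic - grad_numeric)))  # tiny (float round-off)
```

Because the L2 cost is quadratic, the two gradients agree up to floating-point error.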
Expected Output:
<table>
<tr>
<td>
**dW1**
</td>
<td>
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
</td>
</tr>
<tr>
<td>
**dW2**
</td>
<td>
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
</td>
</tr>
<tr>
<td>
**dW3**
</td>
<td>
[[-1.77691347 -0.11832879 -0.09397446]]
</td>
</tr>
</table>
Let's now run the model with L2 regularization $(\lambda = 0.7)$. The model() function will call:
- compute_cost_with_regularization instead of compute_cost
- backward_propagation_with_regularization instead of backward_propagation | parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters) | deep-learnining-specialization/2. improving deep neural networks/week1/Regularization.ipynb | diegocavalca/Studies | cc0-1.0 |
Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary. | plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) | deep-learnining-specialization/2. improving deep neural networks/week1/Regularization.ipynb | diegocavalca/Studies | cc0-1.0 |
Observations:
- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.
What is L2-regularization actually doing?:
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes.
<font color='blue'>
What you should remember -- the implications of L2-regularization on:
- The cost computation:
- A regularization term is added to the cost
- The backpropagation function:
- There are extra terms in the gradients with respect to weight matrices
- Weights end up smaller ("weight decay"):
- Weights are pushed to smaller values.
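The "weight decay" name comes from rewriting the update: $W := W - \alpha\,(dW_{data} + \frac{\lambda}{m} W) = (1 - \frac{\alpha\lambda}{m})\,W - \alpha\, dW_{data}$, i.e. every step first shrinks the weights by a constant factor. A quick standalone numerical check (illustrative; the data part of the gradient is set to zero for clarity):

```python
import numpy as np

m, lambd, alpha = 10, 0.7, 0.3
W = np.array([[1.0, -2.0], [0.5, 4.0]])
dW_data = np.zeros_like(W)  # pretend the data part of the gradient is zero

# One gradient-descent step with the regularized gradient...
W_new = W - alpha * (dW_data + (lambd / m) * W)

# ...is the same as shrinking W by a constant factor ("weight decay"):
decay = 1 - alpha * lambd / m          # 0.979 here
print(np.allclose(W_new, decay * W))   # True
```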
3 - Dropout
Finally, dropout is a widely used regularization technique that is specific to deep learning.
It randomly shuts down some neurons in each iteration. Watch these two videos to see what this means!
<!--
To understand drop-out, consider this conversation with a friend:
- Friend: "Why do you need all these neurons to train your network and classify images?".
- You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more features my model learns!"
- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"
- You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitely possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."
-->
<center>
<video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<br>
<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. </center></caption>
<center>
<video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time.
3.1 - Forward propagation with dropout
Exercise: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.
Instructions:
You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using np.random.rand() to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}]$ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 0 with probability (1-keep_prob) or 1 with probability (keep_prob), by thresholding values in $D^{[1]}$ appropriately. Hint: X = (X < 0.5) sets each entry of a matrix X to 1 (if the entry is less than 0.5) or to 0 (otherwise). Note that 0 and 1 are respectively equivalent to False and True.
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by keep_prob. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.) | # GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0], A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = D1 < keep_prob # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = A1 * D1 # Step 3: shut down some neurons of A1
A1 = A1 / keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0], A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = D2 < keep_prob # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = A2 * D2 # Step 3: shut down some neurons of A2
A2 = A2 / keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3)) | deep-learnining-specialization/2. improving deep neural networks/week1/Regularization.ipynb | diegocavalca/Studies | cc0-1.0 |
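Outside the graded function, you can convince yourself that the division by keep_prob (inverted dropout) keeps the expected activation unchanged. A standalone sketch on a matrix of ones, large enough for the empirical mean to be stable:

```python
import numpy as np

np.random.seed(1)
keep_prob = 0.7
A = np.ones((100, 10000))  # large layer so the empirical mean is stable

D = np.random.rand(*A.shape) < keep_prob  # Steps 1-2: random binary mask
A_drop = (A * D) / keep_prob              # Steps 3-4: mask, then rescale

print(A.mean())       # 1.0
print(A_drop.mean())  # close to 1.0 thanks to the 1/keep_prob scaling
print(D.mean())       # close to keep_prob: the fraction of kept neurons
```

Without the Step-4 rescaling, the mean of A_drop would instead shrink toward keep_prob, and the expected value of $Z^{[2]}$ would change.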
Expected Output:
<table>
<tr>
<td>
**A3**
</td>
<td>
[[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
</td>
</tr>
</table>
3.2 - Backward propagation with dropout
Exercise: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.
Instruction:
Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to A1. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to dA1.
2. During forward propagation, you had divided A1 by keep_prob. In backpropagation, you'll therefore have to divide dA1 by keep_prob again (the calculus interpretation is that if $A^{[1]}$ is scaled by keep_prob, then its derivative $dA^{[1]}$ is also scaled by the same keep_prob). | # GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = dA2 * D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2 / keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = dA1 * D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1 / keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"])) | deep-learnining-specialization/2. improving deep neural networks/week1/Regularization.ipynb | diegocavalca/Studies | cc0-1.0 |
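A quick standalone check (illustrative, not part of the assignment) that reapplying the cached mask in the backward pass zeroes the gradients of exactly the neurons that were dropped in the forward pass:

```python
import numpy as np

np.random.seed(3)
keep_prob = 0.8
A = np.random.rand(4, 5)
D = np.random.rand(*A.shape) < keep_prob  # mask saved in the cache at forward time

dA = np.random.randn(*A.shape)            # incoming gradient
dA = (dA * D) / keep_prob                 # Steps 1-2 of the backward pass

# Dropped neurons (D == 0) must receive exactly zero gradient:
print(np.all(dA[~D] == 0))  # True
```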
Expected Output:
<table>
<tr>
<td>
**dA1**
</td>
<td>
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
</td>
</tr>
<tr>
<td>
**dA2**
</td>
<td>
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
</td>
</tr>
</table>
Let's now run the model with dropout (keep_prob = 0.86). It means that at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function model() will now call:
- forward_propagation_with_dropout instead of forward_propagation.
- backward_propagation_with_dropout instead of backward_propagation. | parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters) | deep-learnining-specialization/2. improving deep neural networks/week1/Regularization.ipynb | diegocavalca/Studies | cc0-1.0 |
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary. | plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) | deep-learnining-specialization/2. improving deep neural networks/week1/Regularization.ipynb | diegocavalca/Studies | cc0-1.0 |
Create Local File Space | mydatafolder = os.environ['PWD'] + '/' + 'my_team_name_data_folder'
# THIS DISABLES HOST KEY CHECKING! Should be okay for our temporary running machines though.
cnopts = pysftp.CnOpts()
cnopts.hostkeys = None
#Get this from your Nimbix machine (or other cloud service provider!)
hostname='NAE-xxxx.jarvice.com'
username='nimbix'
password='xx' | tutorials/General_move_data_to_from_Nimbix_Cloud.ipynb | setiQuest/ML4SETI | apache-2.0 |
PUT a file
If you follow the Step 3 tutorial, you will have created some zip files containing the PNGs. These will be located in your my_team_name_data_folder/zipfiles/ directory. | with pysftp.Connection(hostname, username=username, password=password, cnopts=cnopts) as sftp:
sftp.put(mydatafolder + '/zipfiles/classification_6_noise.zip') # upload file to remote | tutorials/General_move_data_to_from_Nimbix_Cloud.ipynb | setiQuest/ML4SETI | apache-2.0 |
GET a file
First, I define a separate location to hold files I get from the remote machine.
if os.path.exists(fromnimbixfolder) is False:
os.makedirs(fromnimbixfolder)
with pysftp.Connection(hostname, username=username, password=password, cnopts=cnopts) as sftp:
with pysftp.cd(fromnimbixfolder):
sftp.get('test.csv') #data in local HOME space
sftp.get('/data/my_team_name_data_folder/our_results.csv') #data in persistent Nimbix Cloud storage | tutorials/General_move_data_to_from_Nimbix_Cloud.ipynb | setiQuest/ML4SETI | apache-2.0 |
Variables and data types
Here we go! We've written our first line of code... But I guess we want to do something a little more interesting, right? Well, for a start, we might want to use Python to execute some operation (say: sum two numbers like 2 and 3) and process the result to print it on the screen, process it, and reuse it as many times as we want...
Variables are what we use to store values. Think of a variable as a shoebox where you place your content; next time you need that content (i.e. the result of a previous operation, or for example some input you've read from a file) you simply call the shoebox name... | result = 2 + 3
#now we print the result
print(result)
# by the way, I'm a comment. I'm not executed
# every line of code following the sign # is ignored:
# print("I'm line n. 3: do you see me?")
# see? You don't see me...
print("I'm line nr. 5 and you DO see me!") | participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb | mromanello/SunoikisisDC_NER | gpl-3.0 |
That's it! As easy as that (yes, in some programming languages you have to create or declare the variable first and then use it to fill the shoebox; in Python, you go ahead and simply use it!)
Now, what do you think we will get when we execute the following code? | result + 5 | participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb | mromanello/SunoikisisDC_NER | gpl-3.0 |
What types of values can we put into a variable? What goes into the shoebox? We can start by the members of this list:
Integers (-1,0,1,2,3,4...)
Strings ("Hello", "s", "Wolfgang Amadeus Mozart", "I am the α and the ω!"...)
Floats (3.14159; 2.71828...)
Booleans (True, False)
If you're not sure what type of value you're dealing with, you can use the function type(). Yes, it works with variables too...! | type("I am the α and the ω!")
type(2.7182818284590452353602874713527)
type(True)
result = "hello"
type(result) | participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb | mromanello/SunoikisisDC_NER | gpl-3.0 |
You declare strings with single ('') or double ("") quotes: it makes no difference! But now two questions:
1. what happens if you forget the quotes?
2. what happens if you put quotes around a number? | hello = "goodbye"
print(hello)
print("hello")
type("2") | participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb | mromanello/SunoikisisDC_NER | gpl-3.0 |
String, integer, float... Why is that so important? Well, try to sum two strings and see what happens... | "2" + "3"
#probably you wanted this...
int("2") + int("3") | participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb | mromanello/SunoikisisDC_NER | gpl-3.0 |
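Here are a few more conversion examples you can try, going back and forth between strings and numbers with int(), str() and float():

```python
# Explicit conversions between strings and numbers:
print(int("2") + int("3"))  # 5   -> numeric addition
print(str(2) + str(3))      # 23  -> string concatenation
print(float("2.5") * 2)     # 5.0
print(type(float("2.5")))   # <class 'float'>
```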
But if we are working with strings, then the "+" sign is used to concatenate the strings: | a = "interesting!"
print("not very " + a) | participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb | mromanello/SunoikisisDC_NER | gpl-3.0 |
Lists and dictionaries
Lists and dictionaries are two very useful types to store whole collections of data | beatles = ["John", "Paul", "George", "Ringo"]
type(beatles)
# dictionaries collections of key : value pairs
beatles_dictionary = { "john" : "John Lennon" ,
"paul" : "Paul McCartney",
"george" : "George Harrison",
"ringo" : "Ringo Starr"}
type(beatles_dictionary) | participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb | mromanello/SunoikisisDC_NER | gpl-3.0 |
(there are also other types of collection, like Tuples and Sets, but we won't talk about them now; read the links if you're interested!)
Items in a list are accessible using their index. Do remember that indexing starts from 0! | print(beatles[0])
#indexes can be negative!
beatles[-1] | participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb | mromanello/SunoikisisDC_NER | gpl-3.0 |
Dictionaries are collections of key : value pairs. You access the value using the key as index | beatles_dictionary["john"]
beatles_dictionary[0] | participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb | mromanello/SunoikisisDC_NER | gpl-3.0 |
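The last cell raises a KeyError, because 0 is not a key of the dictionary. To avoid that, dictionaries also offer the .get() method, which returns a default value for missing keys instead of raising an error (the dictionary is redefined here so the cell is self-contained):

```python
beatles_dictionary = {"john": "John Lennon", "paul": "Paul McCartney",
                      "george": "George Harrison", "ringo": "Ringo Starr"}

# .get() returns a default value instead of raising a KeyError:
print(beatles_dictionary.get("john"))            # John Lennon
print(beatles_dictionary.get(0, "no such key"))  # no such key
```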
There are a bunch of methods that you can apply to list to work with them.
You can append items at the end of a list | beatles.append("Billy Preston")
beatles | participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb | mromanello/SunoikisisDC_NER | gpl-3.0 |
You can learn the index of an item | beatles.index("George") | participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb | mromanello/SunoikisisDC_NER | gpl-3.0 |
You can insert elements at a predefinite index: | beatles.insert(0, "Pete Best")
print(beatles.index("George"))
beatles | participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb | mromanello/SunoikisisDC_NER | gpl-3.0 |
But most importantly, you can slice lists, producing sub-lists by specifying the range of indexes you want: | beatles[1:5] | participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb | mromanello/SunoikisisDC_NER | gpl-3.0 |
Do you notice something strange? Yes, the limit index is not inclusive (i.e. item beatles[5] is not included) | beatles[5] | participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb | mromanello/SunoikisisDC_NER | gpl-3.0 |
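A few more slicing patterns worth knowing: omitted bounds, steps, and negative indexes. The list below assumes you ran the earlier cells (so "Pete Best" was inserted at index 0 and "Billy Preston" appended at the end):

```python
beatles = ["Pete Best", "John", "Paul", "George", "Ringo", "Billy Preston"]

print(beatles[:2])   # ['Pete Best', 'John'] -- a missing start defaults to 0
print(beatles[2:])   # from index 2 to the end of the list
print(beatles[::2])  # every second item
print(beatles[-2:])  # the last two items
```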