Preprocessing We're going to clean up our input data first with operations such as lowercasing text, removing stop (filler) words, filtering with regular expressions, etc.
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
import re

nltk.download('stopwords')
STOPWORDS = stopwords.words('english')
print(STOPWORDS[:5])
porter = PorterStemmer()

def preprocess(text, stopwords=STOPWORDS):
    """Conditional preprocessing on our text unique to our task."""
    ...
Sharon Accepts Plan to Reduce Gaza Army Operation, Haaretz Says sharon accepts plan reduce gaza army operation haaretz says
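The body of `preprocess` is elided above; a minimal sketch that reproduces the example output might look like this (the tiny inline stopword list is a stand-in for nltk's full English list, and no stemming is applied):

```python
import re

# Stand-in stopword list (the notebook downloads nltk's full English list).
STOPWORDS = {"a", "an", "and", "in", "is", "of", "the", "to"}

def preprocess(text, stopwords=STOPWORDS):
    """Lowercase, strip non-alphanumeric characters, and drop stopwords."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9]+", " ", text)  # filter with a regular expression
    tokens = [w for w in text.split() if w not in stopwords]
    return " ".join(tokens)

print(preprocess("Sharon Accepts Plan to Reduce Gaza Army Operation, Haaretz Says"))
# → sharon accepts plan reduce gaza army operation haaretz says
```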
MIT
notebooks/13_Recurrent_Neural_Networks.ipynb
udapy/MadeWithML
Split data
import collections
from sklearn.model_selection import train_test_split

TRAIN_SIZE = 0.7
VAL_SIZE = 0.15
TEST_SIZE = 0.15

def train_val_test_split(X, y, train_size):
    """Split dataset into data splits."""
    X_train, X_, y_train, y_ = train_test_split(X, y, train_size=TRAIN_SIZE, stratify=y)
    X_val, X_test, y_va...
X_train: (84000,), y_train: (84000,) X_val: (18000,), y_val: (18000,) X_test: (18000,), y_test: (18000,) Sample point: china battles north korea nuclear talks → World
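The split proportions can be sanity-checked with a plain-Python sketch (the notebook itself uses sklearn's `train_test_split` with `stratify=y`, applied twice; this version only illustrates the 70/15/15 arithmetic):

```python
import random

def train_val_test_split(items, train_size=0.7):
    """Shuffle, take ~70% for train, and split the rest evenly into val/test."""
    items = items[:]                      # don't mutate the caller's list
    random.Random(0).shuffle(items)       # fixed seed for reproducibility
    n_train = round(train_size * len(items))
    n_val = (len(items) - n_train) // 2   # remaining half val, half test
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

train, val, test = train_val_test_split(list(range(120000)))
print(len(train), len(val), len(test))  # → 84000 18000 18000
```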
LabelEncoder Next we'll define a `LabelEncoder` to encode our text labels into unique indices.
import itertools

class LabelEncoder(object):
    """Label encoder for tag labels."""
    def __init__(self, class_to_index={}):
        self.class_to_index = class_to_index
        self.index_to_class = {v: k for k, v in self.class_to_index.items()}
        self.classes = list(self.class_to_index.keys())

    def __len...
counts: [21000 21000 21000 21000] weights: {0: 4.761904761904762e-05, 1: 4.761904761904762e-05, 2: 4.761904761904762e-05, 3: 4.761904761904762e-05}
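The printed weights are just the inverse class counts; a small sketch of that computation (the helper name is ours, not the notebook's):

```python
import collections

def get_class_weights(y):
    """Weight each class by the inverse of its frequency."""
    counts = collections.Counter(y)
    return {cls: 1.0 / count for cls, count in counts.items()}

# With 21000 samples per class, every weight is 1/21000 ≈ 4.76e-05,
# matching the printed output above.
weights = get_class_weights([0] * 21000 + [1] * 21000 + [2] * 21000 + [3] * 21000)
print(weights[0])
```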
Tokenizer We'll define a `Tokenizer` to convert our text input data into token indices.
import json
from collections import Counter
from more_itertools import take

class Tokenizer(object):
    def __init__(self, char_level, num_tokens=None,
                 pad_token='<PAD>', oov_token='<UNK>',
                 token_to_index=None):
        self.char_level = char_level
        self.separator = '' if self...
Text to indices: (preprocessed) → china battles north korea nuclear talks (tokenized) → [ 16 1491 285 142 114 24]
Padding We'll need to apply 2D padding to our tokenized text.
def pad_sequences(sequences, max_seq_len=0):
    """Pad sequences to max length in sequence."""
    max_seq_len = max(max_seq_len, max(len(sequence) for sequence in sequences))
    padded_sequences = np.zeros((len(sequences), max_seq_len))
    for i, sequence in enumerate(sequences):
        padded_sequences[i][:len(se...
(3, 6) [[1.600e+01 1.491e+03 2.850e+02 1.420e+02 1.140e+02 2.400e+01] [1.445e+03 2.300e+01 6.560e+02 2.197e+03 1.000e+00 0.000e+00] [1.200e+02 1.400e+01 1.955e+03 1.005e+03 1.529e+03 4.014e+03]]
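The `pad_sequences` body is cut off above; a completed sketch of the same idea:

```python
import numpy as np

def pad_sequences(sequences, max_seq_len=0):
    """Zero-pad a list of token-index lists into a 2D array."""
    max_seq_len = max(max_seq_len, max(len(seq) for seq in sequences))
    padded_sequences = np.zeros((len(sequences), max_seq_len))
    for i, seq in enumerate(sequences):
        padded_sequences[i][:len(seq)] = seq
    return padded_sequences

padded = pad_sequences([[16, 1491, 285], [1445, 23]])
print(padded.shape)  # → (2, 3)
```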
Datasets We're going to create Datasets and DataLoaders to be able to efficiently create batches with our data splits.
class Dataset(torch.utils.data.Dataset):
    def __init__(self, X, y):
        self.X = X
        self.y = y

    def __len__(self):
        return len(self.y)

    def __str__(self):
        return f"<Dataset(N={len(self)})>"

    def __getitem__(self, index):
        X = self.X[index]
        y = self.y[index]
        ...
Sample batch: X: [64, 14] seq_lens: [64] y: [64] Sample point: X: tensor([ 16, 1491, 285, 142, 114, 24, 0, 0, 0, 0, 0, 0, 0, 0], device='cpu') seq_len: 6 y: 3
Trainer Let's create the `Trainer` class that we'll use to facilitate training for our experiments.
class Trainer(object):
    def __init__(self, model, device, loss_fn=None, optimizer=None, scheduler=None):
        # Set params
        self.model = model
        self.device = device
        self.loss_fn = loss_fn
        self.optimizer = optimizer
        self.scheduler = scheduler

    def train_step(self, dataloa...
Vanilla RNN Inputs to RNNs are sequential, like text or time-series data.
BATCH_SIZE = 64
EMBEDDING_DIM = 100

# Input
sequence_size = 8  # words per input
x = torch.rand((BATCH_SIZE, sequence_size, EMBEDDING_DIM))
seq_lens = torch.randint(high=sequence_size, size=(1, BATCH_SIZE))
print(x.shape)
print(seq_lens.shape)
torch.Size([64, 8, 100]) torch.Size([1, 64])
RNN forward pass for a single time step $X_t$:

$$h_t = \tanh(W_{hh}h_{t-1} + W_{xh}X_t + b_h)$$

*where*:
* $W_{hh}$ = hidden unit weights $\in \mathbb{R}^{H \times H}$ ($H$ is the hidden dim)
* $h_{t-1}$ = previous timestep's hidden state $\in \mathbb{R}^{N \times H}$
* $W_{xh}$ = input weights $\in \mathbb{R}^{E \times H}$
* $X_t$ = input at time...
RNN_HIDDEN_DIM = 128
DROPOUT_P = 0.1
RNN_DROPOUT_P = 0.1

# Initialize hidden state
hidden_t = torch.zeros((BATCH_SIZE, RNN_HIDDEN_DIM))
print(hidden_t.size())
torch.Size([64, 128])
We'll show how to create an RNN cell using PyTorch's [`RNNCell`](https://pytorch.org/docs/stable/generated/torch.nn.RNNCell.html#torch.nn.RNNCell) and the more abstracted [`RNN`](https://pytorch.org/docs/stable/generated/torch.nn.RNN.html#torch.nn.RNN).
# Initialize RNN cell
rnn_cell = nn.RNNCell(EMBEDDING_DIM, RNN_HIDDEN_DIM)
print(rnn_cell)

# Forward pass through RNN
x = x.permute(1, 0, 2)  # RNN needs batch_size to be at dim 1

# Loop through the input's time steps
hiddens = []
for t in range(sequence_size):
    hidden_t = rnn_cell(x[t], hidden_t)
    hiddens.append...
tensor([[-0.5056, 0.6157, 0.4275, ..., 0.1804, -0.1480, -0.4822], [-0.1490, 0.6549, 0.3184, ..., 0.2831, -0.3557, -0.5438], [-0.5290, 0.4321, 0.0885, ..., 0.4848, -0.2672, -0.2660], ..., [-0.3273, 0.6155, -0.2170, ..., 0.1718, -0.1623, -0.3876], [-0.3860, 0.3749, ...
In our model, we want to use the RNN's output after the last relevant token in the sentence is processed. The last relevant token doesn't refer to the `<PAD>` tokens but to the last actual word in the sentence, and its index is different for each input in the batch. This is why we included a `seq_lens` tensor in our batches.
def gather_last_relevant_hidden(hiddens, seq_lens):
    """Extract and collect the last relevant hidden state based on the sequence length."""
    seq_lens = seq_lens.long().detach().cpu().numpy() - 1
    out = []
    for batch_index, column_index in enumerate(seq_lens):
        out.append(hiddens[batch_index, col...
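The same last-relevant-hidden indexing can be checked with NumPy (shapes only; the real function operates on torch tensors):

```python
import numpy as np

def gather_last_relevant_hidden_np(hiddens, seq_lens):
    """For each batch item, pick the hidden state at index seq_len - 1."""
    last = [hiddens[b, t - 1] for b, t in enumerate(seq_lens)]
    return np.stack(last)

hiddens = np.arange(12).reshape(2, 3, 2)  # (batch=2, seq_len=3, hidden_dim=2)
out = gather_last_relevant_hidden_np(hiddens, seq_lens=[2, 3])
print(out)  # row 0 is hiddens[0, 1], row 1 is hiddens[1, 2]
```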
There are many different ways to use RNNs. So far we've processed our inputs one timestep at a time, and we could either use the RNN's output at each time step or just the final timestep's output. Let's look at a few other possibilities.
Model
import torch.nn.functional as F

HIDDEN_DIM = 100

class RNN(nn.Module):
    def __init__(self, embedding_dim, vocab_size, rnn_hidden_dim,
                 hidden_dim, dropout_p, num_classes, padding_idx=0):
        super(RNN, self).__init__()

        # Initialize embeddings
        self.embeddings = nn.Embeddin...
<bound method Module.named_parameters of RNN( (embeddings): Embedding(5000, 100, padding_idx=0) (rnn): RNN(100, 128, batch_first=True) (dropout): Dropout(p=0.1, inplace=False) (fc1): Linear(in_features=128, out_features=100, bias=True) (fc2): Linear(in_features=100, out_features=4, bias=True) )>
Training
from torch.optim import Adam

NUM_LAYERS = 1
LEARNING_RATE = 1e-4
PATIENCE = 10
NUM_EPOCHS = 50

# Define loss
class_weights_tensor = torch.Tensor(list(class_weights.values())).to(device)
loss_fn = nn.CrossEntropyLoss(weight=class_weights_tensor)

# Define optimizer & scheduler
optimizer = Adam(model.parameters(), lr=LEAR...
Epoch: 1 | train_loss: 1.25500, val_loss: 1.12003, lr: 1.00E-04, _patience: 10 Epoch: 2 | train_loss: 1.03130, val_loss: 0.97659, lr: 1.00E-04, _patience: 10 Epoch: 3 | train_loss: 0.89955, val_loss: 0.87245, lr: 1.00E-04, _patience: 10 Epoch: 4 | train_loss: 0.79928, val_loss: 0.79484, lr: 1.00E-04, _patience: 10 Epoc...
Evaluation
import json
from sklearn.metrics import precision_recall_fscore_support

def get_performance(y_true, y_pred, classes):
    """Per-class performance metrics."""
    # Performance
    performance = {"overall": {}, "class": {}}

    # Overall performance
    metrics = precision_recall_fscore_support(y_true, y_pred, average...
{ "precision": 0.8211429212272771, "recall": 0.8212777777777778, "f1": 0.8208202838924475, "num_samples": 18000.0 }
Gated RNNs: LSTMs & GRUs While our simple RNNs so far are great for sequentially processing our inputs, they have quite a few disadvantages. They commonly suffer from exploding or vanishing gradients as a result of using the same set of weights ($W_{xh}$ and $W_{hh}$) with each timestep's input. During backpropagation, t...
# Input
sequence_size = 8  # words per input
x = torch.rand((BATCH_SIZE, sequence_size, EMBEDDING_DIM))
print(x.shape)

# GRU
gru = nn.GRU(input_size=EMBEDDING_DIM, hidden_size=RNN_HIDDEN_DIM, batch_first=True)

# Forward pass
out, h_n = gru(x)
print(f"out: {out.shape}")
print(f"h_n: {h_n.shape}")
out: torch.Size([64, 8, 128]) h_n: torch.Size([1, 64, 128])
Bidirectional RNN We can also have RNNs that process inputs from both directions (first token to last token and vice versa) and combine their outputs. This architecture is known as a bidirectional RNN.
# GRU
gru = nn.GRU(input_size=EMBEDDING_DIM, hidden_size=RNN_HIDDEN_DIM, batch_first=True, bidirectional=True)

# Forward pass
out, h_n = gru(x)
print(f"out: {out.shape}")
print(f"h_n: {h_n.shape}")
out: torch.Size([64, 8, 256]) h_n: torch.Size([2, 64, 128])
Notice that the output for each sample at each timestep has size 256 (double the `RNN_HIDDEN_DIM`). This is because it concatenates the outputs from both the forward and backward directions of the BiRNN.
Model
class GRU(nn.Module):
    def __init__(self, embedding_dim, vocab_size, rnn_hidden_dim,
                 hidden_dim, dropout_p, num_classes, padding_idx=0):
        super(GRU, self).__init__()

        # Initialize embeddings
        self.embeddings = nn.Embedding(embedding_dim=embedding_dim, ...
<bound method Module.named_parameters of GRU( (embeddings): Embedding(5000, 100, padding_idx=0) (rnn): GRU(100, 128, batch_first=True, bidirectional=True) (dropout): Dropout(p=0.1, inplace=False) (fc1): Linear(in_features=256, out_features=100, bias=True) (fc2): Linear(in_features=100, out_features=4, bias=Tr...
Training
# Define loss
class_weights_tensor = torch.Tensor(list(class_weights.values())).to(device)
loss_fn = nn.CrossEntropyLoss(weight=class_weights_tensor)

# Define optimizer & scheduler
optimizer = Adam(model.parameters(), lr=LEARNING_RATE)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', ...
Epoch: 1 | train_loss: 1.16930, val_loss: 0.94210, lr: 1.00E-04, _patience: 10 Epoch: 2 | train_loss: 0.82127, val_loss: 0.72819, lr: 1.00E-04, _patience: 10 Epoch: 3 | train_loss: 0.65862, val_loss: 0.64288, lr: 1.00E-04, _patience: 10 Epoch: 4 | train_loss: 0.58088, val_loss: 0.60078, lr: 1.00E-04, _patience: 10 Epoc...
Evaluation
from pathlib import Path

# Get predictions
test_loss, y_true, y_prob = trainer.eval_step(dataloader=test_dataloader)
y_pred = np.argmax(y_prob, axis=1)

# Determine performance
performance = get_performance(
    y_true=y_test, y_pred=y_pred, classes=label_encoder.classes)
print(json.dumps(performance['overall'], indent...
Inference
def get_probability_distribution(y_prob, classes):
    """Create a dict of class probabilities from an array."""
    results = {}
    for i, class_ in enumerate(classes):
        results[class_] = np.float64(y_prob[i])
    sorted_results = {k: v for k, v in sorted(
        results.items(), key=lambda item: item[1], rev...
{ "Sports": 0.9944021105766296, "World": 0.004813326057046652, "Sci/Tech": 0.0007053817971609533, "Business": 7.924934470793232e-05 }
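The truncated `get_probability_distribution` above can be completed as a short sketch:

```python
import numpy as np

def get_probability_distribution(y_prob, classes):
    """Create a dict of class probabilities from an array, sorted descending."""
    results = {class_: float(p) for class_, p in zip(classes, y_prob)}
    return dict(sorted(results.items(), key=lambda item: item[1], reverse=True))

dist = get_probability_distribution(
    np.array([0.0048, 0.9944, 0.0007, 0.0001]),
    ["World", "Sports", "Business", "Sci/Tech"])
print(list(dist))  # most probable class first
```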
Getting started with TensorFlow
**Learning Objectives**
1. Practice defining and performing basic operations on constant Tensors
1. Use TensorFlow's automatic differentiation capability
1. Learn how to train a linear regression from scratch with TensorFlow

In this notebook, we will start by reviewing the main operat...
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst

# Ensure the right version of TensorFlow is installed.
!pip freeze | grep tensorflow==2.1

import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf

print(tf.__version__)
2.1.3
Apache-2.0
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/1_core_tensorflow.ipynb
anoopdobhal/training-data-analyst
Operations on Tensors
Variables and Constants
Tensors in TensorFlow are either constant (`tf.constant`) or variables (`tf.Variable`). Constant values cannot be changed, while variable values can be. The main difference is that instances of `tf.Variable` have methods allowing us to change their values while tensors con...
x = tf.constant([2, 3, 4])
x

x = tf.Variable(2.0, dtype=tf.float32, name='my_variable')
x.assign(45.8)  # TODO 1
x
x.assign_add(4)  # TODO 2
x
x.assign_sub(3)  # TODO 3
x
Point-wise operations
TensorFlow offers point-wise tensor operations similar to NumPy's:
* `tf.add` allows us to add the components of a tensor
* `tf.multiply` allows us to multiply the components of a tensor
* `tf.subtract` allows us to subtract the components of a tensor
* `tf.math.*` contains the usual math operat...
a = tf.constant([5, 3, 8])  # TODO 1
b = tf.constant([3, -1, 2])
c = tf.add(a, b)
d = a + b
print("c:", c)
print("d:", d)

a = tf.constant([5, 3, 8])  # TODO 2
b = tf.constant([3, -1, 2])
c = tf.multiply(a, b)
d = a * b
print("c:", c)
print("d:", d)

# tf.math.exp expects floats so we need to explicitly give the type
a =...
b: tf.Tensor([ 148.41316 20.085537 2980.958 ], shape=(3,), dtype=float32)
NumPy Interoperability
In addition to native TF tensors, TensorFlow operations can take native Python types and NumPy arrays as operands.
# native python list
a_py = [1, 2]
b_py = [3, 4]
tf.add(a_py, b_py)  # TODO 1

# numpy arrays
a_np = np.array([1, 2])
b_np = np.array([3, 4])
tf.add(a_np, b_np)  # TODO 2

# native TF tensor
a_tf = tf.constant([1, 2])
b_tf = tf.constant([3, 4])
tf.add(a_tf, b_tf)  # TODO 3
You can convert a native TF tensor to a NumPy array using `.numpy()`.
a_tf.numpy()
Linear Regression
Now let's use low-level TensorFlow operations to implement linear regression. Later in the course you'll see abstracted ways to do this using high-level TensorFlow.
Toy Dataset
We'll model the following function:
\begin{equation}
y = 2x + 10
\end{equation}
X = tf.constant(range(10), dtype=tf.float32)
Y = 2 * X + 10

print("X:{}".format(X))
print("Y:{}".format(Y))
X:[0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] Y:[10. 12. 14. 16. 18. 20. 22. 24. 26. 28.]
Let's also create a test dataset to evaluate our models:
X_test = tf.constant(range(10, 20), dtype=tf.float32)
Y_test = 2 * X_test + 10

print("X_test:{}".format(X_test))
print("Y_test:{}".format(Y_test))
X_test:[10. 11. 12. 13. 14. 15. 16. 17. 18. 19.] Y_test:[30. 32. 34. 36. 38. 40. 42. 44. 46. 48.]
Loss Function
The simplest model we can build is one that returns the sample mean of the training set for every value of $x$:
y_mean = Y.numpy().mean()

def predict_mean(X):
    y_hat = [y_mean] * len(X)
    return y_hat

Y_hat = predict_mean(X_test)
Using mean squared error, our loss is:\begin{equation}MSE = \frac{1}{m}\sum_{i=1}^{m}(\hat{Y}_i-Y_i)^2\end{equation} For this simple model the loss is then:
errors = (Y_hat - Y)**2
loss = tf.reduce_mean(errors)
loss.numpy()
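For the toy data ($y = 2x + 10$), the mean predictor's baseline loss can be worked out with NumPy alone; here we assume it is evaluated against the held-out `Y_test`:

```python
import numpy as np

Y_train = 2 * np.arange(10) + 10       # 10, 12, ..., 28
Y_test = 2 * np.arange(10, 20) + 10    # 30, 32, ..., 48
y_mean = Y_train.mean()               # 19.0

# MSE of always predicting the training mean on the test targets.
baseline_mse = np.mean((y_mean - Y_test) ** 2)
print(baseline_mse)  # → 433.0
```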
This value of the MSE loss gives us a baseline to compare how a more complex model is doing. Now, if $\hat{Y}$ represents the vector containing our model's predictions when we use a linear regression model\begin{equation}\hat{Y} = w_0X + w_1\end{equation}we can write a loss function taking as arguments the ...
def loss_mse(X, Y, w0, w1):
    Y_hat = w0 * X + w1
    errors = (Y_hat - Y)**2
    return tf.reduce_mean(errors)
Gradient Function
To use gradient descent we need to take the partial derivatives of the loss function with respect to each of the weights. We could compute the derivatives by hand, but with TensorFlow's automatic differentiation capabilities we don't have to! During gradient descent we think of the loss as a function ...
# TODO 1
def compute_gradients(X, Y, w0, w1):
    with tf.GradientTape() as tape:
        loss = loss_mse(X, Y, w0, w1)
    return tape.gradient(loss, [w0, w1])

w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
dw0, dw1 = compute_gradients(X, Y, w0, w1)
print("dw0:", dw0.numpy())
print("dw1:", dw1.numpy())
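It's worth checking the tape against the hand-derived gradients, $\partial MSE/\partial w_0 = \frac{2}{m}\sum_i X_i(\hat{Y}_i - Y_i)$ and $\partial MSE/\partial w_1 = \frac{2}{m}\sum_i (\hat{Y}_i - Y_i)$; a NumPy-only version for the same toy data:

```python
import numpy as np

X = np.arange(10, dtype=float)
Y = 2 * X + 10
w0, w1 = 0.0, 0.0

residual = w0 * X + w1 - Y          # Y_hat - Y
dw0 = 2 * np.mean(residual * X)    # ∂MSE/∂w0
dw1 = 2 * np.mean(residual)        # ∂MSE/∂w1
print(dw0, dw1)  # → -204.0 -38.0
```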
Training Loop
Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.
STEPS = 1000
LEARNING_RATE = .02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"

w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)

for step in range(0, STEPS + 1):
    dw0, dw1 = compute_gradients(X, Y, w0, w1)
    w0.assign_sub(dw0 * LEARNING_RATE)
    w1.assign_sub(dw1 * LEARNING_RATE)
    if step % 100 == 0...
Now let's compare the test loss for this linear regression to the test loss of the baseline model that always outputs the mean of the training set:
loss = loss_mse(X_test, Y_test, w0, w1)
loss.numpy()
This is indeed much better!
Bonus
Try modelling a non-linear function such as: $y = xe^{-x^2}$
X = tf.constant(np.linspace(0, 2, 1000), dtype=tf.float32)
Y = X * tf.exp(-X**2)

%matplotlib inline
plt.plot(X, Y)

def make_features(X):
    f1 = tf.ones_like(X)  # Bias.
    f2 = X
    f3 = tf.square(X)
    f4 = tf.sqrt(X)
    f5 = tf.exp(X)
    return tf.stack([f1, f2, f3, f4, f5], axis=1)

def predict(X, W):
    ret...
Creating a Sentiment Analysis Web App Using PyTorch and SageMaker
_Deep Learning Nanodegree Program | Deployment_
---
Now that we have a basic understanding of how SageMaker works, we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to ente...
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
--2019-10-01 04:56:57-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Sa...
MIT
Project/SageMaker Project.ipynb
radleap/sagemaker-deployment
Step 2: Preparing and Processing the data
As in the XGBoost notebook, we will be doing some initial data processing; the first few steps are the same as in that example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a...
import os
import glob

def read_imdb_data(data_dir='../data/aclImdb'):
    data = {}
    labels = {}

    for data_type in ['train', 'test']:
        data[data_type] = {}
        labels[data_type] = {}

        for sentiment in ['pos', 'neg']:
            data[data_type][sentiment] = []
            labels[d...
IMDB reviews: train = 12500 pos / 12500 neg, test = 12500 pos / 12500 neg
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.
from sklearn.utils import shuffle

def prepare_imdb_data(data, labels):
    """Prepare training and test sets from IMDb movie reviews."""
    # Combine positive and negative reviews and labels
    data_train = data['train']['pos'] + data['train']['neg']
    data_test = data['test']['pos'] + data['test']['neg']
    ...
IMDb reviews (combined): train = 25000, test = 25000
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loade...
print(train_X[100])
print(train_y[100])
FORBIDDEN PLANET is one of the best examples of Hollywood SF films. Its influence was felt for more than a decade. However, certain elements relating to how this wide-screen entertainment was aimed at a mid-fifties audience that is now gone have dated it quite a bit, and the film's sometimes sluggish pacing doesn't hel...
The first step in processing the reviews is to remove any HTML tags that appear. In addition, we wish to tokenize our input, so that words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup

def review_to_words(review):
    nltk.download("stopwords", quiet=True)
    stemmer = PorterStemmer()

    text = BeautifulSoup(review, "html.parser").get_text()  # Remove HTML tags
    text = re.su...
The `review_to_words` method defined above uses `BeautifulSoup` to remove any HTML tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.
# TODO: Apply review_to_words to a review (train_X[100] or any other review)
review_to_words(train_X[100])
**Question:** Above we mentioned that the `review_to_words` method removes HTML formatting and allows us to tokenize the words found in a review, for example converting *entertained* and *entertaining* into *entertain* so that they are treated as the same word. What else, if anything, does this method do t...
import pickle

cache_dir = os.path.join("../cache", "sentiment_analysis")  # where to store cache files
os.makedirs(cache_dir, exist_ok=True)  # ensure cache directory exists

def preprocess_data(data_train, data_test, labels_train, labels_test,
                    cache_dir=cache_dir, cache_file="preprocessed_data.pkl...
['perspect', 'good', 'thing', 'sinc', 'releas', 'star', 'war', 'episod', 'phantom', 'menac', 'claim', 'counter', 'claim', 'episod', 'ii', 'iii', 'eventu', 'taken', 'spotlight', 'origin', 'star', 'war', 'film', 'make', 'part', 'cohes', 'whole', 'rather', 'segreg', 'older', 'new', 'film', 'separ', 'trilog', 'new', 'film'...
Transform the data
In the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will build a very similar feature representation. To start, we will represent each word as an integer. Of cou...
from collections import Counter

base_dict = {}

def tokenize(corpus):
    for review in corpus:
        for word in review:
            if word in base_dict:
                base_dict[word] += 1
            else:
                base_dict[word] = 1
    return(base_dict)

tokenize(train_X)

from sklearn.feature_extractio...
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it make sense that these words appear frequently in the training set?
**Answer:**
# TODO: Use this space to determine the five most frequently appearing words in the training set.
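One way to approach this (shown here on toy data; in the actual notebook the counts come from the frequency dictionary built over `train_X` above):

```python
from collections import Counter

# Toy stand-in for the training-set word counts.
word_count = Counter(["movi", "film", "movi", "one", "movi", "film", "time", "one"])
print(word_count.most_common(5))  # (word, count) pairs, most frequent first
```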
Save `word_dict`
Later on, when we construct an endpoint that processes a submitted review, we will need to make use of the `word_dict` we have created. As such, we will save it to a file now for future use.
data_dir = '../data/pytorch'  # The folder we will use for storing data
if not os.path.exists(data_dir):  # Make sure that the folder exists
    os.makedirs(data_dir)

with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
    pickle.dump(word_dict, f)
Transform the reviews
Now that we have our word dictionary, which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`.
def convert_and_pad(word_dict, sentence, pad=500):
    NOWORD = 0  # We will use 0 to represent the 'no word' category
    INFREQ = 1  # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
    working_sentence = [NOWORD] * pad

    for word_index, word in enumerate(sentence[:pa...
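The loop body above is cut off; a completed sketch of `convert_and_pad` consistent with its comments:

```python
def convert_and_pad(word_dict, sentence, pad=500):
    """Map words to indices, zero-padding/truncating to a fixed length."""
    NOWORD = 0   # 'no word' / padding
    INFREQ = 1   # words not appearing in word_dict
    working_sentence = [NOWORD] * pad
    for word_index, word in enumerate(sentence[:pad]):
        working_sentence[word_index] = word_dict.get(word, INFREQ)
    return working_sentence, min(len(sentence), pad)

encoded, length = convert_and_pad({"good": 2, "film": 3}, ["good", "film", "zzz"], pad=5)
print(encoded, length)  # → [2, 3, 1, 0, 0] 3
```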
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processed. Does this look reasonable? What is the length of a review in the training set?
# Use this cell to examine one of the processed reviews to make sure everything is working as intended.
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Might this be a problem? Why or why not?
**Answer:**
Step 3: Upload the data to S3
As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our ...
import pandas as pd

pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
    .to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
Uploading the training data
Next, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.
import sagemaker

sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()

input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also ...
!pygmentize train/model.py
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training ...
import torch
import torch.utils.data

# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)

# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torc...
(TODO) Writing the training method
Next we need to write the training code itself. This should be very similar to training methods you have written before for PyTorch models. We will leave any difficult aspects, such as model saving/loading and parameter loading, until a little later.
def train(model, train_loader, epochs, optimizer, loss_fn, device):
    for epoch in range(1, epochs + 1):
        model.train()
        total_loss = 0
        for batch in train_loader:
            batch_X, batch_y = batch
            batch_X = batch_X.to(device)
            batch_y = batch_y.to(...
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early ...
import torch.optim as optim
from train.model import LSTMClassifier

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()

train(model, train_sample_dl, 5, optimizer, loss_fn, device)
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one)...
from sagemaker.pytorch import PyTorch estimator = PyTorch(entry_point="train.py", source_dir="train", role=role, framework_version='0.4.0', train_instance_count=1, train_instance_type='ml.p2.xlarge', ...
_____no_output_____
MIT
Project/SageMaker Project.ipynb
radleap/sagemaker-deployment
Step 5: Testing the model

As mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly.

Step 6: Deploy the model for testing

Now that we have ...
# TODO: Deploy the trained model
_____no_output_____
MIT
Project/SageMaker Project.ipynb
radleap/sagemaker-deployment
Step 7 - Use the model for testing

Once deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is.
test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1) # We split the data into chunks and send each chunk seperately, accumulating the results. def predict(data, rows=512): split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1)) predictions = np.array([]) for array i...
_____no_output_____
MIT
Project/SageMaker Project.ipynb
radleap/sagemaker-deployment
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis?

**Answer:** (TODO)

More testing

We now have a trained model which has been deployed and to which we can send processed r...
test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'
_____no_output_____
MIT
Project/SageMaker Project.ipynb
radleap/sagemaker-deployment
The question we now need to answer is: how do we send this review to our model?

Recall that in the first section of this notebook we did a bunch of data processing on the IMDb dataset. In particular, we did two specific things to the provided reviews:
- Removed any html tags and stemmed the input
- Encoded the review as a se...
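As a hedged sketch of what this conversion could look like (these `review_to_words` and `convert_and_pad` helpers are simplified stand-ins written for illustration; the project's real helpers also stem the words and use the vocabulary built during training):

```python
import re

def review_to_words(review):
    """Simplified stand-in: strip html tags, lowercase, and split.
    (The project's real helper also removes stopwords and stems.)"""
    no_html = re.sub(r'<[^>]+>', ' ', review)
    return re.sub(r'[^a-z0-9]+', ' ', no_html.lower()).split()

def convert_and_pad(word_dict, words, pad=500):
    """Map words to integer ids (1 = infrequent/unknown, 0 = padding)
    and fix the encoded review at a constant length of `pad`."""
    encoded = [word_dict.get(word, 1) for word in words][:pad]
    return encoded + [0] * (pad - len(encoded)), min(len(words), pad)

word_dict = {'film': 2, 'love': 3, 'charm': 4}  # toy vocabulary for illustration
words = review_to_words('The simplest <b>pleasures</b> in life are the best')
test_data, test_len = convert_and_pad(word_dict, words)
print(test_len, test_data[:10])
```

The model then consumes the review length alongside the fixed-length id array, mirroring the layout produced during training.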
# TODO: Convert test_review into a form usable by the model and save the results in test_data test_data = None
_____no_output_____
MIT
Project/SageMaker Project.ipynb
radleap/sagemaker-deployment
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.
predictor.predict(test_data)
_____no_output_____
MIT
Project/SageMaker Project.ipynb
radleap/sagemaker-deployment
Since the return value of our model is close to `1`, we can be fairly confident that the review we submitted is positive.

Delete the endpoint

Of course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delet...
estimator.delete_endpoint()
_____no_output_____
MIT
Project/SageMaker Project.ipynb
radleap/sagemaker-deployment
Step 6 (again) - Deploy the model for the web app

Now that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.

As we saw above, by default the estimator which we created, w...
!pygmentize serve/predict.py
_____no_output_____
MIT
Project/SageMaker Project.ipynb
radleap/sagemaker-deployment
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code, and the `input_fn` and `output_fn` methods are very simple; your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.

**TODO**: Complete th...
from sagemaker.predictor import RealTimePredictor from sagemaker.pytorch import PyTorchModel class StringPredictor(RealTimePredictor): def __init__(self, endpoint_name, sagemaker_session): super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain') model = PyTorchMod...
_____no_output_____
MIT
Project/SageMaker Project.ipynb
radleap/sagemaker-deployment
Testing the model

Now that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews, sending them to the endpoint, and collecting the results. The reason for only sending some of the data is t...
import glob def test_reviews(data_dir='../data/aclImdb', stop=250): results = [] ground = [] # We make sure to test both positive and negative reviews for sentiment in ['pos', 'neg']: path = os.path.join(data_dir, 'test', sentiment, '*.txt') files = glob.glob(path...
_____no_output_____
MIT
Project/SageMaker Project.ipynb
radleap/sagemaker-deployment
As an additional test, we can try sending the `test_review` that we looked at earlier.
predictor.predict(test_review)
_____no_output_____
MIT
Project/SageMaker Project.ipynb
radleap/sagemaker-deployment
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back.

Step 7 (again): Use the model for th...
predictor.endpoint
_____no_output_____
MIT
Project/SageMaker Project.ipynb
radleap/sagemaker-deployment
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function.

Setting up API Gateway

Now that our Lambda function is set up, it is time to create a new API using API Gateway that wi...
predictor.delete_endpoint()
_____no_output_____
MIT
Project/SageMaker Project.ipynb
radleap/sagemaker-deployment
Precision, recall and f-measure

George Tzanetakis, University of Victoria

In this notebook we go over the terminology of retrieval metrics (precision, recall and f-measure) and how they are used in a variety of Music Information Retrieval (MIR) tasks. We first examine their classic original use as measures for the eff...
import numpy as np # suppose that our query is beatles and we are interested in retrieving documents # about the corresponding insect species query = 'beatles' # Let's say a set of 10 documents is returned - their exact representation is not important # and could be a bag-of-words representation, text, a list of ...
_____no_output_____
CC0-1.0
course2/session5/kadenze_mir_c2_s5_5_precion_recall_fmeasure.ipynb
Achilleasein/mir_program_kadenze
Precision is defined as the number of relevant retrieved documents divided by the number of returned documents. So for our particular example the precision is 0.70 or 70%.
import numpy as np def precision(retrieved): retrieved_relevant = np.count_nonzero(retrieved == 1) return (retrieved_relevant / len(retrieved)) print("Precision = %.2f\n" % (precision(retrieved)))
Precision = 0.70
CC0-1.0
course2/session5/kadenze_mir_c2_s5_5_precion_recall_fmeasure.ipynb
Achilleasein/mir_program_kadenze
Notice that we can improve the precision in this case by returning fewer items. For example, the precision when returning only the first two items is 1 or 100%.
less_retrieved = np.array([1,1]) print("Precision = %.2f\n" % (precision(less_retrieved)))
Precision = 1.00
CC0-1.0
course2/session5/kadenze_mir_c2_s5_5_precion_recall_fmeasure.ipynb
Achilleasein/mir_program_kadenze
Now suppose that in the set of documents we are considering there are 15 documents about beatles (the insect species). Our set of retrieved results only contains 7 of them and therefore the recall in this case is 7/15=0.47
def recall(retrieved, num_relevant): retrieved_relevant = np.count_nonzero(retrieved == 1) return (retrieved_relevant / num_relevant) retrieved = np.array([1,1,0,1,1,1,1,0,1,0]) print("Recall = %.2f\n" % (recall(retrieved,15)))
Recall = 0.47
CC0-1.0
course2/session5/kadenze_mir_c2_s5_5_precion_recall_fmeasure.ipynb
Achilleasein/mir_program_kadenze
We can trivially increase recall by returning more items, with the extreme case of returning all documents as relevant, at the expense of diminishing precision. Alternatively we can return fewer and fewer documents, increasing precision at the expense of recall. An effective retrieval system should achieve both high pr...
def f1_score(precision, recall): return 2 * ((precision * recall) / (precision+recall)) precision = 0.7 recall = 0.47 f1 = f1_score(precision,recall) print("F1 = %.2f\n" % f1)
F1 = 0.56
CC0-1.0
course2/session5/kadenze_mir_c2_s5_5_precion_recall_fmeasure.ipynb
Achilleasein/mir_program_kadenze
Binary Classification

For a binary classification problem we can consider the set of predictions as the retrieved documents and the ground truth as the annotations of relevance. For example, suppose that we have a music/speech classification system that predicts 100 instances and let's say that 50 of them are labeled as...
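Before the genre experiment, here is a minimal sketch (with made-up binary labels) of how precision, recall and f-measure fall out of the confusion counts when we treat the predicted positives as the 'retrieved' set:

```python
import numpy as np

# Toy ground truth and predictions for a binary task (1 = music, 0 = speech).
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1, 0, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))  # retrieved and relevant
fp = np.sum((y_pred == 1) & (y_true == 0))  # retrieved but not relevant
fn = np.sum((y_pred == 0) & (y_true == 1))  # relevant but not retrieved

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)
```

These are exactly the per-class numbers that `sklearn.metrics.classification_report` tabulates for us below.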
import glob import librosa import numpy as np fnames = glob.glob("/Users/georgetzanetakis/data/sound/genres/*/*.wav") genres = ['classical', 'country', 'disco', 'hiphop', 'jazz', 'rock', 'blues', 'reggae', 'pop', 'metal'] # allocate matrix for audio features and target audio_features = np.zeros((len(fnames), 40))...
Processing 0 /Users/georgetzanetakis/data/sound/genres/pop/pop.00027.wav Processing 1 /Users/georgetzanetakis/data/sound/genres/pop/pop.00033.wav Processing 2 /Users/georgetzanetakis/data/sound/genres/pop/pop.00032.wav Processing 3 /Users/georgetzanetakis/data/sound/genres/pop/pop.00026.wav Processing 4 /Users/georgetz...
CC0-1.0
course2/session5/kadenze_mir_c2_s5_5_precion_recall_fmeasure.ipynb
Achilleasein/mir_program_kadenze
We can view the confusion matrix and classification report with micro and macro average retrieval metrics. You can observe from the confusion matrix that classical is the easiest genre to classify, with 89/100 instances classified correctly. This is reflected in the corresponding f1-score of 0.89. Because the classes are...
from sklearn.model_selection import cross_val_predict from sklearn import svm, metrics clf = svm.SVC(gamma='scale', kernel='linear') # perform 10-fold cross-validation to calculate accuracy and confusion matrix predicted = cross_val_predict(clf, audio_features, target, cv=10) print("Confusion matrix:\n%s" % metrics...
Confusion matrix: [[95 1 1 0 0 1 0 2 0 0] [ 7 59 3 0 0 9 11 6 4 1] [ 2 8 52 9 0 12 3 7 6 1] [ 0 2 9 62 0 3 3 16 1 4] [83 5 2 0 0 5 4 1 0 0] [ 7 6 17 4 0 36 17 4 3 6] [ 6 15 10 1 0 8 48 5 0 7] [ 2 7 10 23 0 4 7 40 6 1] [ 2 4 10 5 0 2 0 4 73 0] [ 0 ...
CC0-1.0
course2/session5/kadenze_mir_c2_s5_5_precion_recall_fmeasure.ipynb
Achilleasein/mir_program_kadenze
Notice that because of the unbalanced support of the different classes after merging classical and jazz the values of the micro-average f1-score and macro-average f1-score are different. MIR tasks with time markers Finally let's look at a last usage of retrieval metrics (precision, recall and f-measure for tasks wher...
# Let's consider a simple toy example for calculating the retrieval metrics based on a time markers # To make things simple the tolerance is +/-1 i.e an estimated time marker is consider correct if it is # within +/-1 of the reference ground truth. import numpy as np ref_times = np.array([0, 4, 8, 12, 16]) est_tim...
_____no_output_____
CC0-1.0
course2/session5/kadenze_mir_c2_s5_5_precion_recall_fmeasure.ipynb
Achilleasein/mir_program_kadenze
Who is this course for?

We have designed this course for a technical audience with basic coding and statistical skill sets who are not already familiar with time series analysis. Readers who struggle to code in python are advised to take an introductory python coding course before going on. Readers without some backgro...
import pandas as pd df_normal = pd.DataFrame.from_dict({'Name':['Frank','Herbert','Lucy'], 'Social Security Number':[123456789,132985867,918276037]}) print(df_normal)
Name Social Security Number 0 Frank 123456789 1 Herbert 132985867 2 Lucy 918276037
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
Website Traffic

All websites need to be hosted on physical servers somewhere. It costs money to rent these servers, and the more servers, the more money! If we can forecast when website traffic will be high, requiring more servers, and low, requiring fewer servers, then we could only rent the number of servers we need ...
df_timeseries = pd.DataFrame.from_dict({'Hour':[1,2,3,4,5,6,7,8,9,10,11,12], 'Web Visitors':[100,150,300,200,100,50,120,180,250,200,130,75]}) print(df_timeseries)
Hour Web Visitors 0 1 100 1 2 150 2 3 300 3 4 200 4 5 100 5 6 50 6 7 120 7 8 180 8 9 250 9 10 200 10 11 130 11 12 75
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
Author's Note

I'll sometimes include author's notes in my tutorials. These are **optional** sections that are interesting, but are either more advanced than some readers might care to dig into or are of an opinionated nature and so not canon.

Data with an inherent order of observations present a number of unique problem...
# Setting up program with necessary tools and visuals import pandas as pd import numpy as np import datetime import matplotlib.pyplot as plt import seaborn as sns sns.set_style("white") numdays = 30 date_list = [datetime.datetime.today() - datetime.timedelta(days=x) for x in range(0, numdays)] ticket_prices_list = nu...
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
It should be obvious that the level is $40.95. This is easy to identify as there is no trend. The series does not generally increase or decrease. We also see no patterns or cycles. This means that there is no seasonality. Finally, we see no noise (unexplained variability) in the data. In fact, there is no variability a...
numdays = 30 date_list = [datetime.datetime.today() - datetime.timedelta(days=x) for x in range(0, numdays)] ticket_prices_list = np.linspace(40.95,47.82,numdays)[::-1] plt.figure(figsize=(10,4)) plt.plot(date_list, ticket_prices_list) plt.title('Price of Carnival Tickets By Day') plt.ylabel('Carnival Ticket Price') p...
level = mean = 44.385 trend = slope = 0.236896551724
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
We first start with the level of the series, which is the series average: level = $44.39. We can then describe the general increase in the series over time as the slope of our line: trend = 0.24. We see no repeating patterns nor unexplained variability, so there is no seasonality or noise.

Round Two, FIGHT

Ok, let's put ...
series = pd.Series.from_csv('./AirPassengers.csv', header=0) plt.figure(figsize=(10,4)) series.plot() plt.ylabel('# Passengers') plt.xlabel('Date (Month)') plt.title('Volume of US Airline Passengers by Month') plt.show()
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
We can definitely see that there is a trend, seasonality, and some noise in this data based on the similarly shaped, repeated portions of the line! How can we get a clearer picture of each core component?

First, we need to make an assumption about how these four components combine to create the series. We typically make...
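As a small illustrative sketch (all numbers invented), the two standard assumptions differ in how the components are combined: in an additive model the seasonal swing stays the same size, while in a multiplicative model it scales with the trend, which is what the widening oscillations in the airline plot suggest:

```python
import numpy as np

t = np.arange(48, dtype=float)           # four "years" of monthly samples
trend = 100 + 2 * t                      # steadily rising level
season = np.sin(2 * np.pi * t / 12)      # 12-step seasonal cycle

additive = trend + 20 * season               # y = Trend + Seasonality (+ Noise)
multiplicative = trend * (1 + 0.2 * season)  # y = Trend * Seasonality (* Noise)

# The multiplicative seasonal swing grows as the trend rises,
# while the additive swing stays constant.
first_swing = np.ptp(multiplicative[:12] - trend[:12])
last_swing = np.ptp(multiplicative[-12:] - trend[-12:])
print(first_swing < last_swing)
```

Passing `model='multiplicative'` to `seasonal_decompose`, as we do next, tells statsmodels to assume the second form.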
from statsmodels.tsa.seasonal import seasonal_decompose result = seasonal_decompose(series, model='multiplicative') result.plot() plt.show()
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
COOL! Now we have a rough estimate of the trend, seasonality, and noise in our time series. What now? Typically we use this kind of decomposition for **time series analysis** (induction). For example, we could take a close look at the seasonality to try to identify some interesting insights.
plt.figure(figsize=(10,4)) plt.plot(result.seasonal[:24]) plt.show()
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
**Observations and Theories**

From this plot, it looks like there is a very clear seasonal trend in passenger volume throughout the year. Peak travel time is July/August (vacation season). It looks like passenger volume dips around the holidays (perhaps people want to be with their families rather than travel?). Assump...
trend = np.sin( np.linspace(-20*np.pi, 20*np.pi, 1000) ) * 20 + 100 plt.figure(figsize=(10, 4)) plt.plot(trend) plt.title('Example of a Stationary Trend') plt.show()
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
A non-stationary trend is anything else! See the example below.
trend = trend + np.linspace(0, 30, 1000) plt.figure(figsize=(10, 4)) plt.plot(trend) plt.title('Example of a Non-Stationary Trend') plt.show()
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
Non-stationary trends do not need to be linear. I've included a non-linear trend example to demonstrate this.
trend = trend + np.exp( np.linspace(0, 5, 1000) ) plt.figure(figsize=(10, 4)) plt.plot(trend) plt.title('Example of a Non-Stationary Trend') plt.show()
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
Many models assume that a series is stationary because they estimate a single set of parameters (such as the mean, variance, and autocorrelation structure) that is taken to hold across the entire series. We can apply transformations and filters on the series to make the series conditionally stationary. The most popular method for doing this is called *differencing*. You will often see series described as 'Difference Stationary' (stationary only after diff...
series = pd.read_csv('Shampoo.csv', header=0, index_col=0, squeeze=True) plt.figure(figsize=(10,4)) plt.ylabel('Shampoo Sales') plt.xlabel('Date (Month)') plt.title('Shampoo Sales Over Time') series.plot() plt.show()
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
We can rather simply write our own first order differencing function. Let's try that!
# create a differenced series def difference(dataset, interval=1): diff = list() for i in range(interval, len(dataset)): value = dataset[i] - dataset[i - interval] diff.append(value) return pd.Series(diff) X = series.values diff = difference(X) plt.figure(figsize=(10,4)) plt.ylabel('Shampo...
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
Cool, this now looks just like any old stationary trend! It is worth noting that `pandas` comes with a built-in method for differencing. I show an example of this below.
diff = series.diff() plt.figure(figsize=(10,4)) plt.ylabel('Shampoo Sales') plt.xlabel('Date (Month)') plt.title('Shampoo Sales Over Time') diff.plot() plt.show()
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
Now remember, for a series to be stationary, all of its attributes must be stationary, not just the trend. Good examples of this are heteroscedastic series. These are series where the amount of noise is not constant over time. A homoscedastic series has equal noise over time (aka the noise is stationary). An example o...
trend = 20.42*np.ones(3000) + np.random.normal(1,2,3000) plt.figure(figsize=(10, 4)) plt.plot(trend) plt.title('Example of a Homoscedastic Series') plt.show() trend = 20.42*np.ones(3000) + np.linspace(1,10,3000)*np.random.normal(1,2,3000) plt.figure(figsize=(10, 4)) plt.plot(trend) plt.title('Example of a Heteroscedas...
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
Changes in variance can be hard to deal with and often throw a wrench into our models. We see in many types of customer data, for instance, that variance is proportional to signal volume. This means that the variance in our signal will increase as the amplitude of the signal increases.We can see an example of this belo...
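One simple way to make growing variance explicit is a rolling standard deviation; a sketch on a synthetic heteroscedastic series (seeded so the result is reproducible):

```python
import numpy as np

rng = np.random.default_rng(0)
# Noise amplitude grows linearly with time, similar to the plotted examples.
signal = np.linspace(1, 10, 3000) * rng.normal(1, 2, 3000)

# Standard deviation over consecutive non-overlapping windows.
window = 300
rolling_std = np.array([signal[i:i + window].std()
                        for i in range(0, len(signal) - window + 1, window)])
print(rolling_std.round(2))  # roughly increasing
```

A flat rolling standard deviation would suggest homoscedasticity; here it climbs steadily, flagging the heteroscedasticity at a glance.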
trend = np.sin( np.linspace(-20*np.pi, 20*np.pi, 1000) ) * 20 + 100 trend = trend + trend*np.random.normal(.5,3,1000) plt.figure(figsize=(10, 4)) plt.plot(trend) plt.title('Example of a Heteroscedastic Series') plt.show()
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
Another nasty feature we might run across are discontinuities. How we handle a discontinuity depends entirely on what caused the discontinuity. Was there a fundamental change in the series? Is the discontinuity caused by missing data? Do we expect more discontinuities in the future? An example of a discontinuity is sho...
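One simple heuristic for flagging candidate discontinuities is to threshold the first differences of the series, since a genuine jump is far larger than the typical step between neighbouring samples. A sketch on the same kind of synthetic series as below:

```python
import numpy as np

trend = np.sin(np.linspace(-20 * np.pi, 20 * np.pi, 1000)) * 20 + 100
trend = trend + np.concatenate([np.zeros(500), 20 * np.ones(500)])

# Flag steps that are much larger than the typical step size.
steps = np.diff(trend)
threshold = 5 * np.std(steps)
jumps = np.where(np.abs(steps) > threshold)[0]
print(jumps)  # the injected 20-unit shift sits between samples 499 and 500
```

This only catches abrupt level shifts; whether to repair, model, or split the series at such a point still depends on what caused it.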
trend = np.sin( np.linspace(-20*np.pi, 20*np.pi, 1000) ) * 20 + 100 trend = trend + np.concatenate([np.zeros(500),20*np.ones(500)]) plt.figure(figsize=(10, 4)) plt.plot(trend) plt.title('Example of a Discontinuity') plt.show()
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
Finally, we want to know if the seasonality is 'local'. Basically, is there only one seasonal trend? Perhaps the series has both a yearly and a monthly period! An example of a series with multiple cycles/seasons is shown below.
trend = np.sin( np.linspace(-20*np.pi, 20*np.pi, 1000) ) * 20 + 100 trend = trend + np.sin( .1*np.linspace(-20*np.pi, 20*np.pi, 1000) ) * 20 + 100 plt.figure(figsize=(10, 4)) plt.plot(trend) plt.title('Example of Multiple Cycles') plt.show()
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
How can we try to understand the cyclical patterns in our data? Well, there are a couple different ways to do this...

Intro to Signal Processing

We will start with the **Fourier transform**. Fourier transforms are amazing (as is their more general form, the Laplace transform). When I did physics research in a past ...
from scipy.fftpack import fft # Number of sample points N = 600 # sample spacing T = 1.0 / 800.0 x = np.linspace(0.0, N*T, N) y = np.sin(50.0 * 2.0*np.pi*x) + 0.5*np.sin(80.0 * 2.0*np.pi*x) yf = fft(y) xf = np.linspace(0.0, 1.0/(2.0*T), N//2) plt.figure(figsize=(10, 4)) plt.title('Original Signal in Time Domain') plt....
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
The periodogram presents the two sine functions clearly. This is the [FFT of the output of the autocorrelation function](https://www.mathworks.com/matlabcentral/answers/56001-difference-b-w-periodogram-and-square-of-ft-of-signal-method-to-calculate-the-psd). We will address autocorrelation in a bit. For now, just think...
from statsmodels.tsa.stattools import acf series = pd.Series.from_csv('./AirPassengers.csv', header=0) # plot plt.figure(figsize=(20,5)) series.plot() plt.ylabel("Passenger Volums") plt.xlabel("Time (Months)") plt.title("Airline Passenger Volumes", fontsize=20) plt.show() # ACF with lags 1,2,3,4 acf_7 = acf(series, ...
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
Let's try calculating the autocorrelation within the airline dataset over a wide range of lags. We can plot autocorrelation by lag to make the information more easily digestible. We call this an autocorrelation plot. This plot gives us information about the relationship between observations in the data set and observat...
from statsmodels.graphics.tsaplots import plot_acf # ACF plot fig, ax = plt.subplots(figsize=(20,5)) plt.xlabel('lag') plt.ylabel('correlation coefficient') _ = plot_acf(series, lags=100, ax=ax) plt.show()
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
Ok, so how do we read this? Each bar represents the magnitude of the correlation of points separated by the lag on the x axis. We notice that points at zero lag are exactly correlated...because we are comparing a point to itself! We then notice that the auto-correlation of points at different lags takes on a cyclical p...
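To make that reading concrete, here is a self-contained sketch (using a synthetic 12-period series as a stand-in for the airline data, and a plain `np.corrcoef`-based autocorrelation rather than `statsmodels`) of how the lag with the strongest autocorrelation recovers the seasonal period. In practice you would detrend first, since a strong trend inflates the correlation at every lag:

```python
import numpy as np

# Detrended stand-in for the airline series: a pure 12-step season.
t = np.arange(144)
series = 20 * np.sin(2 * np.pi * t / 12)

def autocorr(x, lag):
    """Correlation coefficient between the series and itself shifted by lag."""
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

# Search lags around the expected monthly range.
lags = np.arange(2, 19)
acf_vals = np.array([autocorr(series, k) for k in lags])
season = lags[np.argmax(acf_vals)]
print(season)  # the lag with the strongest autocorrelation
```

The peak at lag 12 mirrors what the ACF plot shows visually: observations a full season apart move together.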
from statsmodels.graphics.tsaplots import plot_pacf # PACF plot fig, ax = plt.subplots(figsize=(12,5)) plt.xlabel('lag') plt.ylabel('partial correlation coefficient') _ = plot_pacf(series, lags=100, ax=ax) plt.show()
_____no_output_____
MIT
time_series_course/Time_Series_Modeling_Ch1_Part_1.ipynb
looyclark/blog_material
Query prototyping

This notebook computes interesting metrics from the raw wheel rotation data stored in S3. It's meant as a prototype for queries that'll run to produce summary data for use by a web frontend later on.

An earlier version of the notebook worked with the raw rotation CSV files locally using pandas. Ultimat...
import datetime import getpass import math import time import boto3 # import matplotlib.pyplot as plt # import pandas as pd # import pytz import requests %config InlineBackend.print_figure_kwargs={'facecolor' : "w"}
_____no_output_____
MIT
notebooks/query_prototyping.ipynb
parente/honey.data
Constants to convert [wheel rotations](https://www.amazon.com/gp/product/B019RH7PPE/ref=ppx_yo_dt_b_asin_title_o04_s00?ie=UTF8&psc=1) into distances.
wheel_diameter = 8.5 # inches, not quite the 9" advertised, I measured wheel_circumference = math.pi * wheel_diameter / 12 / 5280 # miles athena = boto3.client('athena') s3 = boto3.client('s3')
_____no_output_____
MIT
notebooks/query_prototyping.ipynb
parente/honey.data
Utils

I'll execute Athena queries using boto3. I'm not using `pyathena` to keep demands on the Raspberry Pi light.
def q(query, max_checks=30): """Executes an Athena query, waits for success or failure, and returns the first page of the query results. Waits up to max_checks * 10 seconds for the query to complete before raising. """ resp = athena.start_query_execution( QueryString=query, Quer...
_____no_output_____
MIT
notebooks/query_prototyping.ipynb
parente/honey.data
Interesting metrics
update_partitions()
_____no_output_____
MIT
notebooks/query_prototyping.ipynb
parente/honey.data
How far has Honey run since we started tracking?

This can serve as input to the geopoint API naming a city she could have reached if she traveled this far in straight-line distance.
qid, resp = q(''' select sum(rotations) total from incoming_rotations ''')
_____no_output_____
MIT
notebooks/query_prototyping.ipynb
parente/honey.data
The web client will end up using published CSVs instead of results fetched using the API. Therefore, I'm not investing in data type parsing in this notebook.
total_miles = int(resp['ResultSet']['Rows'][1]['Data'][0]['VarCharValue']) * wheel_circumference total_miles
_____no_output_____
MIT
notebooks/query_prototyping.ipynb
parente/honey.data