The numpy.random module supplements Python's built-in random functions by efficiently generating whole arrays of sample values from many kinds of probability distributions.
Example: build a 4x4 array of samples from the standard normal distribution:
```python
import numpy as np

samples = np.random.normal(size=(4, 4))
samples
```
*Source: notebooks/Random Numbers in NumPy Campus.ipynb, from rcrehuet/Python_for_Scientists_2017 (gpl-3.0)*
What is the advantage? Python's built-in random module only samples one value at a time, which is significantly less efficient.
The following block builds a list of 10$^7$ normally distributed values:
```python
import random

N = 10000000
%timeit samples = [random.normalvariate(0, 1) for i in range(N)]
```
Write the equivalent code using the np.random.normal() function and time it! Keep in mind that the NumPy function is vectorized!
See the Numpy documentation site for detailed info on the numpy.random module
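As a sketch of what the vectorized equivalent might look like (time it with `%timeit` in a notebook; shown here as plain code):

```python
import numpy as np

N = 10000000
# one vectorized call replaces the 10**7-iteration Python loop
samples = np.random.normal(0, 1, size=N)
print(samples.shape)  # (10000000,)
```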
Random Walks
Using standard Python builtin functions, try to write a piece of code corresponding to a 1D Random walker initially located at 0 and taking steps of 1 or -1 with equal probability.
Hint: use a list to keep track of the path of your random walker and have a look at the random.randint() function to generate the steps.
If it's too hard to start from scratch, you may want to peek at my solution.
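One possible solution using only built-ins (the step count of 1000 is an arbitrary choice for illustration):

```python
import random

steps = 1000
position = 0
path = [position]  # keep track of the walker's trajectory
for _ in range(steps):
    # random.randint(0, 1) gives 0 or 1 with equal probability
    position += 1 if random.randint(0, 1) else -1
    path.append(position)
```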
Use matplotlib to plot the path of your random walker
```python
import matplotlib.pyplot as plt
%matplotlib inline
#plt.plot(INSERT THE NAME OF THE VARIABLE CONTAINING THE PATH)
```
Now think of a possible alternative code using NumPy. Keep in mind that:
NumPy offers arrays to store the path
numpy.random offers a vectorized version of random generating functions
Again, here is my solution.
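One possible NumPy version, sketched under the same assumptions as the built-in walker above:

```python
import numpy as np

steps = 1000
draws = np.random.randint(0, 2, size=steps)    # vectorized: 0 or 1 per step
path = np.cumsum(np.where(draws == 1, 1, -1))  # running position of the walker
```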
Let's have a look at it:
```python
#plt.plot(INSERT THE NAME OF THE VARIABLE CONTAINING THE PATH)
```
To test your feature derivative function, run the following:
```python
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([0., 0.])  # this makes all the predictions 0
test_predictions = predict_output(example_features, my_weights)
# just like SFrames, two numpy arrays can be elementwise subtracted with '-':
errors = test_predictions - example_output  # prediction errors; here just -example_output
feature = example_features[:, 0]  # derivative w.r.t. 'constant'; the ":" indicates "all rows"
derivative = feature_derivative(errors, feature)
print(derivative)
print(-np.sum(example_output) * 2)  # should be the same as derivative
print(example_output)
print(errors)
print(feature)
```
*Source: machine_learning/2_regression/assignment/week2/week-2-multiple-regression-assignment-2-exercise.ipynb, from tuanavu/coursera-university-of-washington (mit)*
Gradient Descent
Now we will write a function that performs gradient descent. The basic premise is simple: given a starting point, we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase; since we are trying to minimize a cost function, the negative gradient is the direction of decrease.
The amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum, which we define by requiring the magnitude (length) of the gradient vector to be smaller than a fixed 'tolerance'.
With this in mind, complete the gradient descent function below using your derivative function above. For each step of gradient descent we update the weight of each feature before computing the stopping criterion.
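For reference, a minimal sketch of what `feature_derivative` might look like, assuming it computes the RSS derivative `2 * dot(errors, feature)` as the test cell above implies:

```python
import numpy as np

def feature_derivative(errors, feature):
    # derivative of RSS with respect to one weight: 2 * <errors, feature>
    return 2 * np.dot(errors, feature)

print(feature_derivative(np.array([1., 2.]), np.array([3., 4.])))  # 2*(1*3 + 2*4) = 22.0
```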
```python
from math import sqrt  # recall that the magnitude/length of a vector [g[0], g[1], g[2]] is sqrt(g[0]^2 + g[1]^2 + g[2]^2)
```
Discussion
https://www.coursera.org/learn/ml-regression/module/MFwVC/discussions/nNP11JhqEeWy0Q7ABZMsnQ
```python
def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
    converged = False
    weights = np.array(initial_weights)  # make sure it's a numpy array
    while not converged:
        # compute the predictions based on feature_matrix and weights using your predict_output() function
        predictions = predict_output(feature_matrix, weights)
        # compute the errors as predictions - output
        errors = predictions - output
        gradient_sum_squares = 0  # initialize the gradient sum of squares
        # while we haven't reached the tolerance yet, update each feature's weight
        for i in range(len(weights)):  # loop over each weight
            # recall that feature_matrix[:, i] is the feature column associated with weights[i]
            # compute the derivative for weights[i]:
            derivative = feature_derivative(errors, feature_matrix[:, i])
            # add the squared value of the derivative to the gradient magnitude (for assessing convergence)
            gradient_sum_squares += derivative ** 2
            # subtract the step size times the derivative from the current weight
            weights[i] -= step_size * derivative
        # compute the square root of the gradient sum of squares to get the gradient magnitude:
        gradient_magnitude = sqrt(gradient_sum_squares)
        if gradient_magnitude < tolerance:
            converged = True
    return weights
```
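To see the update rule in action without the assignment's data, here is a self-contained toy run on `y = 2x` (the helper definitions and the step size are assumptions for this illustration, not the assignment's values):

```python
import numpy as np

def predict_output(feature_matrix, weights):
    return feature_matrix.dot(weights)

def feature_derivative(errors, feature):
    return 2 * np.dot(errors, feature)

# toy data: y = 2*x, with a constant column plus one feature
feature_matrix = np.array([[1., 1.], [1., 2.], [1., 3.]])
output = np.array([2., 4., 6.])
weights = np.array([0., 0.])
step_size = 7e-3
for _ in range(2000):
    errors = predict_output(feature_matrix, weights) - output
    for i in range(len(weights)):
        weights[i] -= step_size * feature_derivative(errors, feature_matrix[:, i])
# weights should approach [0, 2]
```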
|
Next run your gradient descent with the above parameters.
```python
test_weight = regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, tolerance)
print(test_weight)
```
|
Now compute your predictions using test_simple_feature_matrix and your weights from above.
```python
test_predictions = predict_output(test_simple_feature_matrix, test_weight)
print(test_predictions)
```
|
Now that you have the predictions on the test data, compute the RSS on the test data set. Save this value for comparison later. Recall that RSS is the sum of the squared errors (differences between prediction and output).
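A tiny worked example of the RSS computation on made-up numbers:

```python
import numpy as np

predictions = np.array([2.0, 3.0])
output = np.array([1.0, 5.0])
residuals = output - predictions
RSS = (residuals ** 2).sum()
print(RSS)  # (1-2)^2 + (5-3)^2 = 5.0
```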
```python
test_residuals = test_output - test_predictions
test_RSS = (test_residuals * test_residuals).sum()
print(test_RSS)
```
|
Use the above parameters to estimate the model weights. Record these values for your quiz.
```python
weight_2 = regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance)
print(weight_2)
```
|
Use your newly estimated weights and the predict_output function to compute the predictions on the TEST data. Don't forget to create a numpy array for these features from the test set first!
```python
(test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)
test_predictions_2 = predict_output(test_feature_matrix, weight_2)
print(test_predictions_2)
```
|
Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 2 (round to nearest dollar)?
```python
print(test_predictions_2[0])
```
|
What is the actual price for the 1st house in the test data set?
```python
print(test_data['price'][0])
```
|
Quiz Question: Which estimate was closer to the true price for the 1st house on the Test data set, model 1 or model 2?
Now use your predictions and the output to compute the RSS for model 2 on TEST data.
```python
test_residuals_2 = test_output - test_predictions_2
test_RSS_2 = (test_residuals_2 ** 2).sum()
print(test_RSS_2)
```
|
Useful Data and Methods for Dataset Manipulation
```python
# Column names for our data
header = ["color", "diameter", "label"]

def unique_values(rows, col):
    """Find the unique values for a column in a dataset."""
    return set([row[col] for row in rows])

def class_counts(rows):
    """Count the number of examples for each label in a dataset."""
    counts = {}  # a dictionary of label -> count
    for row in rows:
        # in our dataset format, the label is always the last column
        label = row[-1]
        if label not in counts:
            counts[label] = 0
        counts[label] += 1
    return counts

def is_numeric(value):
    """Check if the value is numeric."""
    return isinstance(value, int) or isinstance(value, float)
```
*Source: DecisionTree_Math_Fruits.ipynb, from imamol555/Machine-Learning (mit)*
|
Let's write a class for a question that can be asked to partition the data.
Each Question object holds a column_no and a col_value.
E.g. column_no = 0 denotes color, so col_value can be Green, Yellow or Red.
We also write a method that compares the feature value of an example with the feature value of the Question.
```python
class Question:
    def __init__(self, col, val):
        self.col = col
        self.val = val

    def match(self, example):
        # compare the feature value in an example to the
        # feature value in this question
        value = example[self.col]
        if is_numeric(value):
            return value >= self.val
        else:
            return value == self.val

    def __repr__(self):
        # method to print the question in a readable format
        condition = "=="
        if is_numeric(self.val):
            condition = ">="
        return "Is %s %s %s?" % (
            header[self.col], condition, str(self.val))
```
|
Question format:
```python
# create a new question with col = 1 and val = 3
q = Question(1, 3)
# print q
q
```
|
Define a function that partitions the dataset on a given question into true and false rows/examples.
```python
def partition(rows, question):
    """For each row in the dataset, check if it satisfies the question. If
    so, add it to 'true rows'; otherwise, add it to 'false rows'.
    """
    true_rows, false_rows = [], []
    for row in rows:
        if question.match(row):
            true_rows.append(row)
        else:
            false_rows.append(row)
    return true_rows, false_rows
```
|
Now calculate the Gini impurity for a node, given the input rows of the training dataset.
```python
def gini(rows):
    """Calculate the Gini impurity for a list of rows."""
    counts = class_counts(rows)
    impurity = 1
    for lbl in counts:
        prob_of_lbl = counts[lbl] / float(len(rows))
        impurity -= prob_of_lbl**2
    return impurity
```
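As a quick sanity check on the formula (the helpers are re-declared here so the snippet is self-contained; the one-column rows are made up for illustration): a pure node gives impurity 0, a 50/50 split gives 0.5.

```python
def class_counts(rows):
    counts = {}
    for row in rows:
        counts[row[-1]] = counts.get(row[-1], 0) + 1
    return counts

def gini(rows):
    impurity = 1.0
    counts = class_counts(rows)
    for lbl in counts:
        impurity -= (counts[lbl] / float(len(rows))) ** 2
    return impurity

print(gini([['Apple'], ['Apple']]))  # pure node -> 0.0
print(gini([['Apple'], ['Grape']]))  # 50/50 split -> 0.5
```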
|
Calculate the information gain for a question, given the uncertainty at the present node and the uncertainties at the left and right child nodes.
```python
def info_gain(left, right, current_uncertainty):
    # we need the weighted average of the impurities at both child nodes
    p = float(len(left)) / (len(left) + len(right))
    return current_uncertainty - p * gini(left) - (1 - p) * gini(right)
```
|
Which question should we ask?
```python
def find_best_split(rows):
    """Find the best question to ask by iterating over every feature / value
    and calculating the information gain."""
    best_gain = 0  # keep track of the best information gain
    best_question = None  # keep track of the feature / value that produced it
    current_uncertainty = gini(rows)
    n_features = len(rows[0]) - 1  # number of columns
    for col in range(n_features):  # for each feature
        values = set([row[col] for row in rows])  # unique values in the column
        for val in values:  # for each value
            question = Question(col, val)
            # try splitting the dataset
            true_rows, false_rows = partition(rows, question)
            # skip this split if it doesn't divide the dataset
            if len(true_rows) == 0 or len(false_rows) == 0:
                continue
            # calculate the information gain from this split
            gain = info_gain(true_rows, false_rows, current_uncertainty)
            # You actually can use '>' instead of '>=' here
            # but I wanted the tree to look a certain way for our
            # toy dataset.
            if gain >= best_gain:
                best_gain, best_question = gain, question
    return best_gain, best_question
```
|
Define nodes in tree
1. Decision Node - Node with Question to ask
```python
class Decision_Node:
    """
    A Decision Node asks a question.
    This holds a reference to the question, and to the two child nodes.
    """
    def __init__(self, question, true_branch, false_branch):
        self.question = question
        self.true_branch = true_branch
        self.false_branch = false_branch
```
|
2. Leaf node - Gives prediction
```python
class Leaf:
    """
    A Leaf node classifies data.
    This holds a dictionary of class (e.g., "Apple") -> number of times it
    appears in the rows from the training data that reach this leaf.
    """
    def __init__(self, rows):
        self.predictions = class_counts(rows)
```
|
Build a Tree
```python
def build_tree(rows):
    # Try partitioning the dataset on each unique attribute value,
    # calculate the information gain,
    # and return the question that produces the highest gain.
    gain, question = find_best_split(rows)
    # Base case: no further info gain.
    # Since we can ask no further questions,
    # we'll return a leaf.
    if gain == 0:
        return Leaf(rows)
    # If we reach here, we have found a useful feature / value
    # to partition on.
    true_rows, false_rows = partition(rows, question)
    # Recursively build the true branch.
    true_branch = build_tree(true_rows)
    # Recursively build the false branch.
    false_branch = build_tree(false_rows)
    # Return a Decision node.
    # This records the best feature / value to ask about at this point,
    # as well as the branches to follow depending on the answer.
    return Decision_Node(question, true_branch, false_branch)
```
|
Print the Tree
```python
def print_tree(node, spacing=""):
    # Base case: we've reached a leaf
    if isinstance(node, Leaf):
        print(spacing + "Predict", node.predictions)
        return
    # Print the question at this node
    print(spacing + str(node.question))
    # Call this function recursively on the true branch
    print(spacing + '--> True:')
    print_tree(node.true_branch, spacing + "  ")
    # Call this function recursively on the false branch
    print(spacing + '--> False:')
    print_tree(node.false_branch, spacing + "  ")
```
|
All done! Now it's time to build a model from the given training data.
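The notebook never shows `training_data`; a minimal training set matching the header (assumed here, mirroring the format of the test data shown later) might be:

```python
training_data = [
    ['Green', 3, 'Apple'],
    ['Yellow', 3, 'Apple'],
    ['Red', 1, 'Grape'],
    ['Red', 1, 'Grape'],
    ['Yellow', 3, 'Lemon'],
]
```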
```python
my_tree = build_tree(training_data)
print_tree(my_tree)
```
|
Test the model with test data
Write a function to classify the test data
```python
def classify(row, node):
    # Base case: we've reached a leaf
    if isinstance(node, Leaf):
        return node.predictions
    # Decide whether to follow the true branch or the false branch.
    # Compare the feature / value stored in the node
    # to the example we're considering.
    if node.question.match(row):
        return classify(row, node.true_branch)
    else:
        return classify(row, node.false_branch)
```
|
Print Prediction at Leaf Node
```python
def print_leaf(counts):
    """A nicer way to print the predictions at a leaf."""
    total = sum(counts.values()) * 1.0
    probs = {}
    for lbl in counts.keys():
        probs[lbl] = str(int(counts[lbl] / total * 100)) + "%"
    return probs
```
|
Check for example
```python
print_leaf(classify(training_data[0], my_tree))
```
|
Test Data
```python
testing_data = [
    ['Green', 3, 'Apple'],
    ['Yellow', 4, 'Apple'],
    ['Red', 2, 'Grape'],
    ['Red', 1, 'Grape'],
    ['Yellow', 3, 'Lemon'],
]
```
|
Evaluate
```python
for row in testing_data:
    print("Actual: %s. Predicted: %s" %
          (row[-1], print_leaf(classify(row, my_tree))))
```
|
CNTK Time series prediction with LSTM
This demo shows how to use CNTK to predict future values in a time series using a recurrent neural network (RNN). It is based on an LSTM tutorial that comes with the CNTK distribution.
RNNs are particularly well suited to learning sequence data.
For details on RNNs, see this excellent post.
Goal
We will download stock prices for a chosen symbol, then train a recurrent neural net to predict the closing price on the following day from the $N$ previous days' closing prices.
We will use Long Short-Term Memory (LSTM) units in the hidden layer. An LSTM network is well suited to learning from experience to classify, process or predict time series when there are time lags of unknown size between important events.
Organization
The example has the following sections:
- Download and prepare data
- LSTM network modeling
- Model training and evaluation
```python
# Standard packages
import math
from matplotlib import pyplot as plt
import numpy as np
import os
import pandas as pd
import time
# Helpers for reading stock prices
import pandas_datareader.data as pdr
import datetime as dt
# Images
from IPython.display import Image
# CNTK packages
import cntk as C
import cntk.axis
from cntk.layers import Input, Dense, Dropout, Recurrence
%matplotlib inline
```
*Source: LSTM_Timeseries_predict.ipynb, from jspoelstra/cntk-rnn (mit)*
|
Select the notebook runtime environment devices / settings
Set the device. If you have both CPU and GPU on your machine, you can optionally switch the devices.
```python
# If you have a GPU, uncomment the GPU line below
#C.device.set_default_device(C.device.gpu(0))
C.device.set_default_device(C.device.cpu())
```
|
Download and Prepare Data
Here we define helper methods to prepare the data.
download_data()
Queries Yahoo Finance for the daily close price of a given stock ticker symbol. Returns a pandas Series of scaled day-over-day price changes.
```python
def download_data(symbol='MSFT', start=dt.datetime(2017, 1, 1), end=dt.datetime(2017, 3, 1)):
    """
    Download daily close and volume for the specified stock symbol from Yahoo Finance.
    Returns a pandas Series of scaled day-over-day changes.
    """
    data = pdr.DataReader(symbol, 'yahoo', start, end)
    data.rename(inplace=True, columns={'Close': 'data'})
    rv = data['data'].diff()[1:] / 50.0
    return rv
```
|
As an alternative, we have code to read two CSV files downloaded from DataMarket. The files are
1. Mean daily temperature, Fisher River near Dallas, Jan 01, 1988 to Dec 31, 1991
2. Monthly milk production: pounds per cow. Jan 62 – Dec 75
```python
def read_data(which="milk"):
    """
    Read the time series from CSV.
    """
    if which == 'temp':
        data = pd.read_csv('data/mean-daily-temperature-fisher-ri.csv')
        name = 'Mean Daily Temperature - Fisher River'
        data.rename(inplace=True, columns={'temp': 'data'})
        rv = data['data'] / 100
    else:
        data = pd.read_csv('data/monthly-milk-production-pounds-p.csv')
        name = 'Monthly Milk Production'
        data.rename(inplace=True, columns={'milk': 'data'})
        rv = data['data'].diff()[1:] / 50.0
    f, a = plt.subplots(1, 1, figsize=(12, 5))
    a.plot(data['data'], label=name)
    a.legend();
    return rv
```
|
generate_RNN_data()
The RNN will be trained on sequences of length $N$ of single values (scalars), meaning that each training sample is an $N\times1$ matrix. CNTK requires us to shape our input data as an array with each element being an observation. So, for the inputs $X$, we need to create a tensor or 3-D array with dimensions $[M, N, 1]$ where $M$ is the number of training samples.
As output we want our network to predict the next value of the sequence, so our target $Y$ is an $[M, 1]$ array containing the next day's value of the stock price after the sequence presented in $X$.
We do this by sampling from the sequence of stock prices and creating numpy ndarrays.
```python
def generate_RNN_data(x, time_steps=10):
    """
    Generate sequences to feed to the RNN.
    x: DataFrame, daily close
    time_steps: int, number of days in the sequences used to train the RNN
    """
    rnn_x = []
    for i in range(len(x) - (time_steps + 1)):
        # each training sample is a sequence of length time_steps
        xi = x[i: i + time_steps].astype(np.float32).as_matrix()
        # reshape as a column vector, as the model expects 1 float per time point
        xi = xi.reshape(-1, 1)
        rnn_x.append(xi)
    rnn_x = np.array(rnn_x)
    # the target values are a single float per training sequence
    rnn_y = np.array(x[time_steps + 1:].astype(np.float32)).reshape(-1, 1)
    return split_data(rnn_x, 0.2, 0.2), split_data(rnn_y, 0.2, 0.2)
```
|
split_data()
This function will split the data into training, validation and test sets and return a list with those elements, each containing a ndarray as described above.
```python
def split_data(data, val_size=0.1, test_size=0.1):
    """
    Split an np.array into training, validation and test sets.
    """
    pos_test = int(len(data) * (1 - test_size))
    pos_val = int(len(data[:pos_test]) * (1 - val_size))
    train, val, test = data[:pos_val], data[pos_val:pos_test], data[pos_test:]
    return {"train": train, "val": val, "test": test}
```
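A quick check of how the split proportions play out (the 10-element list and 0.2/0.2 fractions are made up for illustration):

```python
def split_data(data, val_size=0.1, test_size=0.1):
    pos_test = int(len(data) * (1 - test_size))
    pos_val = int(len(data[:pos_test]) * (1 - val_size))
    return {"train": data[:pos_val], "val": data[pos_val:pos_test], "test": data[pos_test:]}

parts = split_data(list(range(10)), val_size=0.2, test_size=0.2)
print([len(parts[k]) for k in ("train", "val", "test")])  # [6, 2, 2]
```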
|
Execute
Download data, generate the RNN training and evaluation data and visualize
```python
symbol = 'MSFT'
start = dt.datetime(2010, 1, 1)
end = dt.datetime(2017, 3, 1)
window = 30
#raw_data = download_data1(symbol=symbol, start=start, end=end)
#rd100 = raw_data['Close']/10.0
raw_data = read_data('milk')
X, Y = generate_RNN_data(raw_data, window)
f, a = plt.subplots(3, 1, figsize=(12, 8))
for j, ds in enumerate(['train', 'val', 'test']):
    a[j].plot(Y[ds], label=ds + ' raw')
[i.legend() for i in a];
```
|
Quick check on the dimensions of the data, and make sure we don't have any NaNs:
```python
print([(a, X[a].shape) for a in X.keys()])
print([(a, Y[a].shape) for a in Y.keys()])
print([(a, np.isnan(X[a]).any()) for a in X.keys()])
print([(a, np.isnan(Y[a]).any()) for a in Y.keys()])
```
|
We define the next_batch() iterator that produces batches we can feed to the training function.
Note that because CNTK supports variable sequence lengths, we must feed the batches as lists of sequences. This is a convenience function to generate small batches of data, often referred to as minibatches.
```python
def next_batch(x, y, ds, size=10):
    """Get the next batch to process."""
    for i in range(0, len(x[ds]) - size, size):
        yield np.array(x[ds][i:i+size]), np.array(y[ds][i:i+size])
```
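To see what the iterator yields, here is a small run on dummy arrays (the 25-sample shape is made up; note that a final partial batch is dropped by the `range` bound):

```python
import numpy as np

def next_batch(x, y, ds, size=10):
    for i in range(0, len(x[ds]) - size, size):
        yield np.array(x[ds][i:i+size]), np.array(y[ds][i:i+size])

X = {"train": np.arange(25).reshape(25, 1)}
Y = {"train": np.arange(25).reshape(25, 1)}
batches = list(next_batch(X, Y, "train", size=10))
print(len(batches))  # range(0, 15, 10) -> i = 0, 10 -> 2 batches
```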
|
Network modeling
We set up our network with $N$ LSTM cells, each receiving the single value of our sequence as input at every time step. The $N$ outputs from the LSTM layer are the input to a dense layer that produces a single output. So, we have 1 input, $N$ hidden LSTM nodes and a single output node.
To train, CNTK will unroll the network over time steps and backpropagate the error through time.
Between the LSTM and the dense layer we insert a dropout operation that randomly ignores 20% of the values coming from the LSTM during training, to prevent overfitting. When using the model to make predictions, all values are retained.
We are only interested in predicting one step ahead when we get to the end of each training sequence, so we use another operator to pick out the last item in the sequence before connecting the output layer.
Using CNTK we can easily express our model:
```python
def create_model(I, H, O):
    """Create the model for time series prediction."""
    with C.layers.default_options(initial_state=0.1):
        x = C.layers.Input(I)
        m = C.layers.Recurrence(C.layers.LSTM(H))(x)
        m = C.ops.sequence.last(m)
        m = C.layers.Dropout(0.2)(m)
        m = C.layers.Dense(O)(m)
        # also create a layer to represent the target; it has the same number
        # of units as the output and has to share the same dynamic axes
        y = C.layers.Input(1, dynamic_axes=m.dynamic_axes, name="y")
    return (m, x, y)
```
|
CNTK inputs, outputs and parameters are organized as tensors, or n-dimensional arrays. CNTK refers to these different dimensions as axes.
Every CNTK tensor has some static axes and some dynamic axes. The static axes have the same length throughout the life of the network whereas the dynamic axes can vary in length from instance to instance.
The axis over which you run a recurrence is dynamic and thus its dimensions are unknown at the time you define your variable. Thus the input variable only lists the shapes of the static axes. Since our inputs are a sequence of one dimensional numbers we specify the input as
C.layers.Input(1)
Both the $N$ instances in the sequence (training window) and the number of sequences that form a mini-batch are implicitly represented in the default dynamic axis as shown below in the form of defaults.
x_axes = [C.Axis.default_batch_axis(), C.Axis.default_dynamic_axis()]
C.layers.Input(1, dynamic_axes=x_axes)
More information here.
The trainer needs a definition of the loss function and the optimization algorithm.
```python
def create_trainer(model, output, learning_rate=0.001, batch_size=20):
    # the learning rate
    lr_schedule = C.learning_rate_schedule(learning_rate, C.UnitType.minibatch)
    # loss function
    loss = C.ops.squared_error(model, output)
    # use squared error for evaluation as well
    error = C.ops.squared_error(model, output)
    # use the adam optimizer (note: the parameters come from the model argument,
    # not the global z, so the function is self-contained)
    momentum_time_constant = C.learner.momentum_as_time_constant_schedule(batch_size / -math.log(0.9))
    learner = C.learner.adam_sgd(model.parameters,
                                 lr=lr_schedule,
                                 momentum=momentum_time_constant,
                                 unit_gain=True)
    # construct the trainer
    return C.Trainer(model, (loss, error), [learner])
```
|
Set up everything else we need for training the model: define user-specified training parameters, inputs, outputs, the model and the optimizer.
```python
# create the model with 1 input (x), 10 LSTM units, and 1 output unit (y)
(z, x, y) = create_model(1, 10, 1)
# Construct the trainer
BATCH_SIZE = 2
trainer = create_trainer(z, y, learning_rate=0.0002, batch_size=BATCH_SIZE)
```
|
Training the network
We are ready to train. 500 epochs should yield acceptable results.
```python
# Training parameters
EPOCHS = 500
# train
loss_summary = []
start = time.time()
for epoch in range(0, EPOCHS):
    for x1, y1 in next_batch(X, Y, "train", BATCH_SIZE):
        trainer.train_minibatch({x: x1, y: y1})
    if epoch % (EPOCHS / 20) == 0:
        training_loss = cntk.utils.get_train_loss(trainer)
        loss_summary.append((epoch, training_loss))
        print("epoch: {:3d}/{}, loss: {:.5f}".format(epoch, EPOCHS, training_loss))
loss_summary = np.array(loss_summary)
print("training took {0:.1f} sec".format(time.time() - start))
```
|
Let's look at how the loss function decreases over time to see if the model is converging
```python
plt.plot(loss_summary[:, 0], loss_summary[:, 1], label='training loss');
```
|
Normally we would validate the training on the data that we set aside for validation, but since the input data is small we can run validation on all parts of the dataset.
```python
# validate
def get_mse(X, Y, labeltxt):
    result = 0.0
    for x1, y1 in next_batch(X, Y, labeltxt, BATCH_SIZE):
        eval_error = trainer.test_minibatch({x: x1, y: y1})
        result += eval_error
    return result / len(X[labeltxt])

# print the train and validation errors
for labeltxt in ["train", "val", "test"]:
    print("mse for {}: {:.6f}".format(labeltxt, get_mse(X, Y, labeltxt)))
```
|
We check that the errors are roughly the same for the train, validation and test sets. We also plot the expected output (Y) and the model's predictions to show how well the simple LSTM approach worked.
```python
# predict
f, a = plt.subplots(3, 1, figsize=(12, 8))
for j, ds in enumerate(["train", "val", "test"]):
    results = []
    for x1, y1 in next_batch(X, Y, ds, BATCH_SIZE):
        pred = z.eval({x: x1})
        results.extend(pred[:, 0])
    a[j].plot(Y[ds], label=ds + ' raw')
    a[j].plot(results, label=ds + ' predicted')
[i.legend() for i in a];
```
|
User Defined Module
```python
# save the following code as example.py
def add(a, b):
    return a + b

# now you can import example.py
# import example
# example.add(5, 4)
```
*Source: Modules+and+Packages.ipynb, from vravishankar/Jupyter-Books (mit)*
|
Import with renaming
We can import a module by renaming it as follows:
```python
import math as m
print(m.pi)
```
|
from...import statement
We can import specific names from a module without importing the whole module.
```python
from math import pi
print(pi)  # note that the dot operator is not required
```
|
To import all definitions from a module, specify '*' as below. Note that this is not good practice, as it can lead to duplicate definitions for an identifier.
```python
from math import *
print(pi)
```
|
Module Search Path
While importing a module, Python looks for its definition in several places, in the following order:
1. Built-in modules
2. The current directory
3. PYTHONPATH (an environment variable holding a list of directories)
4. The installation-dependent default directory
```python
import sys
sys.path
```
|
Reloading a Module
Python loads a module only once, even if you import it multiple times. If the module has changed for some reason and you want to reload it, use the reload() function from the 'imp' module.
```python
# my_module.py
# print('This code got executed')

# import imp
# import my_module
# This code got executed
# import my_module
# import my_module
# imp.reload(my_module)
```
|
Module Functions
```python
import os
print(dir(os))
import math
print(math.__doc__)
math.__name__
```
|
Packages
A package is just a way of collecting related modules together within a single tree-like hierarchy.
Just as we organise files in directories, Python has packages for directories and modules for files. And just as a directory can contain sub-directories and files, a Python package can have sub-packages and modules.
A directory must contain a file named `__init__.py` for Python to consider it a package. This file can be empty, but in general it holds initialisation code for the package.
The following are different ways of importing the packages.
import <package name>
import <package name>.<module name>
from <package name> import <module name>
```python
# examples
import math
from math import pi
print(pi)
```
|
Finally, we can do post-estimation prediction and forecasting. Notice that the end period can be specified as a date.
|
# Perform prediction and forecasting
predict = res.get_prediction()
forecast = res.get_forecast('2014')
fig, ax = plt.subplots(figsize=(10,4))
# Plot the results
df['lff'].plot(ax=ax, style='k.', label='Observations')
predict.predicted_mean.plot(ax=ax, label='One-step-ahead Prediction')
predict_ci = predict.conf_int(alpha=0.05)
predict_index = np.arange(len(predict_ci))
ax.fill_between(predict_index[2:], predict_ci[2:, 0], predict_ci[2:, 1], alpha=0.1)
forecast.predicted_mean.plot(ax=ax, style='r', label='Forecast')
forecast_ci = forecast.conf_int()
forecast_index = np.arange(len(predict_ci), len(predict_ci) + len(forecast_ci))
ax.fill_between(forecast_index, forecast_ci[:, 0], forecast_ci[:, 1], alpha=0.1)
# Cleanup the image
ax.set_ylim((4, 8));
legend = ax.legend(loc='lower left');
|
examples/notebooks/statespace_local_linear_trend.ipynb
|
edhuckle/statsmodels
|
bsd-3-clause
|
Signals
Then the input signal can be plotted as follows
(the x-axis is wrong of course; we will tackle this later on).
At first glance, it might be obvious (though it was not to me)
that the input is a superposition of two sine waves.
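For readers without the lab data, a hypothetical stand-in for `input_1kHz_15kHz` can be synthesised; the 48 kHz sample rate, 10 ms duration, and 0.5 amplitude ratio below are invented assumptions, not the lab values:

```python
import numpy as np

# Hypothetical stand-in: a 1 kHz sine plus a quieter 15 kHz sine,
# sampled at an assumed 48 kHz for 10 ms.
fs = 48000
t = np.arange(480) / fs  # 480 samples = 10 ms at fs
input_stub = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 15000 * t)
print(len(input_stub))  # 480
```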
|
import matplotlib.pyplot as plt
def quickplt(sequence):
"""Plot the signal as-is."""
plt.plot(sequence)
plt.show()
quickplt(input_1kHz_15kHz)
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
To confirm this suspicion, we apply FFT on the signals
and plot the results:
|
from math import pi
import numpy as np
from numpy.fft import fft
def plt_polar(sequence):
"""Plot the complex signal in polar coordinate, from 0 to pi*2."""
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2)
domain = np.linspace(0, pi*2, len(sequence))
ax1.plot(domain, np.abs(sequence))
ax1.set_title('magnitude')
ax2.plot(domain, np.angle(sequence))
ax2.set_title('phase')
plt.show()
inpft = fft(input_1kHz_15kHz)
plt_polar(inpft)
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
Since the low frequencies are lurking around k*pi*2 and the high frequencies around k*pi*2 + pi, we can make a good guess that the higher peak is the 1 kHz component and the lower one is the 15 kHz component. The sample rate would then sit at exactly k*pi*2 + pi and can be calculated as
|
sample_rate = (len(inpft)/2) / np.argmax(inpft) * 1000
print(sample_rate)
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
We can then replot the signal with the correct scaling
|
def plt_time(sequence):
"""Plot the signal in time domain."""
length = len(sequence)
plt.plot(np.linspace(0, length/sample_rate, length), sequence)
plt.show()
plt_time(input_1kHz_15kHz)
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
Plotting these in Cartesian coordinates doesn't give me any further understanding, however:
|
def plt_rect(sequence):
"""Plot the complex signal in rectangular coordinate, from 0 to pi*2."""
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2)
domain = np.linspace(0, pi*2, len(sequence))
ax1.plot(domain, np.real(sequence))
ax1.set_title('real')
ax2.plot(domain, np.imag(sequence))
ax2.set_title('imaginary')
plt.show()
plt_rect(inpft)
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
Systems
Low-pass Filter
In this section, we also try to do the same thing
for the impulse response, which seems to be a sinc function.
|
quickplt(impulse_response)
lfft = fft(impulse_response)
plt_polar(lfft)
plt_rect(lfft)
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
As shown in the graphs, the system is a low-pass filter.
Applying the system to the input we get what is undeniably the sinusoidal signal of frequency of 1 kHz:
|
output = np.convolve(input_1kHz_15kHz, impulse_response)
plt_time(output)
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
As an alternative to convolution in the time domain, we can multiply the FTs:
|
from numpy.fft import ifft
from scipy import interpolate
f = interpolate.interp1d(np.linspace(0, pi*2, len(lfft)), lfft, kind='zero')
plt_time(np.real(ifft(inpft*f(np.linspace(0, pi*2, len(inpft))))))
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
It is noticeable that the wave is now distorted in shape. Funnily enough, other interpolation methods give much worse results (using the convolved one as the reference), for example the linear one:
|
f = interpolate.interp1d(np.linspace(0, pi*2, len(lfft)), lfft, kind='linear')
plt_time(np.real(ifft(inpft*f(np.linspace(0, pi*2, len(inpft))))))
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
Notice that the low-pass filter filtered out the 15 kHz wave:
|
plt_polar(fft(output))
plt_rect(fft(output))
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
High-pass filter
To turn the given low-pass filter into a high-pass one,
we subtract it from the impulse signal (which is equivalent to
subtracting it from 1 in the frequency domain, thanks to linearity):
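The linearity argument can be checked on a toy kernel (the 5-tap moving average below is an invented stand-in, not the lab's filter): subtracting the low-pass from a centred unit impulse makes the frequency responses complementary, which is easiest to see at DC.

```python
import numpy as np

lp = np.ones(5) / 5    # toy low-pass: 5-tap moving average
delta = np.zeros(5)
delta[2] = 1.0         # unit impulse centred on the kernel
hp = delta - lp        # spectral inversion done in the time domain

# At DC the responses are exactly complementary:
# the low-pass passes DC, the derived high-pass blocks it.
H_lp = np.fft.fft(lp)
H_hp = np.fft.fft(hp)
print(round(abs(H_lp[0]), 6))  # 1.0
print(round(abs(H_hp[0]), 6))  # 0.0
```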
|
high_pass = ((lambda m: [0]*m + [1] + [0]*m)(np.argmax(impulse_response))
- np.array(impulse_response))
quickplt(high_pass)
plt_polar(fft(high_pass))
plt_rect(fft(high_pass))
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
We then apply it to the input and get the high frequency signal of 15 kHz:
|
outputhf = np.convolve(input_1kHz_15kHz, high_pass)
plt_time(outputhf)
plt_polar(fft(outputhf))
plt_rect(fft(outputhf))
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
Other methods of creating a high-pass filter have been tried; however, the results are nowhere near as good:
Shifting the low-pass filter by pi in frequency domain:
|
high_pass_bad = ifft(np.roll(lfft, len(impulse_response)>>1))
outputhf_bad = np.convolve(input_1kHz_15kHz, high_pass_bad)
plt_rect(outputhf_bad)
plt_polar(fft(outputhf_bad))
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
Multiplying the low-pass filter in the time domain by (-1)^n (which also effectively shifts it by pi):
|
high_pass_worse = np.fromiter(((-1)**n for n in range(len(impulse_response))),
dtype=float) * impulse_response
outputhf_worse = np.convolve(input_1kHz_15kHz, high_pass_worse)
plt_time(outputhf_worse)
plt_polar(fft(outputhf_worse))
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
While both of these produce a high-frequency signal for most of the interval, at the start and end the volume is significantly higher and many different frequencies appear instead of just 15 kHz. This seems to disagree with the theory at first, but the theory only applies to an infinite-length impulse response; manipulating a rather small sample cannot give perfect results.
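The edge effect itself can be seen on a toy example (the cosine and 8-tap kernel below are invented, not the lab data): in a 'full' convolution, the first output sample sees only a single kernel tap, so the edges cannot match the steady-state behaviour.

```python
import numpy as np

x = np.cos(2 * np.pi * 0.05 * np.arange(200))  # steady cosine, x[0] == 1.0
h = np.ones(8) / 8                              # 8-tap moving average
y = np.convolve(x, h)                           # 'full' convolution

print(len(y))            # 207 == 200 + 8 - 1
print(y[0] == x[0] / 8)  # True: only one tap overlaps at the very edge
```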
Electrocardiogram
Given the following ECG signal:
|
ecg = [
0, 0.0010593, 0.0021186, 0.003178, 0.0042373, 0.0052966, 0.0063559,
0.0074153, 0.0084746, 0.045198, 0.081921, 0.11864, 0.15537, 0.19209,
0.22881, 0.26554, 0.30226, 0.33898, 0.30226, 0.26554, 0.22881, 0.19209,
0.15537, 0.11864, 0.081921, 0.045198, 0.0084746, 0.0077684, 0.0070621,
0.0063559, 0.0056497, 0.0049435, 0.0042373, 0.0035311, 0.0028249,
0.0021186, 0.0014124, 0.00070621, 0, -0.096045, -0.19209, -0.28814,
-0.073446, 0.14124, 0.35593, 0.57062, 0.78531, 1, 0.73729, 0.47458,
0.21186, -0.050847, -0.31356, -0.57627, -0.83898, -0.55932, -0.27966, 0,
0.00073692, 0.0014738, 0.0022108, 0.0029477, 0.0036846, 0.0044215,
0.0051584, 0.0058954, 0.0066323, 0.0073692, 0.0081061, 0.008843, 0.00958,
0.010317, 0.011054, 0.011791, 0.012528, 0.013265, 0.014001, 0.014738,
0.015475, 0.016212, 0.016949, 0.03484, 0.052731, 0.070621, 0.088512,
0.1064, 0.12429, 0.14218, 0.16008, 0.17797, 0.16186, 0.14576, 0.12966,
0.11356, 0.097458, 0.081356, 0.065254, 0.049153, 0.033051, 0.016949,
0.013559, 0.010169, 0.0067797, 0.0033898, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.0010593, 0.0021186, 0.003178,
0.0042373, 0.0052966, 0.0063559, 0.0074153, 0.0084746, 0.045198, 0.081921,
0.11864, 0.15537, 0.19209, 0.22881, 0.26554, 0.30226, 0.33898, 0.30226,
0.26554, 0.22881, 0.19209, 0.15537, 0.11864, 0.081921, 0.045198, 0.0084746,
0.0077684, 0.0070621, 0.0063559, 0.0056497, 0.0049435, 0.0042373,
0.0035311, 0.0028249, 0.0021186, 0.0014124, 0.00070621, 0, -0.096045,
-0.19209, -0.28814, -0.073446, 0.14124, 0.35593, 0.57062, 0.78531, 1,
0.73729, 0.47458, 0.21186, -0.050847, -0.31356, -0.57627, -0.83898,
-0.55932, -0.27966, 0, 0.00073692, 0.0014738, 0.0022108, 0.0029477,
0.0036846, 0.0044215, 0.0051584, 0.0058954, 0.0066323, 0.0073692,
0.0081061, 0.008843, 0.00958, 0.010317, 0.011054, 0.011791, 0.012528,
0.013265, 0.014001, 0.014738, 0.015475, 0.016212, 0.016949, 0.03484,
0.052731, 0.070621, 0.088512, 0.1064, 0.12429, 0.14218, 0.16008, 0.17797,
0.16186, 0.14576, 0.12966, 0.11356, 0.097458, 0.081356, 0.065254, 0.049153,
0.033051, 0.016949, 0.013559, 0.010169, 0.0067797, 0.0033898, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.0010593,
0.0021186, 0.003178, 0.0042373, 0.0052966, 0.0063559, 0.0074153, 0.0084746,
0.045198, 0.081921, 0.11864, 0.15537, 0.19209, 0.22881, 0.26554, 0.30226,
0.33898, 0.30226, 0.26554, 0.22881, 0.19209, 0.15537, 0.11864, 0.081921,
0.045198, 0.0084746, 0.0077684, 0.0070621, 0.0063559, 0.0056497, 0.0049435,
0.0042373, 0.0035311, 0.0028249, 0.0021186, 0.0014124, 0.00070621, 0,
-0.096045, -0.19209, -0.28814, -0.073446, 0.14124, 0.35593, 0.57062,
0.78531, 1, 0.73729, 0.47458, 0.21186, -0.050847, -0.31356, -0.57627,
-0.83898, -0.55932, -0.27966, 0, 0.00073692, 0.0014738, 0.0022108,
0.0029477, 0.0036846, 0.0044215, 0.0051584, 0.0058954, 0.0066323,
0.0073692, 0.0081061, 0.008843, 0.00958, 0.010317, 0.011054, 0.011791,
0.012528, 0.013265, 0.014001, 0.014738, 0.015475, 0.016212, 0.016949,
0.03484, 0.052731, 0.070621, 0.088512, 0.1064, 0.12429, 0.14218, 0.16008,
0.17797, 0.16186, 0.14576, 0.12966, 0.11356, 0.097458, 0.081356, 0.065254,
0.049153, 0.033051, 0.016949, 0.013559, 0.010169, 0.0067797, 0.0033898, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0.0010593, 0.0021186, 0.003178, 0.0042373, 0.0052966, 0.0063559, 0.0074153,
0.0084746, 0.045198, 0.081921, 0.11864, 0.15537, 0.19209, 0.22881, 0.26554,
0.30226, 0.33898, 0.30226, 0.26554, 0.22881, 0.19209, 0.15537, 0.11864,
0.081921, 0.045198, 0.0084746, 0.0077684, 0.0070621, 0.0063559, 0.0056497,
0.0049435, 0.0042373, 0.0035311, 0.0028249, 0.0021186, 0.0014124,
0.00070621, 0, -0.096045, -0.19209, -0.28814, -0.073446, 0.14124, 0.35593,
0.57062, 0.78531, 1, 0.73729, 0.47458, 0.21186, -0.050847, -0.31356,
-0.57627, -0.83898, -0.55932, -0.27966, 0, 0.00073692, 0.0014738,
0.0022108, 0.0029477, 0.0036846, 0.0044215, 0.0051584, 0.0058954,
0.0066323, 0.0073692, 0.0081061, 0.008843, 0.00958, 0.010317, 0.011054,
0.011791, 0.012528, 0.013265, 0.014001, 0.014738, 0.015475, 0.016212,
0.016949, 0.03484, 0.052731, 0.070621, 0.088512, 0.1064, 0.12429, 0.14218,
0.16008, 0.17797, 0.16186, 0.14576, 0.12966, 0.11356, 0.097458, 0.081356,
0.065254, 0.049153, 0.033051, 0.016949, 0.013559, 0.010169, 0.0067797,
0.0033898, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0.0010593, 0.0021186, 0.003178, 0.0042373, 0.0052966,
0.0063559, 0.0074153, 0.0084746, 0.045198, 0.081921, 0.11864, 0.15537,
0.19209, 0.22881, 0.26554, 0.30226, 0.33898, 0.30226, 0.26554, 0.22881,
0.19209, 0.15537, 0.11864, 0.081921, 0.045198, 0.0084746, 0.0077684,
0.0070621, 0.0063559, 0.0056497, 0.0049435, 0.0042373, 0.0035311,
0.0028249, 0.0021186, 0.0014124, 0.00070621, 0, -0.096045, -0.19209,
-0.28814, -0.073446, 0.14124, 0.35593, 0.57062, 0.78531, 1, 0.73729,
0.47458, 0.21186, -0.050847, -0.31356, -0.57627, -0.83898, -0.55932,
-0.27966, 0, 0.00073692, 0.0014738, 0.0022108, 0.0029477, 0.0036846,
0.0044215, 0.0051584, 0.0058954, 0.0066323, 0.0073692, 0.0081061, 0.008843,
0.00958, 0.010317, 0.011054, 0.011791, 0.012528, 0.013265, 0.014001,
0.014738, 0.015475, 0.016212, 0.016949, 0.03484, 0.052731, 0.070621,
0.088512, 0.1064, 0.12429, 0.14218, 0.16008, 0.17797, 0.16186, 0.14576,
0.12966, 0.11356, 0.097458, 0.081356, 0.065254, 0.049153, 0.033051,
0.016949, 0.013559, 0.010169, 0.0067797, 0.0033898, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
quickplt(ecg)
plt_polar(fft(ecg))
plt_rect(fft(ecg))
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
The plots of the ECG give us the initial intuition that it is a rather low-frequency signal (in fact the heart rate lies somewhere between 50 and 500 Hz) with multiple phases each period. This is confirmed by the low-passed response, which looks surprisingly similar to the original in the time domain:
|
ecglf = np.convolve(ecg, impulse_response)
quickplt(ecglf)
plt_polar(fft(ecglf))
plt_rect(fft(ecglf))
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
Just for fun, we add some high frequency noise to the signal (because heartbeats are repetitive and boring!):
|
real, imag = np.random.random((2, len(ecg)-len(high_pass)+1))
whitenoise = ifft(real + imag*1j)
noise_hf = abs(np.convolve(whitenoise, high_pass))
quickplt(noise_hf)
noisy_ecg = ecg + noise_hf
quickplt(noisy_ecg)
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
To smooth the noisy signal back to normal, the low-pass filter should be able to do the job:
|
recovered_ecg = np.convolve(noisy_ecg, impulse_response)
quickplt(recovered_ecg)
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
Bonus
In this section, we'll try to have some fun playing the signals
using palace. While the main purpose
of palace is positional audio rendering and environmental effects,
it also provides a handy decoder base class, which can easily be derived from:
|
!pip install palace
from palace import BaseDecoder, Buffer, Context, Device
class Dec(BaseDecoder):
"""Generator of elementary signals."""
def __init__(self, content):
self.content, self.size = content.copy(), len(content)
@BaseDecoder.frequency.getter
def frequency(self) -> int: return int(sample_rate)
@BaseDecoder.channel_config.getter
def channel_config(self) -> str:
return 'Mono'
@BaseDecoder.sample_type.getter
def sample_type(self) -> str:
return '32-bit float'
@BaseDecoder.length.getter
def length(self) -> int: return self.size
def seek(self, pos: int) -> bool: return False
@BaseDecoder.loop_points.getter
def loop_points(self): return 0, 0
def read(self, count: int) -> bytes:
if count > len(self.content):
try:
return np.float32(self.content).tobytes()
finally:
self.content = []
data, self.content = self.content[:count], self.content[count:]
return np.float32(data).tobytes()
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
The input and output signals can then be played by running:
|
from time import sleep
with Device() as d, Context(d) as c:
with Buffer.from_decoder(Dec(input_1kHz_15kHz), 'input') as b, b.play() as s:
sleep(1)
with Buffer.from_decoder(Dec(output), 'lf') as b, b.play() as s:
sleep(1)
with Buffer.from_decoder(Dec(outputhf), 'hf') as b, b.play() as s:
sleep(1)
|
usth/ICT2.9/practical/dsp.ipynb
|
McSinyx/hsg
|
gpl-3.0
|
and the system, where we, as we did in part I, only commit the orientation of the dipole moment to the particles and take the magnitude into account in the prefactor of Dipolar P3M (for more details see part I).
Hint:
It should be noted that we seed both the Langevin thermostat and NumPy's random number generator. The latter means that the initial configuration of our system is the same every time this script is executed. As the time evolution of the system depends not solely on the Langevin thermostat but also on the numeric accuracy and on DP3M (the tuned parameters differ slightly every run), it is only partly predetermined. You can change the seeds to simulate with a different initial configuration and a guaranteed different time evolution.
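The reproducibility effect of seeding NumPy can be sketched in isolation (the array shape here is arbitrary):

```python
import numpy as np

# Same seed -> identical "random" draws, so the initial particle
# configuration is reproducible across runs.
np.random.seed(1)
first = np.random.random((4, 3))
np.random.seed(1)
second = np.random.random((4, 3))
print(np.array_equal(first, second))  # True
```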
|
system = espressomd.System(box_l=(box_size,box_size,box_size))
system.time_step = dt
system.thermostat.set_langevin(kT=kT, gamma=gamma, seed=1)
# Lennard Jones interaction
system.non_bonded_inter[0,0].lennard_jones.set_params(epsilon=lj_epsilon,sigma=lj_sigma,cutoff=lj_cut, shift="auto")
# Random dipole moments
np.random.seed(seed = 1)
dip_phi = np.random.random((N,1)) *2. * np.pi
dip_cos_theta = 2*np.random.random((N,1)) -1
dip_sin_theta = np.sin(np.arccos(dip_cos_theta))
dip = np.hstack((
dip_sin_theta *np.sin(dip_phi),
dip_sin_theta *np.cos(dip_phi),
dip_cos_theta))
# Random positions in system volume
pos = box_size * np.random.random((N,3))
# Add particles
system.part.add(pos=pos, rotation=N*[(1,1,1)], dip=dip)
# Remove overlap between particles by means of the steepest descent method
system.integrator.set_steepest_descent(
f_max=0,gamma=0.1,max_displacement=0.05)
while system.analysis.energy()["total"] > 5*kT*N:
system.integrator.run(20)
# Switch to velocity Verlet integrator
system.integrator.set_vv()
# tune verlet list skin
system.cell_system.skin = 0.8
# Setup dipolar P3M
accuracy = 5E-4
system.actors.add(DipolarP3M(accuracy=accuracy,prefactor=dip_lambda*lj_sigma**3*kT))
|
doc/tutorials/11-ferrofluid/11-ferrofluid_part3.ipynb
|
mkuron/espresso
|
gpl-3.0
|
The Python code below encodes a Tetrahedron type based solely on its six edge lengths. The code makes no attempt to determine the consequent angles.
A complicated volume formula, mined from the history books and streamlined by mathematician Gerald de Jong, outputs the volume of said tetrahedron in both IVM and XYZ units.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/45589318711/in/dateposted-public/" title="dejong"><img src="https://farm2.staticflickr.com/1935/45589318711_677d272397.jpg" width="417" height="136" alt="dejong"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
The unit tests that follow ensure it produces the expected results. The formula bears a great resemblance to the one by Piero della Francesca.
|
from math import sqrt as rt2
from qrays import Qvector, Vector
R =0.5
D =1.0
S3 = pow(9/8, 0.5)
root2 = rt2(2)
root3 = rt2(3)
root5 = rt2(5)
root6 = rt2(6)
PHI = (1 + root5)/2.0
class Tetrahedron:
"""
Takes six edges of tetrahedron with faces
(a,b,d)(b,c,e)(c,a,f)(d,e,f) -- returns volume
in ivm and xyz units
"""
def __init__(self, a,b,c,d,e,f):
self.a, self.a2 = a, a**2
self.b, self.b2 = b, b**2
self.c, self.c2 = c, c**2
self.d, self.d2 = d, d**2
self.e, self.e2 = e, e**2
self.f, self.f2 = f, f**2
def ivm_volume(self):
ivmvol = ((self._addopen() - self._addclosed() - self._addopposite())/2) ** 0.5
return ivmvol
def xyz_volume(self):
xyzvol = rt2(8/9) * self.ivm_volume()
return xyzvol
def _addopen(self):
a2,b2,c2,d2,e2,f2 = self.a2, self.b2, self.c2, self.d2, self.e2, self.f2
sumval = f2*a2*b2
sumval += d2 * a2 * c2
sumval += a2 * b2 * e2
sumval += c2 * b2 * d2
sumval += e2 * c2 * a2
sumval += f2 * c2 * b2
sumval += e2 * d2 * a2
sumval += b2 * d2 * f2
sumval += b2 * e2 * f2
sumval += d2 * e2 * c2
sumval += a2 * f2 * e2
sumval += d2 * f2 * c2
return sumval
def _addclosed(self):
a2,b2,c2,d2,e2,f2 = self.a2, self.b2, self.c2, self.d2, self.e2, self.f2
sumval = a2 * b2 * d2
sumval += d2 * e2 * f2
sumval += b2 * c2 * e2
sumval += a2 * c2 * f2
return sumval
def _addopposite(self):
a2,b2,c2,d2,e2,f2 = self.a2, self.b2, self.c2, self.d2, self.e2, self.f2
sumval = a2 * e2 * (a2 + e2)
sumval += b2 * f2 * (b2 + f2)
sumval += c2 * d2 * (c2 + d2)
return sumval
def make_tet(v0,v1,v2):
"""
three edges from any corner, remaining three edges computed
"""
tet = Tetrahedron(v0.length(), v1.length(), v2.length(),
(v0-v1).length(), (v1-v2).length(), (v2-v0).length())
return tet.ivm_volume(), tet.xyz_volume()
tet = Tetrahedron(D, D, D, D, D, D)
print(tet.ivm_volume())
|
Computing Volumes.ipynb
|
4dsolutions/Python5
|
mit
|
The make_tet function takes three vectors from a common corner, in terms of vectors with coordinates, and computes the remaining missing lengths, thereby getting the information it needs to use the Tetrahedron class as before.
|
import unittest
from qrays import Vector, Qvector
class Test_Tetrahedron(unittest.TestCase):
def test_unit_volume(self):
tet = Tetrahedron(D, D, D, D, D, D)
self.assertEqual(tet.ivm_volume(), 1, "Volume not 1")
def test_e_module(self):
e0 = D
e1 = root3 * PHI**-1
e2 = rt2((5 - root5)/2)
e3 = (3 - root5)/2
e4 = rt2(5 - 2*root5)
e5 = 1/PHI
tet = Tetrahedron(e0, e1, e2, e3, e4, e5)
self.assertTrue(1/23 > tet.ivm_volume()/8 > 1/24, "Wrong E-mod")
def test_unit_volume2(self):
tet = Tetrahedron(R, R, R, R, R, R)
self.assertAlmostEqual(float(tet.xyz_volume()), 0.117851130)
def test_phi_edge_tetra(self):
tet = Tetrahedron(D, D, D, D, D, PHI)
self.assertAlmostEqual(float(tet.ivm_volume()), 0.70710678)
def test_right_tetra(self):
e = pow((root3/2)**2 + (root3/2)**2, 0.5) # right tetrahedron
tet = Tetrahedron(D, D, D, D, D, e)
self.assertAlmostEqual(tet.xyz_volume(), 1)
def test_quadrant(self):
qA = Qvector((1,0,0,0))
qB = Qvector((0,1,0,0))
qC = Qvector((0,0,1,0))
tet = make_tet(qA, qB, qC)
self.assertAlmostEqual(tet[0], 0.25)
def test_octant(self):
x = Vector((0.5, 0, 0))
y = Vector((0 , 0.5, 0))
z = Vector((0 , 0 , 0.5))
tet = make_tet(x,y,z)
self.assertAlmostEqual(tet[1], 1/6, 5) # good to 5 places
def test_quarter_octahedron(self):
a = Vector((1,0,0))
b = Vector((0,1,0))
c = Vector((0.5,0.5,root2/2))
tet = make_tet(a, b, c)
self.assertAlmostEqual(tet[0], 1, 5) # good to 5 places
def test_xyz_cube(self):
a = Vector((0.5, 0.0, 0.0))
b = Vector((0.0, 0.5, 0.0))
c = Vector((0.0, 0.0, 0.5))
R_octa = make_tet(a,b,c)
self.assertAlmostEqual(6 * R_octa[1], 1, 4) # good to 4 places
def test_s3(self):
D_tet = Tetrahedron(D, D, D, D, D, D)
a = Vector((0.5, 0.0, 0.0))
b = Vector((0.0, 0.5, 0.0))
c = Vector((0.0, 0.0, 0.5))
R_cube = 6 * make_tet(a,b,c)[1]
self.assertAlmostEqual(D_tet.xyz_volume() * S3, R_cube, 4)
def test_martian(self):
p = Qvector((2,1,0,1))
q = Qvector((2,1,1,0))
r = Qvector((2,0,1,1))
result = make_tet(5*q, 2*p, 2*r)
self.assertAlmostEqual(result[0], 20, 7)
def test_phi_tet(self):
"edges from common vertex: phi, 1/phi, 1"
p = Vector((1, 0, 0))
q = Vector((1, 0, 0)).rotz(60) * PHI
r = Vector((0.5, root3/6, root6/3)) * 1/PHI
result = make_tet(p, q, r)
self.assertAlmostEqual(result[0], 1, 7)
def test_phi_tet_2(self):
p = Qvector((2,1,0,1))
q = Qvector((2,1,1,0))
r = Qvector((2,0,1,1))
result = make_tet(PHI*q, (1/PHI)*p, r)
self.assertAlmostEqual(result[0], 1, 7)
def test_phi_tet_3(self):
T = Tetrahedron(PHI, 1/PHI, 1.0,
root2, root2/PHI, root2)
result = T.ivm_volume()
self.assertAlmostEqual(result, 1, 7)
def test_koski(self):
a = 1
b = PHI ** -1
c = PHI ** -2
d = (root2) * PHI ** -1
e = (root2) * PHI ** -2
f = (root2) * PHI ** -1
T = Tetrahedron(a,b,c,d,e,f)
result = T.ivm_volume()
self.assertAlmostEqual(result, PHI ** -3, 7)
a = Test_Tetrahedron()
R =0.5
D =1.0
suite = unittest.TestLoader().loadTestsFromModule(a)
unittest.TextTestRunner().run(suite)
|
Computing Volumes.ipynb
|
4dsolutions/Python5
|
mit
|
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/41211295565/in/album-72157624750749042/" title="Martian Multiplication"><img src="https://farm1.staticflickr.com/907/41211295565_59145e2f63.jpg" width="500" height="312" alt="Martian Multiplication"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
The above tetrahedron has a=2, b=2, c=5, for a volume of 20. The remaining three lengths have not been computed as it's sufficient to know only a, b, c if the angles between them are those of the regular tetrahedron.
That's how IVM volume is computed: multiply a * b * c from a regular tetrahedron corner, then "close the lid" to see the volume.
|
a = 2
b = 4
c = 5
d = 3.4641016151377544
e = 4.58257569495584
f = 4.358898943540673
tetra = Tetrahedron(a,b,c,d,e,f)
print("IVM volume of tetra:", round(tetra.ivm_volume(),5))
|
Computing Volumes.ipynb
|
4dsolutions/Python5
|
mit
|
Let's define a MITE, one of these 24 identical space-filling tetrahedra, with reference to D=1, R=0.5, as this is how our Tetrahedron class is calibrated. The cube's 12 edges will all be √2/2.
Edges 'a' 'b' 'c' fan out from the cube center, with 'b' going up to a face center, with 'a' and 'c' to adjacent ends of the face's edge.
From the cube's center to mid-face is √2/4 (half an edge), our 'b'. 'a' and 'c' are both half the cube's body diagonal of √(3/2)/2 or √(3/8).
Edges 'd', 'e' and 'f' define the facet opposite the cube's center.
'd' and 'e' are both half face diagonals or 0.5, whereas 'f' is a cube edge, √2/2. This gives us our tetrahedron:
|
b = rt2(2)/4
a = c = rt2(3/8)
d = e = 0.5
f = rt2(2)/2
mite = Tetrahedron(a, b, c, d, e, f)
print("IVM volume of Mite:", round(mite.ivm_volume(),5))
print("XYZ volume of Mite:", round(mite.xyz_volume(),5))
|
Computing Volumes.ipynb
|
4dsolutions/Python5
|
mit
|
Allowing for floating point error, this space-filling right tetrahedron has a volume of 0.125 or 1/8. Since 24 of them form a cube, said cube has a volume of 3. The XYZ volume, on the other hand, is what we'd expect from a regular tetrahedron of edges 0.5 in the current calibration system.
|
regular = Tetrahedron(0.5, 0.5, 0.5, 0.5, 0.5, 0.5)
print("MITE volume in XYZ units:", round(regular.xyz_volume(),5))
print("XYZ volume of 24-Mite Cube:", round(24 * regular.xyz_volume(),5))
|
Computing Volumes.ipynb
|
4dsolutions/Python5
|
mit
|
The MITE (minimum tetrahedron) further dissects into component modules: a left and a right A module, plus either a left or a right B module. Outwardly, the positive and negative MITEs look the same. Here are some drawings from the research of R. Buckminster Fuller, the chief popularizer of the A and B modules.
In a different Jupyter Notebook, we could run these tetrahedra through our volume computer to discover that both As and Bs have a volume of 1/24 in IVM units.
Instead, let's take a look at the E-module and compute its volume.
<br />
The black hub is at the center of the RT, as shown here...
<br />
<div style="text-align: center">
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/24971714468/in/dateposted-public/" title="E module with origin"><img src="https://farm5.staticflickr.com/4516/24971714468_46e14ce4b5_z.jpg" width="640" height="399" alt="E module with origin"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<b>RT center is the black hub (Koski with vZome)</b>
</div>
|
from math import sqrt as rt2
from tetravolume import make_tet, Vector
ø = (rt2(5)+1)/2
e0 = Black_Yellow = rt2(3)*ø**-1
e1 = Black_Blue = 1
e3 = Yellow_Blue = (3 - rt2(5))/2
e6 = Black_Red = rt2((5 - rt2(5))/2)
e7 = Blue_Red = 1/ø
# E-mod is a right tetrahedron, so xyz is easy
v0 = Vector((Black_Blue, 0, 0))
v1 = Vector((Black_Blue, Yellow_Blue, 0))
v2 = Vector((Black_Blue, 0, Blue_Red))
# assumes R=0.5 so computed result is 8x needed
# volume, ergo divide by 8.
ivm, xyz = make_tet(v0,v1,v2)
print("IVM volume:", round(ivm/8, 5))
print("XYZ volume:", round(xyz/8, 5))
|
Computing Volumes.ipynb
|
4dsolutions/Python5
|
mit
|
Compute and visualize ERDS maps
This example calculates and displays ERDS maps of event-related EEG data. ERDS
(sometimes also written as ERD/ERS) is short for event-related
desynchronization (ERD) and event-related synchronization (ERS)
:footcite:PfurtschellerLopesdaSilva1999.
Conceptually, ERD corresponds to a decrease in power in a specific frequency
band relative to a baseline. Similarly, ERS corresponds to an increase in
power. An ERDS map is a time/frequency representation of ERD/ERS over a range
of frequencies :footcite:GraimannEtAl2002. ERDS maps are also known as ERSP
(event-related spectral perturbation) :footcite:Makeig1993.
We use a public EEG BCI data set containing two different motor imagery tasks
available at PhysioNet. The two tasks are imagined hand and feet movement. Our
goal is to generate ERDS maps for each of the two tasks.
First, we load the data and create epochs of 5s length. The data sets contain
multiple channels, but we will only consider the three channels C3, Cz, and C4.
We compute maps containing frequencies ranging from 2 to 35Hz. We map ERD to
red color and ERS to blue color, which is the convention in many ERDS
publications. Finally, we perform cluster-based permutation tests to estimate
significant ERDS values (corrected for multiple comparisons within channels).
|
# Authors: Clemens Brunner <clemens.brunner@gmail.com>
# Felix Klotzsche <klotzsche@cbs.mpg.de>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import mne
from mne.datasets import eegbci
from mne.io import concatenate_raws, read_raw_edf
from mne.time_frequency import tfr_multitaper
from mne.stats import permutation_cluster_1samp_test as pcluster_test
from mne.viz.utils import center_cmap
# load and preprocess data ####################################################
subject = 1 # use data from subject 1
runs = [6, 10, 14] # use only hand and feet motor imagery runs
fnames = eegbci.load_data(subject, runs)
raws = [read_raw_edf(f, preload=True) for f in fnames]
raw = concatenate_raws(raws)
raw.rename_channels(lambda x: x.strip('.')) # remove dots from channel names
events, _ = mne.events_from_annotations(raw, event_id=dict(T1=2, T2=3))
want_chs = ['C3', 'Cz', 'C4']
picks = mne.pick_channels(raw.info["ch_names"], want_chs)
# epoch data ##################################################################
tmin, tmax = -1, 4 # define epochs around events (in s)
event_ids = dict(hands=2, feet=3) # map event IDs to tasks
epochs = mne.Epochs(raw, events, event_ids, tmin - 0.5, tmax + 0.5,
picks=picks, baseline=None, preload=True)
# compute ERDS maps ###########################################################
freqs = np.arange(2, 36, 1) # frequencies from 2-35Hz
n_cycles = freqs # use constant t/f resolution
vmin, vmax = -1, 1.5 # set min and max ERDS values in plot
baseline = [-1, 0] # baseline interval (in s)
cmap = center_cmap(plt.cm.RdBu, vmin, vmax) # zero maps to white
kwargs = dict(n_permutations=100, step_down_p=0.05, seed=1,
buffer_size=None, out_type='mask') # for cluster test
# Run TF decomposition overall epochs
tfr = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles,
use_fft=True, return_itc=False, average=False,
decim=2)
tfr.crop(tmin, tmax)
tfr.apply_baseline(baseline, mode="percent")
for event in event_ids:
# select desired epochs for visualization
tfr_ev = tfr[event]
fig, axes = plt.subplots(1, 4, figsize=(12, 4),
gridspec_kw={"width_ratios": [10, 10, 10, 1]})
for ch, ax in enumerate(axes[:-1]): # for each channel
# positive clusters
_, c1, p1, _ = pcluster_test(tfr_ev.data[:, ch, ...], tail=1, **kwargs)
# negative clusters
_, c2, p2, _ = pcluster_test(tfr_ev.data[:, ch, ...], tail=-1,
**kwargs)
# note that we keep clusters with p <= 0.05 from the combined clusters
# of two independent tests; in this example, we do not correct for
# these two comparisons
c = np.stack(c1 + c2, axis=2) # combined clusters
p = np.concatenate((p1, p2)) # combined p-values
mask = c[..., p <= 0.05].any(axis=-1)
# plot TFR (ERDS map with masking)
tfr_ev.average().plot([ch], vmin=vmin, vmax=vmax, cmap=(cmap, False),
axes=ax, colorbar=False, show=False, mask=mask,
mask_style="mask")
ax.set_title(epochs.ch_names[ch], fontsize=10)
ax.axvline(0, linewidth=1, color="black", linestyle=":") # event
if ch != 0:
ax.set_ylabel("")
ax.set_yticklabels("")
fig.colorbar(axes[0].images[-1], cax=axes[-1])
fig.suptitle("ERDS ({})".format(event))
fig.show()
|
0.23/_downloads/d12911920e4d160c9fd8c97cffdda6b7/time_frequency_erds.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
This allows us to use additional plotting functions like
:func:seaborn.lineplot to plot confidence bands:
|
df = tfr.to_data_frame(time_format=None, long_format=True)
# Map to frequency bands:
freq_bounds = {'_': 0,
'delta': 3,
'theta': 7,
'alpha': 13,
'beta': 35,
'gamma': 140}
df['band'] = pd.cut(df['freq'], list(freq_bounds.values()),
labels=list(freq_bounds)[1:])
# Filter to retain only relevant frequency bands:
freq_bands_of_interest = ['delta', 'theta', 'alpha', 'beta']
df = df[df.band.isin(freq_bands_of_interest)]
df['band'] = df['band'].cat.remove_unused_categories()
# Order channels for plotting:
df['channel'] = df['channel'].cat.reorder_categories(want_chs, ordered=True)
g = sns.FacetGrid(df, row='band', col='channel', margin_titles=True)
g.map(sns.lineplot, 'time', 'value', 'condition', n_boot=10)
axline_kw = dict(color='black', linestyle='dashed', linewidth=0.5, alpha=0.5)
g.map(plt.axhline, y=0, **axline_kw)
g.map(plt.axvline, x=0, **axline_kw)
g.set(ylim=(None, 1.5))
g.set_axis_labels("Time (s)", "ERDS (%)")
g.set_titles(col_template="{col_name}", row_template="{row_name}")
g.add_legend(ncol=2, loc='lower center')
g.fig.subplots_adjust(left=0.1, right=0.9, top=0.9, bottom=0.08)
|
0.23/_downloads/d12911920e4d160c9fd8c97cffdda6b7/time_frequency_erds.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
Polynomial regression
|
import numpy as np
import numpy.random as rnd
np.random.seed(42)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1)
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([-3, 3, 0, 10])
save_fig("quadratic_data_plot")
plt.show()
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
poly_features = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly_features.fit_transform(X)
X[0]
X_poly[0]
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
lin_reg.intercept_, lin_reg.coef_
X_new=np.linspace(-3, 3, 100).reshape(100, 1)
X_new_poly = poly_features.transform(X_new)
y_new = lin_reg.predict(X_new_poly)
plt.plot(X, y, "b.")
plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.legend(loc="upper left", fontsize=14)
plt.axis([-3, 3, 0, 10])
save_fig("quadratic_predictions_plot")
plt.show()
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)):
polybig_features = PolynomialFeatures(degree=degree, include_bias=False)
std_scaler = StandardScaler()
lin_reg = LinearRegression()
polynomial_regression = Pipeline([
("poly_features", polybig_features),
("std_scaler", std_scaler),
("lin_reg", lin_reg),
])
polynomial_regression.fit(X, y)
y_newbig = polynomial_regression.predict(X_new)
plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width)
plt.plot(X, y, "b.", linewidth=3)
plt.legend(loc="upper left")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([-3, 3, 0, 10])
save_fig("high_degree_polynomials_plot")
plt.show()
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
def plot_learning_curves(model, X, y):
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10)
train_errors, val_errors = [], []
for m in range(1, len(X_train)):
model.fit(X_train[:m], y_train[:m])
y_train_predict = model.predict(X_train[:m])
y_val_predict = model.predict(X_val)
train_errors.append(mean_squared_error(y_train_predict, y_train[:m]))
val_errors.append(mean_squared_error(y_val_predict, y_val))
plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train")
plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val")
plt.legend(loc="upper right", fontsize=14) # not shown in the book
plt.xlabel("Training set size", fontsize=14) # not shown
plt.ylabel("RMSE", fontsize=14) # not shown
lin_reg = LinearRegression()
plot_learning_curves(lin_reg, X, y)
plt.axis([0, 80, 0, 3]) # not shown in the book
save_fig("underfitting_learning_curves_plot") # not shown
plt.show() # not shown
from sklearn.pipeline import Pipeline
polynomial_regression = Pipeline([
("poly_features", PolynomialFeatures(degree=10, include_bias=False)),
("lin_reg", LinearRegression()),
])
plot_learning_curves(polynomial_regression, X, y)
plt.axis([0, 80, 0, 3]) # not shown
save_fig("learning_curves_plot") # not shown
plt.show() # not shown
|
HandsOnML/code/04_training_linear_models.ipynb
|
atulsingh0/MachineLearning
|
gpl-3.0
|
Regularized models
|
from sklearn.linear_model import Ridge
np.random.seed(42)
m = 20
X = 3 * np.random.rand(m, 1)
y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5
X_new = np.linspace(0, 3, 100).reshape(100, 1)
def plot_model(model_class, polynomial, alphas, **model_kargs):
for alpha, style in zip(alphas, ("b-", "g--", "r:")):
model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression()
if polynomial:
model = Pipeline([
("poly_features", PolynomialFeatures(degree=10, include_bias=False)),
("std_scaler", StandardScaler()),
("regul_reg", model),
])
model.fit(X, y)
y_new_regul = model.predict(X_new)
lw = 2 if alpha > 0 else 1
plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha))
plt.plot(X, y, "b.", linewidth=3)
plt.legend(loc="upper left", fontsize=15)
plt.xlabel("$x_1$", fontsize=18)
plt.axis([0, 3, 0, 4])
plt.figure(figsize=(8,4))
plt.subplot(121)
plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.subplot(122)
plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42)
save_fig("ridge_regression_plot")
plt.show()
from sklearn.linear_model import Ridge
ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42)
ridge_reg.fit(X, y)
ridge_reg.predict([[1.5]])
from sklearn.linear_model import SGDRegressor
sgd_reg = SGDRegressor(penalty="l2", random_state=42)
sgd_reg.fit(X, y.ravel())
sgd_reg.predict([[1.5]])
ridge_reg = Ridge(alpha=1, solver="sag", random_state=42)
ridge_reg.fit(X, y)
ridge_reg.predict([[1.5]])
from sklearn.linear_model import Lasso
plt.figure(figsize=(8,4))
plt.subplot(121)
plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.subplot(122)
plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42)
save_fig("lasso_regression_plot")
plt.show()
from sklearn.linear_model import Lasso
lasso_reg = Lasso(alpha=0.1)
lasso_reg.fit(X, y)
lasso_reg.predict([[1.5]])
from sklearn.linear_model import ElasticNet
elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42)
elastic_net.fit(X, y)
elastic_net.predict([[1.5]])
np.random.seed(42)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1)
X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10)
poly_scaler = Pipeline([
("poly_features", PolynomialFeatures(degree=90, include_bias=False)),
("std_scaler", StandardScaler()),
])
X_train_poly_scaled = poly_scaler.fit_transform(X_train)
X_val_poly_scaled = poly_scaler.transform(X_val)
sgd_reg = SGDRegressor(max_iter=1, tol=None,  # one epoch per fit(); 'n_iter' in older scikit-learn
penalty=None,
eta0=0.0005,
warm_start=True,
learning_rate="constant",
random_state=42)
n_epochs = 500
train_errors, val_errors = [], []
for epoch in range(n_epochs):
sgd_reg.fit(X_train_poly_scaled, y_train)
y_train_predict = sgd_reg.predict(X_train_poly_scaled)
y_val_predict = sgd_reg.predict(X_val_poly_scaled)
train_errors.append(mean_squared_error(y_train_predict, y_train))
val_errors.append(mean_squared_error(y_val_predict, y_val))
best_epoch = np.argmin(val_errors)
best_val_rmse = np.sqrt(val_errors[best_epoch])
plt.annotate('Best model',
xy=(best_epoch, best_val_rmse),
xytext=(best_epoch, best_val_rmse + 1),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.05),
fontsize=16,
)
best_val_rmse -= 0.03 # just to make the graph look better
plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2)
plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set")
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set")
plt.legend(loc="upper right", fontsize=14)
plt.xlabel("Epoch", fontsize=14)
plt.ylabel("RMSE", fontsize=14)
save_fig("early_stopping_plot")
plt.show()
from sklearn.base import clone
sgd_reg = SGDRegressor(max_iter=1, tol=None, warm_start=True, penalty=None,
                       learning_rate="constant", eta0=0.0005, random_state=42)
minimum_val_error = float("inf")
best_epoch = None
best_model = None
for epoch in range(1000):
sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off
y_val_predict = sgd_reg.predict(X_val_poly_scaled)
val_error = mean_squared_error(y_val_predict, y_val)
if val_error < minimum_val_error:
minimum_val_error = val_error
best_epoch = epoch
best_model = clone(sgd_reg)
best_epoch, best_model
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5
# ignoring bias term
t1s = np.linspace(t1a, t1b, 500)
t2s = np.linspace(t2a, t2b, 500)
t1, t2 = np.meshgrid(t1s, t2s)
T = np.c_[t1.ravel(), t2.ravel()]
Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]])
yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:]
J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape)
N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape)
N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape)
t_min_idx = np.unravel_index(np.argmin(J), J.shape)
t1_min, t2_min = t1[t_min_idx], t2[t_min_idx]
t_init = np.array([[0.25], [-1]])
def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50):
path = [theta]
for iteration in range(n_iterations):
gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta
theta = theta - eta * gradients
path.append(theta)
return np.array(path)
plt.figure(figsize=(12, 8))
for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")):
JR = J + l1 * N1 + l2 * N2**2
tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape)
t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx]
levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J)
levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR)
levelsN=np.linspace(0, np.max(N), 10)
path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0)
path_JR = bgd_path(t_init, Xr, yr, l1, l2)
path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0)
plt.subplot(221 + i * 2)
plt.grid(True)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9)
plt.contour(t1, t2, N, levels=levelsN)
plt.plot(path_J[:, 0], path_J[:, 1], "w-o")
plt.plot(path_N[:, 0], path_N[:, 1], "y-^")
plt.plot(t1_min, t2_min, "rs")
plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16)
plt.axis([t1a, t1b, t2a, t2b])
plt.subplot(222 + i * 2)
plt.grid(True)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9)
plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o")
plt.plot(t1r_min, t2r_min, "rs")
plt.title(title, fontsize=16)
plt.axis([t1a, t1b, t2a, t2b])
for subplot in (221, 223):
plt.subplot(subplot)
plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0)
for subplot in (223, 224):
plt.subplot(subplot)
plt.xlabel(r"$\theta_1$", fontsize=20)
save_fig("lasso_vs_ridge_plot")
plt.show()
|
HandsOnML/code/04_training_linear_models.ipynb
|
atulsingh0/MachineLearning
|
gpl-3.0
|
Feeding Data from Python Code:
|
# Create python list constants:
constantX = [ 1.0, 2.0, 3.0 ]
constantY = [ 10.0, 20.0, 30.0 ]
# Create addition operation (for constants):
addConstants = tf.add( constantX, constantY )
# Create session:
with tf.Session() as sess:
# Run session on constants and print output:
    print( sess.run( addConstants ) )
# Create placeholders:
placeholderX = tf.placeholder( tf.float32 )
placeholderY = tf.placeholder( tf.float32 )
# Create addition operation (for placeholders):
addPlaceholders = tf.add( placeholderX, placeholderY )
# Create session:
with tf.Session() as sess:
# Run session on placeholders and print output:
    print( sess.run( addPlaceholders, feed_dict={ placeholderX: constantX, placeholderY: constantY } ) )
|
tensorflow_tutorials/Tutorial_03_LoadingData.ipynb
|
Hebali/learning_machines
|
mit
|
Loading Data from File:
|
# Define file-reader function:
def read_file(filepath):
file_queue = tf.train.string_input_producer( [ filepath ] )
file_reader = tf.WholeFileReader()
_, contents = file_reader.read( file_queue )
return contents
# Create PNG image loader operation:
load_op = tf.image.decode_png( read_file( 'data/tf.png' ) )
# Create JPEG image loader operation:
# load_op = tf.image.decode_jpeg( read_file( 'data/myimage.jpg' ) )
# Create session:
with tf.Session() as sess:
# Initialize global variables:
sess.run( tf.global_variables_initializer() )
# Start queue coordinator:
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners( coord=coord )
# Run session on image loader op:
image = sess.run( load_op )
# Terminate queue coordinator:
coord.request_stop()
coord.join( threads )
# Show image:
imshow( np.asarray( image ) )
|
tensorflow_tutorials/Tutorial_03_LoadingData.ipynb
|
Hebali/learning_machines
|
mit
|
This works and gives the correct product even though $v$ is a 1-D array rather than a true column vector:
|
A = np.array([[1, 2], [3, 4]])
v = np.array([5, 6])
A.dot(v)
|
Foundations/Math/linear-algebra_exercise.ipynb
|
aleph314/K2
|
gpl-3.0
|
This also works because NumPy treats the 1-D array $v$ as a row vector on the left, so we have $1\times2$ times $2\times2$ and all is good:
|
v.dot(A)
|
Foundations/Math/linear-algebra_exercise.ipynb
|
aleph314/K2
|
gpl-3.0
|
2 - Using $\vec{v}$ above, compute the inner, or dot, product, $\vec{v} \cdot \vec{v}$. Is this quantity reminiscent of another vector quantity?
|
v.dot(v)
|
Foundations/Math/linear-algebra_exercise.ipynb
|
aleph314/K2
|
gpl-3.0
|
This quantity is the same as the norm squared:
|
np.linalg.norm(v)**2
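As a quick numerical check (using the same `v` as above), the inner product of a vector with itself matches its squared Euclidean norm to floating-point precision:

```python
import numpy as np

v = np.array([5, 6])
# v·v = 5*5 + 6*6 = 61, the squared L2 norm of v.
assert np.isclose(v.dot(v), np.linalg.norm(v) ** 2)
print(v.dot(v))  # 61
```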
|
Foundations/Math/linear-algebra_exercise.ipynb
|
aleph314/K2
|
gpl-3.0
|
3 - Create 3 matrices $\textbf{A}$, $\textbf{B}$, $\textbf{C}$ of dimension $2\times2$, $3\times2$, and $2\times3$ respectively such that $$\textbf{A} = \begin{bmatrix} 1 & 2 \ 3 & 4 \end{bmatrix} \textbf{B} = \begin{bmatrix} 1 & 2 \ 3 & 4 \ 5 & 6\end{bmatrix} \textbf{C} = \begin{bmatrix} 1 & 2 & 3\ 4 & 5 & 6 \end{bmatrix}$$ and perform the following multiplications, stating the final dimensions of each: $\textbf{AA}$, $\textbf{AB}$, $\textbf{AC}$, $\textbf{BB}$, $\textbf{BA}$, $\textbf{BC}$, $\textbf{CC}$, $\textbf{CA}$, $\textbf{CB}$. Comment on your results.
|
A = np.arange(1, 5).reshape(2, 2)
B = np.arange(1, 7).reshape(3, 2)
C = np.arange(1, 7).reshape(2, 3)
|
Foundations/Math/linear-algebra_exercise.ipynb
|
aleph314/K2
|
gpl-3.0
|
We can compute the product only if the number of columns of the first matrix is the same as the number of rows of the second:
|
for tab in [(x, y) for x in (A, B, C) for y in (A, B, C)]:
try:
if tab[0] is A:
left = 'A'
elif tab[0] is B:
left = 'B'
else:
left = 'C'
if tab[1] is A:
right = 'A'
elif tab[1] is B:
right = 'B'
else:
right = 'C'
print('{}{} =\n{}'.format(left, right, tab[0].dot(tab[1])))
except ValueError as e:
print('{}{} returns the error: {}'.format(left, right, e))
|
Foundations/Math/linear-algebra_exercise.ipynb
|
aleph314/K2
|
gpl-3.0
|
4 - Using $\textbf{A}$ and $\textbf{B}$ above, compute $(\textbf{BA})^T$ and $\textbf{A}^T \textbf{B}^T$. What can you say about your results?
The result is the same:
|
print((B.dot(A)).T)
print(A.T.dot(B.T))
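An element-wise check (with the `A` and `B` defined earlier) confirms the transpose identity $(\textbf{BA})^T = \textbf{A}^T\textbf{B}^T$:

```python
import numpy as np

A = np.arange(1, 5).reshape(2, 2)
B = np.arange(1, 7).reshape(3, 2)
# The two 2x3 results agree entry by entry.
assert np.array_equal((B.dot(A)).T, A.T.dot(B.T))
```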
|
Foundations/Math/linear-algebra_exercise.ipynb
|
aleph314/K2
|
gpl-3.0
|
5 - Using $\textbf{A}$, $\textbf{B}$, and $\textbf{C}$ above, compute the following sums: $\textbf{A+A}$, $\textbf{A+B}$, $\textbf{A+C}$, $\textbf{B+B}$, $\textbf{B+A}$, $\textbf{B+C}$, $\textbf{C+C}$, $\textbf{C+A}$, $\textbf{C+B}$. Comment on your results.
We can only add each matrix to itself: addition requires identical dimensions, and none of these shape pairs are broadcast-compatible in NumPy:
|
for tab in [(x, y) for x in (A, B, C) for y in (A, B, C)]:
try:
if tab[0] is A:
left = 'A'
elif tab[0] is B:
left = 'B'
else:
left = 'C'
if tab[1] is A:
right = 'A'
elif tab[1] is B:
right = 'B'
else:
right = 'C'
print('{}+{} =\n{}'.format(left, right, tab[0] + tab[1]))
except ValueError as e:
print('{}+{} returns the error: {}'.format(left, right, e))
|
Foundations/Math/linear-algebra_exercise.ipynb
|
aleph314/K2
|
gpl-3.0
|
6 - Construct three matrices $\textbf{I}_A$, $\textbf{I}_B$, and $\textbf{I}_C$ such that $\textbf{I}_A\textbf{A} = \textbf{A}$, $\textbf{I}_B\textbf{B} = \textbf{B}$, and $\textbf{I}_C\textbf{C} = \textbf{C}$.
|
Ia = np.eye(2)
Ib = np.eye(3)
Ic = np.eye(2)
print(A)
print(Ia.dot(A))
print(B)
print(Ib.dot(B))
print(C)
print(Ic.dot(C))
|
Foundations/Math/linear-algebra_exercise.ipynb
|
aleph314/K2
|
gpl-3.0
|
7 - Construct three matrices $\textbf{A}^{-1}$, $\textbf{B}^{-1}$, and $\textbf{C}^{-1}$ such that $\textbf{A}^{-1}\textbf{A} = \textbf{I}_A$, $\textbf{B}^{-1}\textbf{B} = \textbf{I}_B$, and $\textbf{C}^{-1}\textbf{C} = \textbf{I}_C$. Comment on your results. Hint This may not always be possible!
|
Ainv = np.linalg.inv(A)
print(A)
print(Ainv.dot(A))
print(B)
print('Not possible because B is 3x2 and Ib is 3x3')
print(C)
print('Not possible because C is 2x3 and Ic is 2x2')
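For the non-square matrices a true inverse does not exist, but the Moore–Penrose pseudoinverse is the closest analogue (a sketch going beyond the original exercise): because $\textbf{B}$ has full column rank, `np.linalg.pinv(B)` acts as a left inverse, while the product in the other order is only a projection.

```python
import numpy as np

B = np.arange(1, 7).reshape(3, 2)
B_pinv = np.linalg.pinv(B)  # 2x3 pseudoinverse
# B has full column rank, so pinv(B) @ B recovers the 2x2 identity.
assert np.allclose(B_pinv.dot(B), np.eye(2))
# B @ pinv(B) is a 3x3 projection onto the column space of B, not the identity.
assert not np.allclose(B.dot(B_pinv), np.eye(3))
```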
|
Foundations/Math/linear-algebra_exercise.ipynb
|
aleph314/K2
|
gpl-3.0
|