kgrodzicki/machine-learning-specialization | course-3-classification/module-3-linear-classifier-learning-assignment-blank.ipynb | mit

import graphlab
"""
Explanation: Implementing logistic regression from scratch
The goal of this notebook is to implement your own logistic regression classifier. You will:
Extract features from Amazon product reviews.
Convert an SFrame into a NumPy array.
Implement the link function for logistic regression.
Write a function to compute the derivative of the log likelihood function with respect to a single coefficient.
Implement gradient ascent.
Given a set of coefficients, predict sentiments.
Compute classification accuracy for the logistic regression model.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create. Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
"""
products = graphlab.SFrame('amazon_baby_subset.gl/')
"""
Explanation: Load review dataset
For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews.
End of explanation
"""
products['sentiment']
"""
Explanation: One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
End of explanation
"""
products.head(10)['name']
nr_of_positive_reviews = len(products[products['sentiment']==1])
nr_of_negative_reviews = len(products[products['sentiment']==-1])
print '# of positive reviews =', nr_of_positive_reviews
print '# of negative reviews =', nr_of_negative_reviews
"""
Explanation: Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
End of explanation
"""
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
print important_words
"""
Explanation: Note: For this assignment, we eliminated class imbalance by choosing
a subset of the data with a similar number of positive and negative reviews.
Apply text cleaning on the review data
In this section, we will perform some simple feature cleaning using SFrames. The last assignment used all words in building bag-of-words features, but here we limit ourselves to 193 words (for simplicity). We compiled a list of the 193 most frequent words into a JSON file.
Now, we will load these words from this JSON file:
End of explanation
"""
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
"""
Explanation: Now, we will perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Compute word counts (only for important_words)
We start with Step 1 which can be done as follows:
End of explanation
"""
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
"""
Explanation: Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Note: There are several ways of doing this. In this assignment, we use the built-in count function for Python lists. Each review string is first split into individual words, and the number of occurrences of a given word is counted.
End of explanation
"""
products['perfect']
"""
Explanation: The SFrame products now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.
End of explanation
"""
products['contains_perfect'] = products['perfect'] >= 1
products['contains_perfect']
"""
Explanation: Now, write some code to compute the number of product reviews that contain the word perfect.
Hint:
* First create a column called contains_perfect which is set to 1 if the count of the word perfect (stored in column perfect) is >= 1.
* Sum the number of 1s in the column contains_perfect.
End of explanation
"""
print len(products[products['contains_perfect']==1])
import numpy as np
"""
Explanation: Convert SFrame to NumPy array
As you have seen previously, NumPy is a powerful library for doing matrix manipulation. Let us convert our data to matrices and then implement our algorithms with matrices.
First, make sure you can perform the following import.
Quiz Question. How many reviews contain the word perfect?
End of explanation
"""
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
"""
Explanation: We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. Note that the feature matrix includes an additional column 'intercept' to take account of the intercept term.
End of explanation
"""
# Warning: This may take a few minutes...
# feature_matrix, sentiment = get_numpy_data(products, important_words, 'sentiment')
"""
Explanation: Let us convert the data into NumPy arrays.
End of explanation
"""
arrays = np.load('module-3-assignment-numpy-arrays.npz')
feature_matrix, sentiment = arrays['feature_matrix'], arrays['sentiment']
feature_matrix.shape
"""
Explanation: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in an acceptable amount of time. In the interest of time, please refrain from running the get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:
arrays = np.load('module-3-assignment-numpy-arrays.npz')
feature_matrix, sentiment = arrays['feature_matrix'], arrays['sentiment']
End of explanation
"""
sentiment
"""
Explanation: Quiz Question: How many features are there in the feature_matrix?
Quiz Question: Assuming that the intercept is present, how does the number of features in feature_matrix relate to the number of features in the logistic regression model?
Now, let us see what the sentiment column looks like:
End of explanation
"""
'''
Produces a probabilistic estimate for P(y_i = +1 | x_i, w).
The estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
# YOUR CODE HERE
score = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
# YOUR CODE HERE
predictions = 1/ (1 + np.exp(-score))
# return predictions
return predictions
"""
Explanation: Estimating conditional probability with link function
Recall from lecture that the link function is given by:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ represents the word counts of important_words in the review $\mathbf{x}_i$. Complete the following function that implements the link function:
End of explanation
"""
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_predictions = np.array( [ 1./(1+np.exp(-correct_scores[0])), 1./(1+np.exp(-correct_scores[1])) ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_predictions =', correct_predictions
print 'output of predict_probability =', predict_probability(dummy_feature_matrix, dummy_coefficients)
"""
Explanation: Aside. How the link function works with matrix algebra
Since the word counts are stored as columns in feature_matrix, each $i$-th row of the matrix corresponds to the feature vector $h(\mathbf{x}_i)$:
$$
[\text{feature_matrix}] =
\left[
\begin{array}{c}
h(\mathbf{x}_1)^T \\
h(\mathbf{x}_2)^T \\
\vdots \\
h(\mathbf{x}_N)^T
\end{array}
\right] =
\left[
\begin{array}{cccc}
h_0(\mathbf{x}_1) & h_1(\mathbf{x}_1) & \cdots & h_D(\mathbf{x}_1) \\
h_0(\mathbf{x}_2) & h_1(\mathbf{x}_2) & \cdots & h_D(\mathbf{x}_2) \\
\vdots & \vdots & \ddots & \vdots \\
h_0(\mathbf{x}_N) & h_1(\mathbf{x}_N) & \cdots & h_D(\mathbf{x}_N)
\end{array}
\right]
$$
By the rules of matrix multiplication, the score vector containing elements $\mathbf{w}^T h(\mathbf{x}_i)$ is obtained by multiplying feature_matrix and the coefficient vector $\mathbf{w}$.
$$
[\text{score}] =
[\text{feature_matrix}]\mathbf{w} =
\left[
\begin{array}{c}
h(\mathbf{x}_1)^T \\
h(\mathbf{x}_2)^T \\
\vdots \\
h(\mathbf{x}_N)^T
\end{array}
\right]
\mathbf{w}
= \left[
\begin{array}{c}
h(\mathbf{x}_1)^T\mathbf{w} \\
h(\mathbf{x}_2)^T\mathbf{w} \\
\vdots \\
h(\mathbf{x}_N)^T\mathbf{w}
\end{array}
\right]
= \left[
\begin{array}{c}
\mathbf{w}^T h(\mathbf{x}_1) \\
\mathbf{w}^T h(\mathbf{x}_2) \\
\vdots \\
\mathbf{w}^T h(\mathbf{x}_N)
\end{array}
\right]
$$
Checkpoint
Just to make sure you are on the right track, we have provided a few examples. If your predict_probability function is implemented correctly, then the outputs will match:
End of explanation
"""
def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
derivative = np.dot(errors, feature)
# Return the derivative
return derivative
"""
Explanation: Compute derivative of log likelihood with respect to a single coefficient
Recall from lecture:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
We will now write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts two arguments:
* errors vector containing $\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})$ for all $i$.
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$.
Complete the following code block:
End of explanation
"""
def compute_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)
return lp
"""
Explanation: In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood instead of the likelihood to assess the algorithm.
The log likelihood is computed using the following formula (see the advanced optional video if you are curious about the derivation of this equation):
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
We provide a function to compute the log likelihood for the entire dataset.
End of explanation
"""
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
dummy_sentiment = np.array([-1, 1])
correct_indicators = np.array( [ -1==+1, 1==+1 ] )
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_first_term = np.array( [ (correct_indicators[0]-1)*correct_scores[0], (correct_indicators[1]-1)*correct_scores[1] ] )
correct_second_term = np.array( [ np.log(1. + np.exp(-correct_scores[0])), np.log(1. + np.exp(-correct_scores[1])) ] )
correct_ll = sum( [ correct_first_term[0]-correct_second_term[0], correct_first_term[1]-correct_second_term[1] ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_log_likelihood =', correct_ll
print 'output of compute_log_likelihood =', compute_log_likelihood(dummy_feature_matrix, dummy_sentiment, dummy_coefficients)
"""
Explanation: Checkpoint
Just to make sure we are on the same page, run the following code block and check that the outputs match.
End of explanation
"""
from math import sqrt
def logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
# YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment == +1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
# YOUR CODE HERE
derivative = feature_derivative(errors, feature_matrix[:,j])
# add the step size times the derivative to the current coefficient
## YOUR CODE HERE
coefficients[j] += step_size * derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood(feature_matrix, sentiment, coefficients)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
"""
Explanation: Taking gradient steps
Now we are ready to implement our own logistic regression. All we have to do is to write a gradient ascent function that takes gradient steps towards the optimum.
Complete the following function to solve the logistic regression model using gradient ascent:
End of explanation
"""
coefficients = logistic_regression(feature_matrix, sentiment, initial_coefficients=np.zeros(194),
step_size=1e-7, max_iter=301)
"""
Explanation: Now, let us run the logistic regression solver.
End of explanation
"""
# Compute the scores as a dot product between feature_matrix and coefficients.
scores = np.dot(feature_matrix, coefficients)
scores
"""
Explanation: Quiz question: As each iteration of gradient ascent passes, does the log likelihood increase or decrease?
Predicting sentiments
Recall from lecture that class predictions for a data point $\mathbf{x}$ can be computed from the coefficients $\mathbf{w}$ using the following formula:
$$
\hat{y}_i =
\left\{
\begin{array}{ll}
+1 & \mathbf{x}_i^T\mathbf{w} > 0 \\
-1 & \mathbf{x}_i^T\mathbf{w} \leq 0
\end{array}
\right.
$$
Now, we will write some code to compute class predictions. We will do this in two steps:
* Step 1: First compute the scores using feature_matrix and coefficients using a dot product.
* Step 2: Using the formula above, compute the class predictions from the scores.
Step 1 can be implemented as follows:
End of explanation
"""
class_prediction = (scores > 0).astype(int)  # 1 if predicted positive, 0 otherwise; easier to count later on
class_prediction
"""
Explanation: Now, complete the following code block for Step 2 to compute the class predictions using the scores obtained above:
End of explanation
"""
nr_of_positive = np.count_nonzero(class_prediction)
nr_of_positive
"""
Explanation: Quiz question: How many reviews were predicted to have positive sentiment?
End of explanation
"""
num_mistakes = np.sum(class_prediction != (sentiment == +1))  # YOUR CODE HERE: count predictions that disagree with the true labels
reviews_correctly_classified = len(products) - num_mistakes
accuracy = reviews_correctly_classified / (1.0 * len(products)) # YOUR CODE HERE
print "-----------------------------------------------------"
print '# Reviews correctly classified =', len(products) - num_mistakes
print '# Reviews incorrectly classified =', num_mistakes
print '# Reviews total =', len(products)
print "-----------------------------------------------------"
print 'Accuracy = %.2f' % accuracy
"""
Explanation: Measuring accuracy
We will now measure the classification accuracy of the model. Recall from the lecture that the classification accuracy can be computed as follows:
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
Complete the following code block to compute the accuracy of the model.
End of explanation
"""
coefficients = list(coefficients[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
"""
Explanation: Quiz question: What is the accuracy of the model on predictions made above? (round to 2 digits of accuracy)
Which words contribute most to positive & negative sentiments?
Recall that in Module 2 assignment, we were able to compute the "most positive words". These are words that correspond most strongly with positive reviews. In order to do this, we will first do the following:
* Treat each coefficient as a tuple, i.e. (word, coefficient_value).
* Sort all the (word, coefficient_value) tuples by coefficient_value in descending order.
End of explanation
"""
word_coefficient_tuples[:10]
"""
Explanation: Now, word_coefficient_tuples contains a sorted list of (word, coefficient_value) tuples. The first 10 elements in this list correspond to the words that are most positive.
Ten "most positive" words
Now, we compute the 10 words that have the most positive coefficient values. These words are associated with positive sentiment.
End of explanation
"""
word_coefficient_tuples[-10:]
"""
Explanation: Quiz question: Which word is not present in the top 10 "most positive" words?
Ten "most negative" words
Next, we repeat this exercise on the 10 most negative words. That is, we compute the 10 words that have the most negative coefficient values. These words are associated with negative sentiment.
End of explanation
"""
ajgpitch/qutip-notebooks | development/development-ssesolve-tests.ipynb | lgpl-3.0

%matplotlib inline
import matplotlib.pyplot as plt
from qutip import *
import numpy as np
"""
Explanation: Development notebook: Tests for QuTiP's stochastic Schrödinger equation solver
Copyright (C) 2011 and later, Paul D. Nation & Robert J. Johansson
In this notebook we test the qutip stochastic Schrödinger equation solver (ssesolve) with a few examples. The same examples are solved using the stochastic master equation solver in the notebook development-smesolve-tests.
End of explanation
"""
N = 10
w0 = 0.5 * 2 * np.pi
times = np.linspace(0, 15, 150)
dt = times[1] - times[0]
gamma = 0.25
A = 2.5
ntraj = 50
nsubsteps = 100
a = destroy(N)
x = a + a.dag()
H = w0 * a.dag() * a
psi0 = fock(N, 5)
sc_ops = [np.sqrt(gamma) * a]
e_ops = [a.dag() * a, a + a.dag(), (-1j)*(a - a.dag())]
result_ref = mesolve(H, psi0, times, sc_ops, e_ops)
plot_expectation_values(result_ref);
"""
Explanation: Photo-count detection
Theory
Stochastic Schrödinger equation for photocurrent detection, in Milburn's formulation, is
$$
d|\psi(t)\rangle = -iH|\psi(t)\rangle dt + \left(\frac{a}{\sqrt{\langle a^\dagger a \rangle_\psi(t)}} - 1\right) |\psi(t)\rangle dN(t) + \frac{1}{2}\gamma\left(\langle a^\dagger a \rangle_\psi(t) - a^\dagger a \right) |\psi(t)\rangle dt
$$
and $dN(t)$ is a Poisson-distributed increment with $E[dN(t)] = \gamma \langle a^\dagger a\rangle_\psi(t)\,dt$.
Formulation in QuTiP
In QuTiP we write the stochastic Schrödinger equation on the form:
$$
d|\psi(t)\rangle = -iH|\psi(t)\rangle dt + D_{1}[c] |\psi(t)\rangle dt + D_{2}[c] |\psi(t)\rangle dW
$$
where $c = \sqrt{\gamma} a$ is the collapse operator including the rate of the process as a coefficient in the operator. We can identify
$$
D_{1}[c] |\psi(t)\rangle = \frac{1}{2}\left(\langle c^\dagger c \rangle_\psi(t) - c^\dagger c \right)|\psi(t)\rangle
$$
$$
D_{2}[c] |\psi(t)\rangle = \frac{c}{\sqrt{\langle c^\dagger c \rangle_\psi(t)}} - 1
$$
$$dW = dN(t)$$
Reference solution: deterministic master equation
End of explanation
"""
result = photocurrent_sesolve(H, psi0, times, sc_ops, e_ops,
ntraj=ntraj*5, nsubsteps=nsubsteps, store_measurement=True, normalize=0)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.step(times, dt * m.real)
"""
Explanation: Solve using stochastic master equation
$\displaystyle D_{1}[a, |\psi\rangle] =
\frac{1}{2} ( \langle \psi|a^\dagger a |\psi\rangle - a^\dagger a )|\psi\rangle
$
$\displaystyle D_{2}[a, |\psi\rangle] =
\frac{a|\psi\rangle}{\sqrt{\langle \psi| a^\dagger a|\psi\rangle}} - |\psi\rangle
$
Using QuTiP built-in photo-current detection functions for $D_1$ and $D_2$
End of explanation
"""
ntraj = 50
nsubsteps = 200
H = w0 * a.dag() * a + A * (a + a.dag())
result_ref = mesolve(H, psi0, times, sc_ops, e_ops)
"""
Explanation: Homodyne detection
End of explanation
"""
op = sc_ops[0]
opp = (op + op.dag()).data
opn = (op.dag()*op).data
op = op.data
Hd = H.data * -1j
def d1_psi_func(t, psi):
e_x = cy.cy_expect_psi(opp, psi, 0)
return cy.spmv(Hd, psi) + 0.5 * (e_x * cy.spmv(op, psi) - cy.spmv(opn, psi) - 0.25 * e_x ** 2 * psi)
"""
Explanation: Theory
Stochastic master equation for homodyne can be written as (see Gardiner and Zoller, Quantum Noise)
$$
d|\psi(t)\rangle = -iH|\psi(t)\rangle dt +
\frac{1}{2}\gamma \left(
\langle a + a^\dagger \rangle_\psi a - a^\dagger a - \frac{1}{4}\langle a + a^\dagger \rangle_\psi^2 \right)|\psi(t)\rangle dt
+
\sqrt{\gamma}\left(
a - \frac{1}{2} \langle a + a^\dagger \rangle_\psi
\right)|\psi(t)\rangle dW
$$
where $dW(t)$ is a normally distributed (Wiener) increment with $E[dW(t)] = 0$ and $E[dW(t)^2] = dt$.
In QuTiP format we have:
$$
d|\psi(t)\rangle = -iH|\psi(t)\rangle dt + D_{1}[c, |\psi(t)\rangle] dt + D_{2}[c, |\psi(t)\rangle] dW
$$
where $c = \sqrt{\gamma} a$, so we can identify
$$
D_{1}[c, |\psi\rangle] =
\frac{1}{2}\left(
\langle c + c^\dagger \rangle_\psi c - c^\dagger c - \frac{1}{4}\langle c + c^\dagger \rangle_\psi^2
\right) |\psi(t)\rangle
$$
End of explanation
"""
def d2_psi_func(t, psi):
out = np.zeros((1,len(psi)), dtype=complex)
e_x = cy.cy_expect_psi(opp, psi, 0)
out[0,:] = cy.spmv(op,psi)
out -= 0.5 * e_x * psi
return out
result = general_stochastic(psi0, times, d1=d1_psi_func, d2=d2_psi_func,
e_ops=e_ops, ntraj=ntraj,
m_ops=[a + a.dag()], dW_factors=[1/np.sqrt(gamma)],
nsubsteps=nsubsteps, store_measurement=True)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.plot(times, m[:, 0].real, 'b', alpha=0.1)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real, 'r', lw=2)
plt.plot(times, result_ref.expect[1], 'k', lw=2)
plt.ylim(-15, 15);
"""
Explanation: $$
D_{2}[c,|\psi\rangle] =
\left(c - \frac{1}{2} \langle c + c^\dagger \rangle_\psi \right)|\psi(t)\rangle
$$
End of explanation
"""
result = general_stochastic(psi0, times, d1=d1_psi_func, d2=d2_psi_func,
e_ops=e_ops, ntraj=ntraj, noise=result.noise,
m_ops=[a + a.dag()], dW_factors=[1/np.sqrt(gamma)],
nsubsteps=nsubsteps, store_measurement=True)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.plot(times, m[:, 0].real, 'b', alpha=0.1)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real, 'r', lw=2)
plt.plot(times, result_ref.expect[1], 'k', lw=2)
plt.ylim(-15, 15);
"""
Explanation: Solve problem again, this time with a specified noise (from previous run)
End of explanation
"""
result = ssesolve(H, psi0, times, sc_ops, e_ops, ntraj=ntraj, nsubsteps=nsubsteps,
method='homodyne', store_measurement=True, dW_factors=[1])
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.plot(times, m[:, 0].real, 'b', alpha=0.1)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real/np.sqrt(gamma), 'r', lw=2)
plt.plot(times, result_ref.expect[1], 'k', lw=2)
plt.ylim(-15, 15);
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real/np.sqrt(gamma), 'r', lw=2)
plt.plot(times, result_ref.expect[1], 'k', lw=2)
plt.plot(times, result.expect[1], 'b', lw=2)
"""
Explanation: Using QuTiP built-in homodyne detection functions for $D_1$ and $D_2$
End of explanation
"""
result = ssesolve(H, psi0, times, sc_ops, e_ops, ntraj=ntraj, nsubsteps=nsubsteps,
method='homodyne', store_measurement=True, noise=result.noise)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.plot(times, m[:, 0].real, 'b', alpha=0.1)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real/np.sqrt(gamma), 'r', lw=2)
plt.plot(times, result_ref.expect[1], 'k', lw=2)
plt.ylim(-15, 15);
"""
Explanation: Solve problem again, this time with a specified noise (from previous run)
End of explanation
"""
op = sc_ops[0]
opd = (op.dag()).data
opp = (op + op.dag()).data
opm = (op - op.dag()).data  # second quadrature uses <a - a^dagger>
opn = (op.dag()*op).data
op = op.data
Hd = H.data * -1j
def d1_psi_func(t, psi):
e_xd = cy.cy_expect_psi(opd, psi, 0)
e_x = cy.cy_expect_psi(op, psi, 0)
return cy.spmv(Hd, psi) - 0.5 * (cy.spmv(opn, psi) - e_xd * cy.spmv(op, psi) + 0.5 * e_x * e_xd * psi)
"""
Explanation: Heterodyne detection
Stochastic Schrödinger equation for heterodyne detection can be written as
$$
d|\psi(t)\rangle = -iH|\psi(t)\rangle dt
-\frac{1}{2}\gamma\left(a^\dagger a - \langle a^\dagger \rangle a + \frac{1}{2}\langle a \rangle\langle a^\dagger \rangle\right)|\psi(t)\rangle dt
+
\sqrt{\gamma/2}\left(
a - \frac{1}{2} \langle a + a^\dagger \rangle_\psi
\right)|\psi(t)\rangle dW_1
-i
\sqrt{\gamma/2}\left(
a - \frac{1}{2} \langle a - a^\dagger \rangle_\psi
\right)|\psi(t)\rangle dW_2
$$
where the $dW_i(t)$ are independent Wiener increments with $E[dW_i(t)] = 0$ and $E[dW_i(t)^2] = dt$.
In QuTiP format we have:
$$
d|\psi(t)\rangle = -iH|\psi(t)\rangle dt + D_{1}[c, |\psi(t)\rangle] dt + \sum_n D_{2}^{(n)}[c, |\psi(t)\rangle] dW_n
$$
where $c = \sqrt{\gamma} a$, so we can identify
$$
D_1[c, |\psi(t)\rangle] =
-\frac{1}{2}\left(c^\dagger c - \langle c^\dagger \rangle c + \frac{1}{2}\langle c \rangle\langle c^\dagger \rangle \right) |\psi(t)\rangle
$$
End of explanation
"""
sqrt2 = np.sqrt(0.5)
def d2_psi_func(t, psi):
out = np.zeros((2,len(psi)), dtype=complex)
e_p = cy.cy_expect_psi(opp, psi, 0)
e_m = cy.cy_expect_psi(opm, psi, 0)
out[0,:] = (cy.spmv(op,psi) - e_p * 0.5 * psi)*sqrt2
out[1,:] = (cy.spmv(op,psi) - e_m * 0.5 * psi)*sqrt2*-1j
return out
result = general_stochastic(psi0, times, d1=d1_psi_func, d2=d2_psi_func,
e_ops=e_ops, ntraj=ntraj, len_d2=2,
m_ops=[(a + a.dag()), (-1j)*(a - a.dag())], dW_factors=[2/np.sqrt(gamma), 2/np.sqrt(gamma)],
nsubsteps=nsubsteps, store_measurement=True)
plot_expectation_values([result, result_ref]);
#fig, ax = subplots()
for m in result.measurement:
plt.plot(times, m[:, 0].real, 'r', alpha=0.025)
plt.plot(times, m[:, 1].real, 'b', alpha=0.025)
plt.plot(times, result_ref.expect[1], 'k', lw=2);
plt.plot(times, result_ref.expect[2], 'k', lw=2);
plt.ylim(-10, 10)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real, 'r', lw=2);
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,1].real, 'b', lw=2);
"""
Explanation: $$D_{2}^{(1)}[c, |\psi(t)\rangle] = \sqrt{1/2} (c - \langle c + c^\dagger \rangle / 2) \psi$$
$$D_{2}^{(1)}[c, |\psi(t)\rangle] = -i\sqrt{1/2} (c - \langle c - c^\dagger \rangle /2) \psi$$
End of explanation
"""
result = ssesolve(H, psi0, times, sc_ops, e_ops, ntraj=ntraj, nsubsteps=nsubsteps,
method='heterodyne', store_measurement=True)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.plot(times, m[:, 0, 0].real, 'r', alpha=0.025)
plt.plot(times, m[:, 0, 1].real, 'b', alpha=0.025)
plt.plot(times, result_ref.expect[1], 'k', lw=2)
plt.plot(times, result_ref.expect[2], 'k', lw=2)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0,0].real/np.sqrt(gamma), 'r', lw=2)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0,1].real/np.sqrt(gamma), 'b', lw=2)
"""
Explanation: Using QuTiP built-in heterodyne detection functions for $D_1$ and $D_2$
End of explanation
"""
result = ssesolve(H, psi0, times, sc_ops, e_ops, ntraj=ntraj, nsubsteps=nsubsteps,
method='heterodyne', store_measurement=True, noise=result.noise)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.plot(times, m[:, 0, 0].real, 'r', alpha=0.025)
plt.plot(times, m[:, 0, 1].real, 'b', alpha=0.025)
plt.plot(times, result_ref.expect[1], 'k', lw=2);
plt.plot(times, result_ref.expect[2], 'k', lw=2);
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0,0].real/np.sqrt(gamma), 'r', lw=2);
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0,1].real/np.sqrt(gamma), 'b', lw=2);
"""
Explanation: Solve problem again, this time with a specified noise (from previous run)
End of explanation
"""
from qutip.ipynbtools import version_table
version_table()
"""
Explanation: Software version
End of explanation
"""
aba1476/ds-for-wall-street | ds-for-ws-solutions.ipynb | apache-2.0

%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import seaborn as sns
"""
Explanation: Loading and Cleaning the Data
Turn on inline matplotlib plotting and import plotting dependencies.
End of explanation
"""
import numpy as np
import pandas as pd
import tsanalysis.loaddata as ld
import tsanalysis.tsutil as tsutil
import sparkts.timeseriesrdd as tsrdd
import sparkts.datetimeindex as dtindex
from sklearn import linear_model
"""
Explanation: Import analytic depedencies. Doc code for spark-timeseries and source code for tsanalysis.
End of explanation
"""
wiki_obs = ld.load_wiki_df(sqlCtx, '/user/srowen/wiki.tsv')
ticker_obs = ld.load_ticker_df(sqlCtx, '/user/srowen/ticker.tsv')
"""
Explanation: Load wiki page view and stock price data into Spark Dataframes.
wiki_obs is a Spark dataframe of (timestamp, page, views) of types (Timestamp, String, Double). ticker_obs is a Spark dataframe of (timestamp, symbol, price) of types (Timestamp, String, Double).
End of explanation
"""
wiki_obs.head(5)
"""
Explanation: Display the first 5 elements of the wiki_obs RDD.
wiki_obs contains Row objects with the fields (timestamp, page, views).
End of explanation
"""
ticker_obs.head(5)
"""
Explanation: Display the first 5 elements of the tickers_obs RDD.
ticker_obs contains Row objects with the fields (timestamp, symbol, views).
End of explanation
"""
def print_ticker_info(ticker):
print ('The first ticker symbol is: {} \nThe first 20 elements of the associated ' +\
'series are:\n {}').format(ticker[0], ticker[1][:20])
dt_index = dtindex.uniform('2015-08-03', end='2015-09-22', freq=dtindex.HourFrequency(1, sc), sc=sc)
ticker_tsrdd = tsrdd.time_series_rdd_from_observations(dt_index, ticker_obs, 'timestamp', 'symbol',
'price') \
.remove_instants_with_nans()
ticker_tsrdd.cache()
first_series = ticker_tsrdd.first()
print_ticker_info(first_series)
"""
Explanation: Create datetime index.
Create time series RDD from observations and index. Remove time instants with NaNs.
Cache the tsrdd.
Examine the first element in the RDD.
Time series have values and a datetime index. We can create a tsrdd for hourly stock prices from an index and a Spark DataFrame of observations. ticker_tsrdd is an RDD of tuples where each tuple has the form (ticker symbol, stock prices) where ticker symbol is a string and stock prices is a 1D np.ndarray. We create a nicely formatted string representation of this pair in print_ticker_info(). Notice how we access the two elements of the tuple.
End of explanation
"""
wiki_tsrdd = tsrdd.time_series_rdd_from_observations(ticker_tsrdd.index(), wiki_obs, 'timestamp', 'page', 'views') \
.fill('linear')
wiki_tsrdd.cache()
"""
Explanation: Create a wiki page view tsrdd and set the index to match the index of ticker_tsrdd.
Linearly interpolate to impute missing values.
wiki_tsrdd is an RDD of tuples where each tuple has the form (page title, wiki views) where page title is a string and wiki views is a 1D np.ndarray. We have cached both RDDs because we will be doing many subsequent operations on them.
End of explanation
"""
def count_nans(vec):
return np.count_nonzero(np.isnan(vec))
ticker_min_nans = ticker_tsrdd.map(lambda x: count_nans(x[1])).min()
ticker_price_tsrdd = ticker_tsrdd.filter(lambda x: count_nans(x[1]) == ticker_min_nans) \
.remove_instants_with_nans()
ticker_return_tsrdd = ticker_price_tsrdd.return_rates()
print_ticker_info(ticker_return_tsrdd.first())
"""
Explanation: Filter out symbols with more than the minimum number of NaNs.
Then filter out instants with NaNs.
End of explanation
"""
# a dict from wiki page name to ticker symbol
page_symbols = {}
for line in open('../symbolnames.tsv').readlines():
tokens = line[:-1].split('\t')
page_symbols[tokens[1]] = tokens[0]
def get_page_symbol(page_series):
if page_series[0] in page_symbols:
return [(page_symbols[page_series[0]], page_series[1])]
else:
return []
# reverse keys and values. a dict from ticker symbol to wiki page name.
symbol_pages = dict(zip(page_symbols.values(), page_symbols.keys()))
print page_symbols.items()[0]
print symbol_pages.items()[0]
"""
Explanation: Linking symbols and pages
We need to join together the wiki page and ticker data, but the time series RDDs are not directly joinable on their keys. To overcome this, we have created a dict from wiki page title to stock ticker symbol.
Create a dict from page names to ticker symbols.
Create another from ticker symbols to page names.
End of explanation
"""
joined = wiki_tsrdd.flatMap(get_page_symbol).join(ticker_tsrdd)
joined.cache()
joined.count()
"""
Explanation: Join together wiki_tsrdd and ticker_tsrdd.
First, we use this dict to look up the corresponding stock ticker symbol and rekey the wiki page view time series. We then join the data sets together. The result is an RDD of tuples where each element is of the form (ticker_symbol, (wiki_series, ticker_series)). We count the number of elements in the resulting rdd to see how many matches we have.
End of explanation
"""
from scipy.stats.stats import pearsonr
def wiki_vol_corr(page_key):
# lookup individual time series by key.
ticker = ticker_tsrdd.find_series(page_symbols[page_key]) # numpy array
wiki = wiki_tsrdd.find_series(page_key) # numpy array
return pearsonr(ticker, wiki)
def corr_with_offset(page_key, offset):
"""offset is an integer that describes how many time intervals we have slid
the wiki series ahead of the ticker series."""
ticker = ticker_tsrdd.find_series(page_symbols[page_key]) # numpy array
wiki = wiki_tsrdd.find_series(page_key) # numpy array
return pearsonr(ticker[offset:], wiki[:-offset])
print wiki_vol_corr('Netflix')
print corr_with_offset('Netflix', 1)
"""
Explanation: Correlation and Relationships
Define a function for computing the Pearson r correlation of the stock price and wiki page traffic associated with a company.
Here we look up a specific stock and the corresponding wiki page, and provide an example of
computing the Pearson correlation locally. We use scipy.stats.stats.pearsonr to compute the Pearson correlation and the corresponding two-sided p value. wiki_vol_corr and corr_with_offset both return this as a tuple of (corr, p_value).
End of explanation
"""
def joint_plot(page_key, ticker, wiki, offset=0):
    with sns.axes_style("white"):
        sns.jointplot(x=ticker, y=wiki, kind="kde", color="b")
    plt.xlabel('Stock Price')
    plt.ylabel('Wikipedia Page Views')
    # Parenthesize the concatenation so .format() applies to the whole
    # string, not just the last literal.
    plt.title(('Joint distribution of {} stock price\n and Wikipedia page views.'
               '\nWith a {} day offset').format(page_key, offset), y=1.20)
def ticker_wiki_joint_plot(page_key, offset=0):
    if offset == 0:
        ticker = ticker_tsrdd.find_series(page_symbols[page_key]) # numpy array
        wiki = wiki_tsrdd.find_series(page_key)
    else:
        ticker = ticker_tsrdd.find_series(page_symbols[page_key])[offset:]
        wiki = wiki_tsrdd.find_series(page_key)[:-offset]
    # Pass the offset through so the plot title reports it correctly.
    joint_plot(page_key, ticker, wiki, offset=offset)
ticker_wiki_joint_plot('Apple_Inc.', offset=0)
"""
Explanation: Create a plot of the joint distribution of wiki traffic and stock prices for a specific company using seaborn's jointplot function.
End of explanation
"""
joined.mapValues(lambda wiki_ticker: pearsonr(wiki_ticker[0], wiki_ticker[1])) \
.sortBy(keyfunc=lambda x: x[1]) \
.collect()
"""
Explanation: Find the companies with the highest correlation between their stock price time series and Wikipedia page traffic.
Note that comparing a tuple means you compare the composite object lexicographically.
End of explanation
"""
top_correlations = joined.mapValues(lambda wiki_ticker: pearsonr(wiki_ticker[0], wiki_ticker[1])) \
.filter(lambda corr_tuple: not np.isnan(corr_tuple[1][0]))\
.sortBy(keyfunc=lambda x: np.abs(x[1][0])) \
.collect()
top_correlations
"""
Explanation: Add in filtering out less-than-useful correlation results.
There are a lot of invalid correlations that get computed, so let's filter those out.
End of explanation
"""
top_10_corr = top_correlations[-10:]
top_10_corr_tickers = dict(top_10_corr).keys()
top_10_corr_tickers
"""
Explanation: Find the top 10 correlations as defined by the ordering on tuples.
End of explanation
"""
pages = [symbol_pages[sym] for sym in top_10_corr_tickers]
ticker_wiki_joint_plot(pages[0], offset=1)
"""
Explanation: Create a joint plot of some of the stronger relationships.
End of explanation
"""
ticker_daily_vol = tsutil.sample_daily(ticker_tsrdd, 'std') \
.remove_instants_with_nans()
print ticker_daily_vol.first()
"""
Explanation: Volatility
Compute per-day volatility for each symbol.
End of explanation
"""
ticker_daily_vol.map(lambda x: count_nans(x[1])).top(1)
"""
Explanation: Make sure we don't have any NaNs.
End of explanation
"""
daily_vol_df = ticker_daily_vol.to_pandas_dataframe()
daily_vol_df.plot()
"""
Explanation: Visualize volatility
Plot daily volatility in stocks over time.
End of explanation
"""
sns.distplot(daily_vol_df.sum(axis=0), kde=False)
plt.xlabel('Volatility')
plt.ylabel('Fraction of observations')
ax = sns.heatmap(daily_vol_df)
plt.xlabel('Company')
plt.ylabel('Date')
"""
Explanation: What does the distribution of volatility for the whole market look like? Sum each stock's daily volatility over time and look at the distribution across companies.
End of explanation
"""
most_volatile_on_ave_stocks = tsutil.sample_daily(ticker_return_tsrdd, 'std') \
.remove_instants_with_nans() \
.mapValues(lambda x: (x, x.mean())) \
.top(10, lambda x: x[1][1])
[(stock_record[0], stock_record[1]) for stock_record in most_volatile_on_ave_stocks]
"""
Explanation: Find stocks with the highest average daily volatility.
End of explanation
"""
most_volatile_on_ave_stocks_set = set([x[0] for x in most_volatile_on_ave_stocks])
ticker_pandas_df = ticker_daily_vol.filter(lambda x: x[0] in most_volatile_on_ave_stocks_set) \
.to_pandas_dataframe()
ticker_pandas_df.plot()
"""
Explanation: Plot stocks with the highest average daily volatility over time.
End of explanation
"""
most_volatile_days = ticker_daily_vol.map(lambda x: (np.argmax(x[1]), 1)) \
.reduceByKey(lambda x, y: x + y) \
.top(10, lambda x: x[1])
def lookup_datetime(rdd, loc):
return rdd.index().datetime_at_loc(loc)
[(lookup_datetime(ticker_daily_vol, day), count ) for day, count in most_volatile_days]
"""
Explanation: We first map over ticker_daily_vol to find the index of the value with the highest volatility. We then relate that back to the index set on the RDD to find the corresponding datetime.
End of explanation
"""
from numpy import nansum
wiki_tsrdd_full = tsrdd.time_series_rdd_from_observations(dt_index, wiki_obs, 'timestamp',
'page', 'views')
wiki_daily_views = tsutil.sample_daily(wiki_tsrdd_full, nansum) \
.with_index(ticker_daily_vol.index())
wiki_daily_views.cache()
"""
Explanation: A large number of stock symbols had their most volatile days on August 24th and August 25th of
this year.
Regress volatility against page views
Resample the wiki page view data set so we have total pageviews by day.
Cache the wiki page view RDD.
This means reindexing the time series and aggregating data into daily buckets. We use np.nansum to add up numbers while treating NaNs like zero.
End of explanation
"""
wiki_daily_views.map(lambda x: count_nans(x[1])).top(1)
most_viewy_days = wiki_daily_views.map(lambda x: (np.argmax(x[1]), 1)) \
.reduceByKey(lambda x, y: x + y) \
.top(10, lambda x: x[1])
[(lookup_datetime(ticker_daily_vol, day), count ) for day, count in most_viewy_days]
"""
Explanation: Validate data by checking for nans.
End of explanation
"""
def regress(X, y):
model = linear_model.LinearRegression()
model.fit(X, y)
score = model.score(X, y)
return (score, model)
lag = 2
lead = 2
joined = wiki_daily_views.flatMap(get_page_symbol) \
    .join(ticker_daily_vol)
models = joined.mapValues(lambda x: regress(tsutil.lead_and_lag(lead, lag, x[0]), x[1][lag:-lead]))
models.cache()
models.count()
"""
Explanation: Fit a linear regression model to every pair in the joined wiki-ticker RDD and extract R^2 scores.
End of explanation
"""
models.map(lambda x: (x[1][0], x[0])).top(5)
"""
Explanation: Print out the symbols with the highest R^2 scores.
End of explanation
"""
page_views, returns = joined.filter(lambda x: x[0] == 'V').first()[1]
sns.set(color_codes=True)
sns.regplot(x=page_views, y=returns)
"""
Explanation: Plot the results of a linear model.
Plotting a linear model always helps me understand it better. Again, seaborn is super useful with smart defaults built in.
End of explanation
"""
def tukey_high_outlier_cutoff(sample):
sample = sorted(sample)
first_quartile = sample[len(sample) // 4]
third_quartile = sample[len(sample) * 3 // 4]
iqr = third_quartile - first_quartile
return third_quartile + iqr * 1.5
"""
Explanation: Box plot / Tukey outlier identification
Tukey originally proposed a method for identifying outliers in box-and-whisker plots. Essentially, we find the cutoff value for the 75th percentile, $P_{75} = \mathrm{percentile}(sample, 0.75)$, and add a reasonable buffer above it, expressed in terms of the interquartile range: $1.5\,IQR = 1.5(P_{75}-P_{25})$.
Write a function that returns the high-value cutoff for Tukey's boxplot outlier criterion.
End of explanation
"""
cutoff = tukey_high_outlier_cutoff(returns)
filter(lambda x: x > cutoff, returns)
cutoff = tukey_high_outlier_cutoff(page_views)
filter(lambda x: x > cutoff, page_views)
"""
Explanation: Filter for the values above Tukey's boxplot cutoff — i.e. select the high outliers.
End of explanation
"""
black_monday = ticker_price_tsrdd['2015-08-24 13:00:00':'2015-08-24 20:00:00']
"""
Explanation: Black Monday
Select the date range comprising Black Monday 2015.
End of explanation
"""
black_monday.mapValues(lambda x: (x[-1] / x[0]) - 1) \
.takeOrdered(10, lambda x: x[1])
"""
Explanation: Which stocks saw the worst return for that day?
End of explanation
"""
series = wiki_daily_views \
.to_pandas_series_rdd() \
.flatMap(get_page_symbol) \
.filter(lambda x: x[0] == 'HCA') \
.first()
series[1].plot()
"""
Explanation: Plot wiki page views for one of those stocks.
End of explanation
"""
|
mcocdawc/chemcoord | Tutorial/Transformation.ipynb | lgpl-3.0 | import chemcoord as cc
import time
import pandas as pd
water = cc.Cartesian.read_xyz('water_dimer.xyz', start_index=1)
zwater = water.get_zmat()
small = cc.Cartesian.read_xyz('MIL53_small.xyz', start_index=1)
"""
Explanation: Transformation between internal and cartesian coordinates
End of explanation
"""
zwater.loc[:, ['b', 'a', 'd']]
"""
Explanation: Naming convention
The table that defines the references used for each atom is called the
construction table.
The construction table of the Zmatrix of the water dimer is shown here:
End of explanation
"""
water = water - water.loc[5, ['x', 'y', 'z']]
zmolecule = water.get_zmat()
c_table = zmolecule.loc[:, ['b', 'a', 'd']]
c_table.loc[6, ['a', 'd']] = [2, 1]
zmolecule1 = water.get_zmat(construction_table=c_table)
zmolecule2 = zmolecule1.copy()
zmolecule3 = water.get_zmat(construction_table=c_table)
"""
Explanation: The absolute references are indicated by magic strings: ['origin', 'e_x', 'e_y', 'e_z'].
The atom which is to be set in the reference of three other atoms, is denoted $i$.
The bond-defining atom is represented by $b$.
The angle-defining atom is represented by $a$.
The dihedral-defining atom is represented by $d$.
Mathematical introduction
It is advantageous to treat a zmatrix simply as recursive spherical coordinates.
The $(n + 1)$-th atom uses three of the previous $n$ atoms as reference.
Those three atoms ($b, a, d$) are spanning a coordinate system, if we require righthandedness.
If we express the position of the atom $i$ in respect to this locally spanned coordinate system using
spherical coordinates, we arrive at the usual definition of a zmatrix.
PS: The question about right- or lefthandedness is equivalent to specifying a direction of rotation.
Chemcoord, of course, uses the IUPAC definition.
Ideal case
The ideal (and luckily most common) case is that $\vec{ib}$, $\vec{ba}$, and $\vec{ad}$ are linearly independent.
In this case there exists a bijective mapping between spherical and Cartesian coordinates, and all angles, positions, etc. are well defined.
Linear angle
One pathological case arises if $\vec{ib}$ and $\vec{ba}$ are linearly dependent.
This means that the angle in the Zmatrix is either $0^\circ$ or $180^\circ$.
In this case there are infinitely many dihedral angles for the same configuration in cartesian space.
Or to say it in a more analytical way:
The transformation from spherical coordinates to cartesian coordinates is surjective, but not injective.
For nearly all cases (e.g. expressing the potential hypersurface in terms of internal coordinates), the surjectivity property is sufficient.
A lot of other problematic cases can be automatically solved by assigning a default value to the dihedral angle by definition ($0^\circ$ in the case of chemcoord).
Usually the user does not need to think about this case, which is automatically handled by chemcoord.
Linear reference
The truly pathological case arises if the three reference atoms are collinear.
It is important to note that this is not a problem in the spherical coordinates of i;
rather, the coordinate system spanned by b, a, and d is itself undefined.
This means that it is not directly visible from the values in the Zmatrix whether i uses an invalid reference.
I will use the term valid Zmatrix if all atoms i have valid references. In this case the transformation to Cartesian coordinates is well defined.
Now there are two cases:
Creation of a valid Zmatrix
Chemcoord asserts that the Zmatrix created from Cartesian coordinates using get_zmat is a valid Zmatrix (or raises an explicit exception if it fails to find valid references). This is always done by choosing other references (instead of introducing dummy atoms).
Manipulation of a valid Zmatrix
If a valid Zmatrix is manipulated after creation, an assignment may move b, a, and d into a linear relationship. In this case a dummy atom is inserted which lies in the plane that was spanned by b, a, and d before the assignment. It uses the same references as the atom d, so changes in the references of b, a, and d are also reflected in the position of the dummy atom X.
This is done using the safe assignment methods of chemcoord.
Example
End of explanation
"""
angle_before_assignment = zmolecule1.loc[4, 'angle']
zmolecule1.safe_loc[4, 'angle'] = 180
zmolecule1.safe_loc[5, 'dihedral'] = 90
zmolecule1.safe_loc[4, 'angle'] = angle_before_assignment
xyz1 = zmolecule1.get_cartesian()
xyz1.view()
"""
Explanation: Modifications on zmolecule1
End of explanation
"""
with cc.DummyManipulation(False):
try:
zmolecule3.safe_loc[4, 'angle'] = 180
except cc.exceptions.InvalidReference as e:
e.already_built_cartesian.view()
"""
Explanation: Contextmanager
With the following contextmanager we can switch the automatic insertion of dummy atoms off and look at the Cartesian structure which is built after the assignment .safe_loc[4, 'angle'] = 180. It is obvious from the structure that the coordinate system spanned by O - H - O is undefined. This was the second pathological case in the mathematical introduction.
End of explanation
"""
import sympy
sympy.init_printing()
d = sympy.Symbol('d')
symb_water = zwater.copy()
symb_water.safe_loc[4, 'bond'] = d
symb_water
symb_water.subs(d, 2)
cc.xyz_functions.view([symb_water.subs(d, i).get_cartesian() for i in range(2, 5)])
# If your viewer cannot open molden files you have to uncomment the following lines
# for i in range(2, 5):
# symb_water.subs(d, i).get_cartesian().view()
# time.sleep(1)
"""
Explanation: Symbolic evaluation
It is possible to use symbolic expressions from sympy.
End of explanation
"""
pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]], columns=['b', 'a', 'd'])
"""
Explanation: Definition of the construction table
The construction table in chemcoord is represented by a pandas DataFrame with the columns ['b', 'a', 'd'] which can be constructed manually.
End of explanation
"""
water.get_zmat()
"""
Explanation: It is possible to specify only the first $i$ rows of a Zmatrix and have the rows $i + 1$ to $n$ computed automatically.
If the molecule consists of unconnected fragments, the construction tables are created independently for each fragment and connected afterwards.
It is important to note that an unfragmented, monolithic molecule is treated in the same way.
It just consists of one fragment.
This means that in several methods where a list of fragments is returned or taken,
a one-element list appears.
If the Zmatrix is automatically created, the oxygen 1 is the first atom.
Let's assume, that we want to change the order of fragments.
End of explanation
"""
fragments = water.fragmentate()
c_table = water.get_construction_table(fragment_list=[fragments[1], fragments[0]])
water.get_zmat(c_table)
"""
Explanation: Let's fragmentate the water
End of explanation
"""
frag_c_table = pd.DataFrame([[4, 6, 5], [1, 4, 6], [1, 2, 4]], columns=['b', 'a', 'd'], index=[1, 2, 3])
c_table2 = water.get_construction_table(fragment_list=[fragments[1], (fragments[0], frag_c_table)])
water.get_zmat(c_table2)
"""
Explanation: If we want to specify the order within the second fragment, so that it connects via the oxygen 1, it is important to note that we have to specify the full rows. It is not possible to define just the order without the references.
End of explanation
"""
|
NathanYee/ThinkBayes2 | code/chap05mine.ipynb | gpl-2.0 | % matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite, Beta
import thinkplot
"""
Explanation: Think Bayes: Chapter 5
This notebook presents code and exercises from Think Bayes, second edition.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
def Odds(p):
return p / (1-p)
"""
Explanation: Odds
The following function converts from probabilities to odds.
End of explanation
"""
def Probability(o):
return o / (o+1)
"""
Explanation: And this function converts from odds to probabilities.
End of explanation
"""
p = 0.2
Odds(p)
"""
Explanation: If 20% of bettors think my horse will win, that corresponds to odds of 1:4, or 0.25.
End of explanation
"""
o = 1/5
Probability(o)
"""
Explanation: If the odds in favor of my horse are 1:5 (that is, 5:1 against), that corresponds to a probability of 1/6.
End of explanation
"""
prior_odds = 1
likelihood_ratio = 0.75 / 0.5
post_odds = prior_odds * likelihood_ratio
post_odds
"""
Explanation: We can use the odds form of Bayes's theorem to solve the cookie problem:
End of explanation
"""
post_prob = Probability(post_odds)
post_prob
"""
Explanation: And then we can compute the posterior probability, if desired.
End of explanation
"""
likelihood_ratio = 0.25 / 0.5
post_odds *= likelihood_ratio
post_odds
"""
Explanation: If we draw another cookie and it's chocolate, we can do another update:
End of explanation
"""
post_prob = Probability(post_odds)
post_prob
"""
Explanation: And convert back to probability.
End of explanation
"""
like1 = 0.01
like2 = 2 * 0.6 * 0.01
likelihood_ratio = like1 / like2
likelihood_ratio
"""
Explanation: Oliver's blood
The likelihood ratio is also useful for talking about the strength of evidence without getting bogged down talking about priors.
As an example, we'll solve this problem from MacKay's *Information Theory, Inference, and Learning Algorithms*:
Two people have left traces of their own blood at the scene of a crime. A suspect, Oliver, is tested and found to have type 'O' blood. The blood groups of the two traces are found to be of type 'O' (a common type in the local population, having frequency 60%) and of type 'AB' (a rare type, with frequency 1%). Do these data [the traces found at the scene] give evidence in favor of the proposition that Oliver was one of the people [who left blood at the scene]?
If Oliver is
one of the people who left blood at the crime scene, then he
accounts for the 'O' sample, so the probability of the data
is just the probability that a random member of the population
has type 'AB' blood, which is 1%.
If Oliver did not leave blood at the scene, then we have two
samples to account for. If we choose two random people from
the population, what is the chance of finding one with type 'O'
and one with type 'AB'? Well, there are two ways it might happen:
the first person we choose might have type 'O' and the second
'AB', or the other way around. So the total probability is
$2 \times (0.6)(0.01) = 1.2\%$.
So the likelihood ratio is:
End of explanation
"""
post_odds = 1 * like1 / like2
Probability(post_odds)
"""
Explanation: Since the ratio is less than 1, it is evidence against the hypothesis that Oliver left blood at the scene.
But it is weak evidence. For example, if the prior odds were 1 (that is, 50% probability), the posterior odds would be 0.83, which corresponds to a probability of:
End of explanation
"""
post_odds2 = Odds(.9) * like1 / like2
Probability(post_odds2)
post_odds3 = Odds((.1)) * like1 / like2
Probability(post_odds3)
"""
Explanation: So this evidence doesn't "move the needle" very much.
Exercise: Suppose other evidence had made you 90% confident of Oliver's guilt. How much would this exculpatory evidence change your beliefs? What if you initially thought there was only a 10% chance of his guilt?
Notice that evidence with the same strength has a different effect on probability, depending on where you started.
End of explanation
"""
rhode = Beta(1, 2, label='Rhode')
rhode.Update((22, 11))
wei = Beta(1, 2, label='Wei')
wei.Update((21, 12))
"""
Explanation: Comparing distributions
Let's get back to the Kim Rhode problem from Chapter 4:
At the 2016 Summer Olympics in the Women's Skeet event, Kim Rhode faced Wei Meng in the bronze medal match. They each hit 15 of 25 targets, sending the match into sudden death. In the first round, both hit 1 of 2 targets. In the next two rounds, they each hit 2 targets. Finally, in the fourth round, Rhode hit 2 and Wei hit 1, so Rhode won the bronze medal, making her the first Summer Olympian to win an individual medal at six consecutive summer games.
But after all that shooting, what is the probability that Rhode is actually a better shooter than Wei? If the same match were held again, what is the probability that Rhode would win?
I'll start with a Beta(1, 2) prior for x, the probability of hitting a target, but we should check whether the results are sensitive to that choice.
First I create a Beta distribution for each of the competitors, and update it with the results.
End of explanation
"""
thinkplot.Pdf(rhode.MakePmf())
thinkplot.Pdf(wei.MakePmf())
thinkplot.Config(xlabel='x', ylabel='Probability')
"""
Explanation: Based on the data, the distribution for Rhode is slightly farther right than the distribution for Wei, but there is a lot of overlap.
End of explanation
"""
iters = 1000
count = 0
for _ in range(iters):
x1 = rhode.Random()
x2 = wei.Random()
if x1 > x2:
count += 1
count / iters
"""
Explanation: To compute the probability that Rhode actually has a higher value of p, there are two options:
Sampling: we could draw random samples from the posterior distributions and compare them.
Enumeration: we could enumerate all possible pairs of values and add up the "probability of superiority".
I'll start with sampling. The Beta object provides a method that draws a random value from a Beta distribution:
End of explanation
"""
rhode_sample = rhode.Sample(iters)
wei_sample = wei.Sample(iters)
np.mean(rhode_sample > wei_sample)
"""
Explanation: Beta also provides Sample, which returns a NumPy array, so we can perform the comparisons using array operations:
End of explanation
"""
def ProbGreater(pmf1, pmf2):
total = 0
for x1, prob1 in pmf1.Items():
for x2, prob2 in pmf2.Items():
if x1 > x2:
total += prob1 * prob2
return total
pmf1 = rhode.MakePmf(1001)
pmf2 = wei.MakePmf(1001)
ProbGreater(pmf1, pmf2)
pmf1.ProbGreater(pmf2)
pmf1.ProbLess(pmf2)
"""
Explanation: The other option is to make Pmf objects that approximate the Beta distributions, and enumerate pairs of values:
End of explanation
"""
import random
def flip(p):
return random.random() < p
"""
Explanation: Exercise: Run this analysis again with a different prior and see how much effect it has on the results.
Simulation
To make predictions about a rematch, we have two options again:
Sampling. For each simulated match, we draw a random value of x for each contestant, then simulate 25 shots and count hits.
Computing a mixture. If we knew x exactly, the distribution of hits, k, would be binomial. Since we don't know x, the distribution of k is a mixture of binomials with different values of x.
I'll do it by sampling first.
End of explanation
"""
iters = 1000
wins = 0
losses = 0
for _ in range(iters):
x1 = rhode.Random()
x2 = wei.Random()
count1 = count2 = 0
for _ in range(25):
if flip(x1):
count1 += 1
if flip(x2):
count2 += 1
if count1 > count2:
wins += 1
if count1 < count2:
losses += 1
wins/iters, losses/iters
"""
Explanation: flip returns True with probability p and False with probability 1-p
Now we can simulate 1000 rematches and count wins and losses.
End of explanation
"""
rhode_rematch = np.random.binomial(25, rhode_sample)
thinkplot.Hist(Pmf(rhode_rematch))
wei_rematch = np.random.binomial(25, wei_sample)
np.mean(rhode_rematch > wei_rematch)
np.mean(rhode_rematch < wei_rematch)
"""
Explanation: Or, realizing that the distribution of k is binomial, we can simplify the code using NumPy:
End of explanation
"""
from thinkbayes2 import MakeBinomialPmf
def MakeBinomialMix(pmf, label=''):
mix = Pmf(label=label)
for x, prob in pmf.Items():
binom = MakeBinomialPmf(n=25, p=x)
for k, p in binom.Items():
mix[k] += prob * p
return mix
rhode_rematch = MakeBinomialMix(rhode.MakePmf(), label='Rhode')
wei_rematch = MakeBinomialMix(wei.MakePmf(), label='Wei')
thinkplot.Pdf(rhode_rematch)
thinkplot.Pdf(wei_rematch)
thinkplot.Config(xlabel='hits')
rhode_rematch.ProbGreater(wei_rematch), rhode_rematch.ProbLess(wei_rematch)
"""
Explanation: Alternatively, we can make a mixture that represents the distribution of k, taking into account our uncertainty about x:
End of explanation
"""
from thinkbayes2 import MakeMixture
def MakeBinomialMix2(pmf):
binomials = Pmf()
for x, prob in pmf.Items():
binom = MakeBinomialPmf(n=25, p=x)
binomials[binom] = prob
return MakeMixture(binomials)
"""
Explanation: Alternatively, we could use MakeMixture:
End of explanation
"""
rhode_rematch = MakeBinomialMix2(rhode.MakePmf())
wei_rematch = MakeBinomialMix2(wei.MakePmf())
rhode_rematch.ProbGreater(wei_rematch), rhode_rematch.ProbLess(wei_rematch)
"""
Explanation: Here's how we use it.
End of explanation
"""
iters = 1000
pmf = Pmf()
for _ in range(iters):
k = rhode_rematch.Random() + wei_rematch.Random()
pmf[k] += 1
pmf.Normalize()
thinkplot.Hist(pmf)
"""
Explanation: Exercise: Run this analysis again with a different prior and see how much effect it has on the results.
Distributions of sums and differences
Suppose we want to know the total number of targets the two contestants will hit in a rematch. There are two ways we might compute the distribution of this sum:
Sampling: We can draw samples from the distributions and add them up.
Enumeration: We can enumerate all possible pairs of values.
I'll start with sampling:
End of explanation
"""
ks = rhode_rematch.Sample(iters) + wei_rematch.Sample(iters)
pmf = Pmf(ks)
thinkplot.Hist(pmf)
"""
Explanation: Or we could use Sample and NumPy:
End of explanation
"""
def AddPmfs(pmf1, pmf2):
pmf = Pmf()
for v1, p1 in pmf1.Items():
for v2, p2 in pmf2.Items():
pmf[v1 + v2] += p1 * p2
return pmf
"""
Explanation: Alternatively, we could compute the distribution of the sum by enumeration:
End of explanation
"""
pmf = AddPmfs(rhode_rematch, wei_rematch)
thinkplot.Pdf(pmf)
"""
Explanation: Here's how it's used:
End of explanation
"""
pmf = rhode_rematch + wei_rematch
thinkplot.Pdf(pmf)
"""
Explanation: The Pmf class provides a + operator that does the same thing.
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercise: The Pmf class also provides the - operator, which computes the distribution of the difference in values from two distributions. Use the distributions from the previous section to compute the distribution of the differential between Rhode and Wei in a rematch. On average, how many clays should we expect Rhode to win by? What is the probability that Rhode wins by 10 or more?
End of explanation
"""
iters = 1000
pmf = Pmf()
for _ in range(iters):
ks = rhode_rematch.Sample(6)
pmf[max(ks)] += 1
pmf.Normalize()
thinkplot.Hist(pmf)
"""
Explanation: Distribution of maximum
Suppose Kim Rhode continues to compete in six more Olympics. What should we expect her best result to be?
Once again, there are two ways we can compute the distribution of the maximum:
Sampling.
Analysis of the CDF.
Here's a simple version by sampling:
End of explanation
"""
iters = 1000
ks = rhode_rematch.Sample((6, iters))
ks
"""
Explanation: And here's a version using NumPy. I'll generate an array with 6 rows and iters (here 1000) columns:
End of explanation
"""
maxes = np.max(ks, axis=0)
maxes[:10]
"""
Explanation: Compute the maximum in each column:
End of explanation
"""
pmf = Pmf(maxes)
thinkplot.Hist(pmf)
"""
Explanation: And then plot the distribution of maximums:
End of explanation
"""
pmf = rhode_rematch.Max(6).MakePmf()
thinkplot.Hist(pmf)
"""
Explanation: Or we can figure it out analytically. If the maximum is less than or equal to some value k, all 6 random selections must be less than or equal to k, so:
$ CDF_{max}(k) = CDF(k)^6 $
Pmf provides a method that computes and returns this Cdf, so we can compute the distribution of the maximum like this:
End of explanation
"""
def Min(pmf, k):
cdf = pmf.MakeCdf()
cdf.ps = 1 - (1-cdf.ps)**k
return cdf
pmf = Min(rhode_rematch, 6).MakePmf()
thinkplot.Hist(pmf)
"""
Explanation: Exercise: Here's how Pmf.Max works:
def Max(self, k):
"""Computes the CDF of the maximum of k selections from this dist.
k: int
returns: new Cdf
"""
cdf = self.MakeCdf()
cdf.ps **= k
return cdf
Write a function that takes a Pmf and an integer k and returns a Pmf that represents the distribution of the minimum of k values drawn from the given Pmf. Use your function to compute the distribution of the minimum score Kim Rhode would be expected to shoot in six competitions.
End of explanation
"""
# Solution goes here
# Solution goes here
"""
Explanation: Exercises
Exercise: Suppose you are having a dinner party with 10 guests and 4 of them are allergic to cats. Because you have cats, you expect 50% of the allergic guests to sneeze during dinner. At the same time, you expect 10% of the non-allergic guests to sneeze. What is the distribution of the total number of guests who sneeze?
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercise This study from 2015 showed that many subjects diagnosed with non-celiac gluten sensitivity (NCGS) were not able to distinguish gluten flour from non-gluten flour in a blind challenge.
Here is a description of the study:
"We studied 35 non-CD subjects (31 females) that were on a gluten-free diet (GFD), in a double-blind challenge study. Participants were randomised to receive either gluten-containing flour or gluten-free flour for 10 days, followed by a 2-week washout period and were then crossed over. The main outcome measure was their ability to identify which flour contained gluten.
"The gluten-containing flour was correctly identified by 12 participants (34%)..."
Since 12 out of 35 participants were able to identify the gluten flour, the authors conclude "Double-blind gluten challenge induces symptom recurrence in just one-third of patients fulfilling the clinical diagnostic criteria for non-coeliac gluten sensitivity."
This conclusion seems odd to me, because if none of the patients were sensitive to gluten, we would expect some of them to identify the gluten flour by chance. So the results are consistent with the hypothesis that none of the subjects are actually gluten sensitive.
We can use a Bayesian approach to interpret the results more precisely. But first we have to make some modeling decisions.
Of the 35 subjects, 12 identified the gluten flour based on resumption of symptoms while they were eating it. Another 17 subjects wrongly identified the gluten-free flour based on their symptoms, and 6 subjects were unable to distinguish. So each subject gave one of three responses. To keep things simple I follow the authors of the study and lump together the second two groups; that is, I consider two groups: those who identified the gluten flour and those who did not.
I assume (1) people who are actually gluten sensitive have a 95% chance of correctly identifying gluten flour under the challenge conditions, and (2) subjects who are not gluten sensitive have only a 40% chance of identifying the gluten flour by chance (and a 60% chance of either choosing the other flour or failing to distinguish).
Using this model, estimate the number of study participants who are sensitive to gluten. What is the most likely number? What is the 95% credible interval?
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercise Coming soon: the space invaders problem.
End of explanation
"""
|
akchinSTC/systemml | projects/breast_cancer/MachineLearning.ipynb | apache-2.0 | %load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import matplotlib.pyplot as plt
import numpy as np
from pyspark.sql.functions import col, max
import systemml # pip3 install systemml
from systemml import MLContext, dml
plt.rcParams['figure.figsize'] = (10, 6)
ml = MLContext(sc)
"""
Explanation: Predicting Breast Cancer Proliferation Scores with Apache Spark and Apache SystemML
Machine Learning
Setup
End of explanation
"""
# Settings
size=256
grayscale = False
c = 1 if grayscale else 3
p = 0.01
folder = "data"
if p < 1:
tr_filename = os.path.join(folder, "train_{}_sample_{}{}.parquet".format(p, size, "_grayscale" if grayscale else ""))
val_filename = os.path.join(folder, "val_{}_sample_{}{}.parquet".format(p, size, "_grayscale" if grayscale else ""))
else:
tr_filename = os.path.join(folder, "train_{}{}.parquet".format(size, "_grayscale" if grayscale else ""))
val_filename = os.path.join(folder, "val_{}{}.parquet".format(size, "_grayscale" if grayscale else ""))
train_df = spark.read.load(tr_filename)
val_df = spark.read.load(val_filename)
train_df, val_df
tc = train_df.count()
vc = val_df.count()
tc, vc, tc + vc
train_df.select(max(col("__INDEX"))).show()
train_df.groupBy("tumor_score").count().show()
val_df.groupBy("tumor_score").count().show()
"""
Explanation: Read in train & val data
End of explanation
"""
# Note: Must use the row index column, or X may not
# necessarily correspond correctly to Y
X_df = train_df.select("__INDEX", "sample")
X_val_df = val_df.select("__INDEX", "sample")
y_df = train_df.select("__INDEX", "tumor_score")
y_val_df = val_df.select("__INDEX", "tumor_score")
X_df, X_val_df, y_df, y_val_df
"""
Explanation: Extract X and Y matrices
End of explanation
"""
script = """
# Scale images to [-1,1]
X = X / 255
X_val = X_val / 255
X = X * 2 - 1
X_val = X_val * 2 - 1
# One-hot encode the labels
num_tumor_classes = 3
n = nrow(y)
n_val = nrow(y_val)
Y = table(seq(1, n), y, n, num_tumor_classes)
Y_val = table(seq(1, n_val), y_val, n_val, num_tumor_classes)
"""
outputs = ("X", "X_val", "Y", "Y_val")
script = dml(script).input(X=X_df, X_val=X_val_df, y=y_df, y_val=y_val_df).output(*outputs)
X, X_val, Y, Y_val = ml.execute(script).get(*outputs)
X, X_val, Y, Y_val
"""
Explanation: Convert to SystemML Matrices
Note: This allows for reuse of the matrices on multiple
subsequent script invocations with only a single
conversion. Additionally, since the underlying RDDs
backing the SystemML matrices are maintained, any
caching will also be maintained.
End of explanation
"""
# script = """
# # Trigger conversions and caching
# # Note: This may take a while, but will enable faster iteration later
# print(sum(X))
# print(sum(Y))
# print(sum(X_val))
# print(sum(Y_val))
# """
# script = dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val)
# ml.execute(script)
"""
Explanation: Trigger Caching (Optional)
Note: This will take a while and is not necessary, but doing it
once will speed up the training below. Otherwise, the cost of
caching will be spread across the first full loop through the
data during training.
End of explanation
"""
# script = """
# write(X, "data/X_"+p+"_sample_binary", format="binary")
# write(Y, "data/Y_"+p+"_sample_binary", format="binary")
# write(X_val, "data/X_val_"+p+"_sample_binary", format="binary")
# write(Y_val, "data/Y_val_"+p+"_sample_binary", format="binary")
# """
# script = dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val, p=p)
# ml.execute(script)
"""
Explanation: Save Matrices (Optional)
End of explanation
"""
script = """
source("softmax_clf.dml") as clf
# Hyperparameters & Settings
lr = 1e-2 # learning rate
mu = 0.9 # momentum
decay = 0.999 # learning rate decay constant
batch_size = 32
epochs = 500
log_interval = 1
n = 200 # sample size for overfitting sanity check
# Train
[W, b] = clf::train(X[1:n,], Y[1:n,], X[1:n,], Y[1:n,], lr, mu, decay, batch_size, epochs, log_interval)
"""
outputs = ("W", "b")
script = dml(script).input(X=X, Y=Y, X_val=X_val, Y_val=Y_val).output(*outputs)
W, b = ml.execute(script).get(*outputs)
W, b
"""
Explanation: Softmax Classifier
Sanity Check: Overfit Small Portion
End of explanation
"""
script = """
source("softmax_clf.dml") as clf
# Hyperparameters & Settings
lr = 5e-7 # learning rate
mu = 0.5 # momentum
decay = 0.999 # learning rate decay constant
batch_size = 32
epochs = 1
log_interval = 10
# Train
[W, b] = clf::train(X, Y, X_val, Y_val, lr, mu, decay, batch_size, epochs, log_interval)
"""
outputs = ("W", "b")
script = dml(script).input(X=X, Y=Y, X_val=X_val, Y_val=Y_val).output(*outputs)
W, b = ml.execute(script).get(*outputs)
W, b
"""
Explanation: Train
End of explanation
"""
script = """
source("softmax_clf.dml") as clf
# Eval
probs = clf::predict(X, W, b)
[loss, accuracy] = clf::eval(probs, Y)
probs_val = clf::predict(X_val, W, b)
[loss_val, accuracy_val] = clf::eval(probs_val, Y_val)
"""
outputs = ("loss", "accuracy", "loss_val", "accuracy_val")
script = dml(script).input(X=X, Y=Y, X_val=X_val, Y_val=Y_val, W=W, b=b).output(*outputs)
loss, acc, loss_val, acc_val = ml.execute(script).get(*outputs)
loss, acc, loss_val, acc_val
"""
Explanation: Eval
End of explanation
"""
script = """
source("convnet.dml") as clf
# Hyperparameters & Settings
lr = 1e-2 # learning rate
mu = 0.9 # momentum
decay = 0.999 # learning rate decay constant
lambda = 0 #5e-04
batch_size = 32
epochs = 300
log_interval = 1
dir = "models/lenet-cnn/sanity/"
n = 200 # sample size for overfitting sanity check
# Train
[Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2] = clf::train(X[1:n,], Y[1:n,], X[1:n,], Y[1:n,], C, Hin, Win, lr, mu, decay, lambda, batch_size, epochs, log_interval, dir)
"""
outputs = ("Wc1", "bc1", "Wc2", "bc2", "Wc3", "bc3", "Wa1", "ba1", "Wa2", "ba2")
script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val,
C=c, Hin=size, Win=size)
.output(*outputs))
Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2 = ml.execute(script).get(*outputs)
Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2
"""
Explanation: LeNet-like ConvNet
Sanity Check: Overfit Small Portion
End of explanation
"""
script = """
source("convnet.dml") as clf
dir = "models/lenet-cnn/hyperparam-search/"
# TODO: Fix `parfor` so that it can be efficiently used for hyperparameter tuning
j = 1
while(j < 2) {
#parfor(j in 1:10000, par=6) {
# Hyperparameter Sampling & Settings
lr = 10 ^ as.scalar(rand(rows=1, cols=1, min=-7, max=-1)) # learning rate
mu = as.scalar(rand(rows=1, cols=1, min=0.5, max=0.9)) # momentum
decay = as.scalar(rand(rows=1, cols=1, min=0.9, max=1)) # learning rate decay constant
lambda = 10 ^ as.scalar(rand(rows=1, cols=1, min=-7, max=-1)) # regularization constant
batch_size = 32
epochs = 1
log_interval = 10
  trial_dir = dir + j + "/"
# Train
[Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2] = clf::train(X, Y, X_val, Y_val, C, Hin, Win, lr, mu, decay, lambda, batch_size, epochs, log_interval, trial_dir)
# Eval
#probs = clf::predict(X, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
#[loss, accuracy] = clf::eval(probs, Y)
probs_val = clf::predict(X_val, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
[loss_val, accuracy_val] = clf::eval(probs_val, Y_val)
# Save hyperparams
str = "lr: " + lr + ", mu: " + mu + ", decay: " + decay + ", lambda: " + lambda + ", batch_size: " + batch_size
name = dir + accuracy_val + "," + j #+","+accuracy+","+j
write(str, name)
j = j + 1
}
"""
script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val, C=c, Hin=size, Win=size))
ml.execute(script)
"""
Explanation: Hyperparameter Search
End of explanation
"""
script = """
source("convnet.dml") as clf
# Hyperparameters & Settings
lr = 0.00205 # learning rate
mu = 0.632 # momentum
decay = 0.99 # learning rate decay constant
lambda = 0.00385
batch_size = 32
epochs = 1
log_interval = 10
dir = "models/lenet-cnn/train/"
# Train
[Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2] =
clf::train(X, Y, X_val, Y_val, C, Hin, Win, lr, mu, decay,
lambda, batch_size, epochs, log_interval, dir)
"""
outputs = ("Wc1", "bc1", "Wc2", "bc2", "Wc3", "bc3",
"Wa1", "ba1", "Wa2", "ba2")
script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val,
C=c, Hin=size, Win=size)
.output(*outputs))
outs = ml.execute(script).get(*outputs)
Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2 = outs
"""
Explanation: Train
End of explanation
"""
script = """
source("convnet.dml") as clf
# Eval
probs = clf::predict(X, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
[loss, accuracy] = clf::eval(probs, Y)
probs_val = clf::predict(X_val, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
[loss_val, accuracy_val] = clf::eval(probs_val, Y_val)
"""
outputs = ("loss", "accuracy", "loss_val", "accuracy_val")
script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val,
C=c, Hin=size, Win=size,
Wc1=Wc1, bc1=bc1,
Wc2=Wc2, bc2=bc2,
Wc3=Wc3, bc3=bc3,
Wa1=Wa1, ba1=ba1,
Wa2=Wa2, ba2=ba2)
.output(*outputs))
loss, acc, loss_val, acc_val = ml.execute(script).get(*outputs)
loss, acc, loss_val, acc_val
"""
Explanation: Eval
End of explanation
"""
|
mahieke/maschinelles_lernen | a1/excercise1.ipynb | mit | import pandas as pd
import numpy as np
%matplotlib inline
"""
Explanation: Exercise 1: Exploratory analysis and preprocessing
End of explanation
"""
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data'
cols =["CRIM","ZN","INDUS","CHAS","NOX","RM","AGE","DIS","RAD","TAX","PTRATIO","B","LSTAT","TGT"]
boston = pd.read_csv(url, sep=" ", skipinitialspace=True, header=None, names=cols, index_col=False) # Dataframe
dateDownloaded = !date
dateDownloaded
"""
Explanation: a) Exploratory analysis of the "Boston Housing" dataset
End of explanation
"""
shape = np.shape(boston)
print('Rows: {} and columns: {}'.format(shape[0],shape[1]))
"""
Explanation: Information about the Boston Housing data
End of explanation
"""
boston.isnull().any()
"""
Explanation: Variables explained:
1. CRIM: per capita crime rate by town
2. ZN: proportion of residential land zoned for lots over 25,000 sq.ft.
3. INDUS: proportion of non-retail business acres per town
4. CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
5. NOX: nitric oxides concentration (parts per 10 million)
6. RM: average number of rooms per dwelling
7. AGE: proportion of owner-occupied units built prior to 1940
8. DIS: weighted distances to five Boston employment centres
9. RAD: index of accessibility to radial highways
10. TAX: full-value property-tax rate per $10,000
11. PTRATIO: pupil-teacher ratio by town
12. B: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
13. LSTAT: % lower status of the population
14. TGT (MEDV): Median value of owner-occupied homes in \$1000's
Check whether the dataset contains any NULL records:
End of explanation
"""
boston.duplicated().any()
"""
Explanation: Check whether any rows are duplicated
End of explanation
"""
boston.describe()
boston.head(20)
"""
Explanation: count: number of observations
mean: mean value
std: standard deviation
min: minimum
25%: 25th percentile
50%: 50th percentile (median)
75%: 75th percentile
max: maximum
End of explanation
"""
boston.dtypes
"""
Explanation: Question 1: Which of the variables are categorical?
End of explanation
"""
boston["CHAS"].plot(kind='hist', bins=50)
"""
Explanation: Judging from the data types, the variables "CHAS" and "RAD" appear to be categorical. A histogram of each of these two variables can verify this assumption.
End of explanation
"""
tax = boston["TAX"]
tax_bins = np.unique(tax).shape[0]
as_int = all(a.is_integer() for a in tax)  # True only if every TAX value is a whole number
print("The TAX variable has __{}__ distinct values".format(tax_bins))
tax.hist(bins=tax_bins)
"""
Explanation: boston["RAD"].plot(kind='hist', bins=50)
Die Variablen "CHAS" und "RAD" sind auf jeden Fall kategorisch! Evtl. ist die "TAX" Variable auch kategorisch, da sie trotz des Datentyps float, keine nachkommastellen hat (zumindest scheint die so in der Stichprobe so).
End of explanation
"""
scatter = pd.scatter_matrix(boston, figsize=(14,14), marker='o', diagonal='kde');
boston.corr()
%load_ext version_information
%version_information numpy,pandas
"""
Explanation: The TAX variable is categorical, since it takes "only" 66 distinct values.
Question 2: Which variables are well suited to predicting the house price, and why?
End of explanation
"""
|
kunalj101/scipy2015-blaze-bokeh | 1.2 Plotting - Timeseries.ipynb | mit | import pandas as pd
from bokeh.plotting import figure, show, output_notebook
# Get data
# Process data
# Output option
# Create your plot
# Show plot
"""
Explanation: <img src=images/continuum_analytics_b&w.png align="left" width="15%" style="margin-right:15%">
<h1 align='center'>Bokeh tutorial</h1>
1.2 Plotting - Timeseries
Exercise: Reproduce the timeseries chart using the plotting interface
Data: 'data/Land_Ocean_Monthly_Anomaly_Average.csv'
Note: Don't worry about styling right now. We'll go over plot properties in the next exercise
End of explanation
"""
from bokeh.models import DatetimeTickFormatter
import math
# Axis type, width and height
# Line colors and legend
# Axis format (e.g tick format and orientation)
# Axis labels
# Legend position
# Grid style
# Remove toolbar
# Show plot
"""
Explanation: Exercise: Style the plot appropriately
Ideas:
Axis type and format
Line colors
Legend
Grid lines
Axis labels
Width and height
Remove toolbar
Tips:
bokeh.models.DatetimeTickFormatter()
Note: You can create a new figure to see observe the differences.
End of explanation
"""
from bokeh.models import ColumnDataSource, HoverTool
from collections import OrderedDict
# List all the tools that you want in your plot separated by commas, all in one string.
# Add the tools to your figure
# The hover tool doesn't render datetime appropriately. We'll need a string.
# To reference variables in the hover box, we'll need to use bokeh.ColumnDataSource instead of a pd.DataFrame
# Change plotting.line to get values from ColumnDataSource
# Set hover tool
# Copy your style from the previous exercise
# Show plot
"""
Explanation: Exercise: Add a crosshair and hover tool to the plot
End of explanation
"""
|
herberthamaral/curso-python-sec2015 | Mini curso Python - SEC 2015.ipynb | mit | print "Olá Mundo!" #Este é o nosso hello world e este é um comentário :)
"""
Explanation: Python Mini-course - SEC 2015
Author: Herberth Amaral - herberthamaral@gmail.com
Setting up the environment for this mini-course
$ sudo apt-get install build-essential python-dev libblas-dev liblapack-dev gfortran python-pip git libzmq3 libzmq3-dev
$ sudo pip install pip --upgrade
$ sudo pip install virtualenvwrapper
Add the following lines to your ~/.bashrc (or ~/.bash_profile)
export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=$HOME/Devel
source /usr/local/bin/virtualenvwrapper.sh
Clone this course's repository:
$ git clone https://github.com/herberthamaral/curso-python-sec2015.git
$ mkvirtualenv -a $(pwd)/curso-python-sec2015 curso-python-sec2015
$ pip install -r requirements.txt
$ pip install "ipython[all]"
Now sit back and wait, because this will take a while :)
Some characteristics of the language
For those who don't know it, Python is a language that is:
Interpreted;
Strongly and dynamically typed;
Very high level;
General purpose;
Object-oriented, procedural, and functional;
With automatic memory management;
End of explanation
"""
%pylab inline
"""
Explanation: A quick demonstration of useful tools for engineers
End of explanation
"""
import numpy as np
x = np.linspace(0, 5)
plot(x, cos(x))
"""
Explanation: An example with Matplotlib, the main tool for plotting charts in Python
End of explanation
"""
from sklearn.linear_model import Perceptron
perceptron = Perceptron()
perceptron.fit([[0,0], [0,1], [1,0], [1,1]], [0,1,1,1]) # porta OR
perceptron.predict([0,1])
"""
Explanation: A demonstration of scikit-learn, a machine learning library
End of explanation
"""
import numpy as np
matriz = np.matrix([[1,2,3,4], [5,6,7,8]])
matriz
matriz.T # transpose
matriz*matriz.T
"""
Explanation: A demonstration of NumPy, the cornerstone of linear algebra in Python
End of explanation
"""
import sympy as sp
sp.sqrt(8)
x = sp.symbols('x')
sp.diff(sp.sin(x)*sp.exp(x),x)
sp.integrate(sp.sin(x), x)
"""
Explanation: SymPy - symbolic mathematics (yes, it solves integrals)
End of explanation
"""
import sqlite3
conn = sqlite3.connect('exemplo.db')
c = conn.cursor()
c.execute("CREATE TABLE IF NOT EXISTS pessoas (id integer primary key, nome text, telefone text, data_nascimento date)")
c.execute("""INSERT INTO pessoas (nome, telefone, data_nascimento)
VALUES ('Herberth Amaral','32222222', '1988-11-05')""")
conn.commit()
c.execute('SELECT * from pessoas')
c.fetchone()
"""
Explanation: SQLite database
End of explanation
"""
|
ML4DS/ML4all | U_lab1.Clustering/Lab_ShapeSegmentation_draft/LabSessionClustering_student.ipynb | mit | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.misc import imread
"""
Explanation: Lab Session: Clustering algorithms for Image Segmentation
Author: Jesús Cid Sueiro
Jan. 2017
End of explanation
"""
name = "birds.jpg"
name = "Seeds.jpg"
birds = imread("Images/" + name)
birdsG = np.sum(birds, axis=2)
# <SOL>
# </SOL>
"""
Explanation: 1. Introduction
In this notebook we explore an application of clustering algorithms to shape segmentation from binary images. We will carry out some exploratory work with a small set of images provided with this notebook. Most of them are not binary images, so we must do some preliminary work to extract the binary shape images and apply the clustering algorithms to them. We will have the opportunity to test the differences between $k$-means and spectral clustering in this problem.
1.1. Load Image
Several images are provided with this notebook:
BinarySeeds.png
birds.jpg
blood_frog_1.jpg
cKyDP.jpg
Matricula.jpg
Matricula2.jpg
Seeds.png
Select and visualize image birds.jpg from file and plot it in grayscale
End of explanation
"""
# <SOL>
# </SOL>
"""
Explanation: 2. Thresholding
Select an intensity threshold by manual inspection of the image histogram
End of explanation
"""
# <SOL>
# </SOL>
"""
Explanation: Plot the binary image after thresholding.
End of explanation
"""
# <SOL>
# </SOL>
print X
plt.scatter(X[:, 0], X[:, 1], s=5);
plt.axis('equal')
plt.show()
"""
Explanation: 3. Dataset generation
Extract pixel coordinates dataset from image and plot them in a scatter plot.
End of explanation
"""
from sklearn.cluster import KMeans
# <SOL>
# </SOL>
"""
Explanation: 4. k-means clustering algorithm
Use the pixel coordinates as the input data for a k-means algorithm. Plot the result of the clustering by means of a scatter plot, showing each cluster with a different colour.
End of explanation
"""
from sklearn.metrics.pairwise import rbf_kernel
# <SOL>
# </SOL>
# Visualization
# <SOL>
# </SOL>
"""
Explanation: 5. Spectral clustering algorithm
5.1. Affinity matrix
Compute and visualize the affinity matrix for the given dataset, using a rbf kernel with $\gamma=5$.
End of explanation
"""
# <SOL>
# </SOL>
plt.scatter(Xsub[:,0], Xsub[:,1], c=y_kmeans, s=5, cmap='rainbow', linewidth=0.0)
plt.axis('equal')
plt.show()
"""
Explanation: 5.2. Spectral clustering
Apply the spectral clustering algorithm, and show the clustering results using a scatter plot.
End of explanation
"""
|
JarronL/pynrc | notebooks/Exoplanet_Filter_Selection.ipynb | mit | def get_planet_counts(bp, age, masses=[0.1,1,10], dist=10, model='linder', file=None,
return_mags=False):
if 'linder' in model.lower():
tbl = nrc_utils.linder_table(file)
res = nrc_utils.linder_filter(tbl, bp.name, age, dist=dist,
cond_interp=True, cond_file=None)
elif 'cond' in model.lower():
tbl = nrc_utils.cond_table(age, file)
res = nrc_utils.cond_filter(tbl, bp.name, dist=dist)
else:
raise ValueError("model name {} not recognized".format(model))
# Arrays of masses and magnitudes
mass_vals, mag_vals = res
# Sort by mass
isort = np.argsort(mass_vals)
mass_data = mass_vals[isort]
mag_data = mag_vals[isort]
# Interpolate masses in log space to get a linear function
xv, yv = np.log10(mass_data[isort]), mag_data[isort]
xint = np.log10(masses)
mags = np.interp(xint, xv, yv)
if return_mags:
return mags
# Convert mags into counts for this bandpass
count_arr = []
sp = pynrc.stellar_spectrum('flat')
for i, mass in enumerate(masses):
# Renormalize to magnitude of each planet and get count rate
sp_norm = sp.renorm(mags[i], 'vegamag', bp)
obs = S.Observation(sp_norm, bp, binset=bp.wave)
count_arr.append(obs.countrate())
return np.array(count_arr)
"""
Explanation: Using the BEX models from Linder et al (2019) as well as the AMES-COND grid (Allard et al. 2001), determine the fluxes in a given filter with respect to exoplanet mass and age. Also, show contrasts relative to some primary host star.
This notebook generates plots showing the total collected flux (counts/sec) and contrasts (delta mags) for each filter across a variety of masses and ages. Contrasts are relative to the flux from a G2V star in the same band.
End of explanation
"""
fig, axes_all = plt.subplots(3,3,figsize=(14,9),sharex=True)
ages = [1.5, 2.5, 5] # Myr
mass_min, mass_max = (0.1, 10) # MJup
masses = np.logspace(np.log10(mass_min),np.log10(mass_max),num=30)
filters = ['F410M', 'F430M', 'F460M', 'F480M', 'F444W']#, 'F200W']
# Normalize a G2V star to it's K-Band absolute magnitude
bp_k = pynrc.bp_2mass('k')
kmag = 8.6
spt = 'G2V'
sp_star = pynrc.stellar_spectrum(spt, kmag, 'vegamag', bp_k)
# spt = 'K5V'
# sp_star = pynrc.stellar_spectrum(spt, 3.2, 'vegamag', bp_k)
model = 'linder'
for i, age in enumerate(ages):
axes = axes_all[i]
for filt in filters:
bp = nrc_utils.read_filter(filt, module='B')
res = get_planet_counts(bp, age, masses, return_mags=False, model=model, dist=158)
axes[0].loglog(masses, res, marker='.', label=filt)
res = get_planet_counts(bp, age, masses, return_mags=True, model=model, dist=158)
axes[1].semilogx(masses, res, marker='.', label=filt)
obs = S.Observation(sp_star, bp, bp.wave)
# print(obs.effstim('vegamag'))
axes[2].semilogx(masses, res - obs.effstim('vegamag'), marker='.', label=filt)
axes[0].set_ylabel("Flux (counts/sec)")
axes[1].set_ylabel("Flux (mag)")
axes[2].set_ylabel("Contrast (mag)")
if i==0:
axes[0].set_title(f"Companion Flux (counts)")
axes[1].set_title(f"Companion Flux (mags)")
axes[2].set_title(f"Contrast Relative to {spt} Primary")
for j, ax in enumerate(axes):
if j>0:
ax.set_ylim(ax.get_ylim()[::-1])
ax.legend(title=f'Age = {age} Myr', loc='lower right')
if i==2:
ax.set_xlabel('Companion Mass ($M_{Jup}$)')
mod_str = 'BEX' if model=='linder' else 'COND'
fig.suptitle(f'{mod_str} Models', fontsize=15)
fig.tight_layout()
"""
Explanation: BEX Models for Medium Filters
End of explanation
"""
fig, axes_all = plt.subplots(3,3,figsize=(14,9),sharex=True)
ages = [1, 10, 100] # Myr
mass_min, mass_max = (0.5, 10) # MJup
masses = np.logspace(np.log10(mass_min),np.log10(mass_max),num=30)
filters = ['F410M', 'F430M', 'F460M', 'F480M', 'F444W']
# Normalize a G2V star to it's K-Band absolute magnitude
bp_k = pynrc.bp_2mass('k')
spt = 'G2V'
sp_star = pynrc.stellar_spectrum(spt, 3.2, 'vegamag', bp_k)
model = 'cond'
for i, age in enumerate(ages):
axes = axes_all[i]
for filt in filters:
bp = nrc_utils.read_filter(filt, module='B')
res = get_planet_counts(bp, age, masses, return_mags=False, model=model)
axes[0].loglog(masses, res, marker='.', label=filt)
res = get_planet_counts(bp, age, masses, return_mags=True, model=model)
axes[1].semilogx(masses, res, marker='.', label=filt)
obs = S.Observation(sp_star, bp, bp.wave)
axes[2].semilogx(masses, res - obs.effstim('vegamag'), marker='.', label=filt)
axes[0].set_ylabel("Flux (counts/sec)")
axes[1].set_ylabel("Flux (mag)")
axes[2].set_ylabel("Contrast (mag)")
if i==0:
axes[0].set_title(f"Companion Flux (counts)")
axes[1].set_title(f"Companion Flux (mags)")
axes[2].set_title(f"Contrast Relative to {spt} Primary")
for j, ax in enumerate(axes):
if j>0:
ax.set_ylim(ax.get_ylim()[::-1])
ax.legend(title=f'Age = {age} Myr', loc='lower right')
if i==2:
ax.set_xlabel('Companion Mass ($M_{Jup}$)')
mod_str = 'BEX' if model=='linder' else 'COND'
fig.suptitle(f'{mod_str} Models', fontsize=15)
fig.tight_layout()
"""
Explanation: Conclusion: F460M and F480M provide the best contrast for medium band filters (comparable to F444W) across all masses and ages considered (1-5 Myr, 0.1-10 MJup).
COND Models for Medium Band Filters
End of explanation
"""
fig, axes_all = plt.subplots(3,3,figsize=(14,9),sharex=True)
ages = [1, 10, 100] # Myr
mass_min, mass_max = (0.1, 10) # MJup
masses = np.logspace(np.log10(mass_min),np.log10(mass_max),num=30)
filters = ['F277W', 'F356W', 'F444W']
# Normalize a G2V star to it's K-Band absolute magnitude
bp_k = pynrc.bp_2mass('k')
spt = 'G2V'
sp_star = pynrc.stellar_spectrum(spt, 3.2, 'vegamag', bp_k)
model = 'linder'
for i, age in enumerate(ages):
axes = axes_all[i]
for filt in filters:
bp = nrc_utils.read_filter(filt, module='B')
res = get_planet_counts(bp, age, masses, return_mags=False, model=model)
axes[0].loglog(masses, res, marker='.', label=filt)
res = get_planet_counts(bp, age, masses, return_mags=True, model=model)
axes[1].semilogx(masses, res, marker='.', label=filt)
obs = S.Observation(sp_star, bp, bp.wave)
axes[2].semilogx(masses, res - obs.effstim('vegamag'), marker='.', label=filt)
axes[0].set_ylabel("Flux (counts/sec)")
axes[1].set_ylabel("Flux (mag)")
axes[2].set_ylabel("Contrast (mag)")
if i==0:
axes[0].set_title(f"Companion Flux (counts)")
axes[1].set_title(f"Companion Flux (mags)")
axes[2].set_title(f"Contrast Relative to {spt} Primary")
for j, ax in enumerate(axes):
if j>0:
ax.set_ylim(ax.get_ylim()[::-1])
ax.legend(title=f'Age = {age} Myr', loc='lower right')
if i==2:
ax.set_xlabel('Companion Mass ($M_{Jup}$)')
mod_str = 'BEX' if model=='linder' else 'COND'
fig.suptitle(f'{mod_str} Models', fontsize=15)
fig.tight_layout()
"""
Explanation: Conclusion: F460M and F480M provide the best contrast for medium band filters across all masses and ages considered (1-100 Myr, 0.5-10 MJup).
Linder Models for Wide Band Filters
End of explanation
"""
|
bdestombe/flopy-1 | examples/Notebooks/flopy3_WatertableRecharge_example.ipynb | bsd-3-clause | %matplotlib inline
from __future__ import print_function
import os
import sys
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mfnwt'
if platform.system() == 'Windows':
exe_name = 'MODFLOW-NWT.exe'
mfexe = exe_name
modelpth = os.path.join('data')
modelname = 'watertable'
#make sure modelpth directory exists
if not os.path.exists(modelpth):
os.makedirs(modelpth)
"""
Explanation: FloPy
Simple water-table solution with recharge
This problem is an unconfined system with a uniform recharge rate, a horizontal bottom, and flow between constant-head boundaries in column 1 and 100. MODFLOW models cannot match the analytical solution exactly because they do not allow recharge to constant-head cells. Constant-head cells in column 1 and 100 were made very thin (0.1 m) in the direction of flow to minimize the effect of recharge applied to them. The analytical solution for this problem can be written as:
$h = \sqrt{b_{1}^{2} - \frac{x}{L} (b_{1}^{2} - b_{2}^{2}) + (\frac{R x}{K}(L-x))} + z_{bottom}$
where $R$ is the recharge rate, $K$ is the hydraulic conductivity in the horizontal direction, $b_1$ is the specified saturated thickness at the left boundary, $b_2$ is the specified saturated thickness at the right boundary, $x$ is the distance from the left boundary, $L$ is the length of the model domain, and $z_{bottom}$ is the elevation of the bottom of the aquifer.
The model consists of a grid of 100 columns, 1 row, and 1 layer; a bottom altitude of 0 m; constant heads of 20 and 11 m in columns 1 and 100, respectively; a recharge rate of 0.001 m/d; and a horizontal hydraulic conductivity of 50 m/d. The discretization is 0.1 m in the row direction for the constant-head cells (columns 1 and 100) and 50 m for all other cells.
End of explanation
"""
def analyticalWaterTableSolution(h1, h2, z, R, K, L, x):
h = np.zeros((x.shape[0]), np.float)
#dx = x[1] - x[0]
#x -= dx
b1 = h1 - z
b2 = h2 - z
h = np.sqrt(b1**2 - (x/L)*(b1**2 - b2**2) + (R * x / K) * (L - x)) + z
return h
"""
Explanation: Function to calculate the analytical solution at specified points in an aquifer
End of explanation
"""
# model dimensions
nlay, nrow, ncol = 1, 1, 100
# cell spacing
delr = 50.
delc = 1.
# domain length
L = 5000.
# boundary heads
h1 = 20.
h2 = 11.
# ibound
ibound = np.ones((nlay, nrow, ncol), dtype=int)
# starting heads
strt = np.zeros((nlay, nrow, ncol), dtype=float)
strt[0, 0, 0] = h1
strt[0, 0, -1] = h2
# top of the aquifer
top = 25.
# bottom of the aquifer
botm = 0.
# hydraulic conductivity
hk = 50.
# location of cell centroids
x = np.arange(0.0, L, delr) + (delr / 2.)
# location of cell edges
xa = np.arange(0, L+delr, delr)
# recharge rate
rchrate = 0.001
# calculate the head at the cell centroids using the analytical solution function
hac = analyticalWaterTableSolution(h1, h2, botm, rchrate, hk, L, x)
# calculate the head at the cell edges using the analytical solution function
ha = analyticalWaterTableSolution(h1, h2, botm, rchrate, hk, L, xa)
# ghbs
# ghb conductance
b1, b2 = 0.5*(h1+hac[0]), 0.5*(h2+hac[-1])
c1, c2 = hk*b1*delc/(0.5*delr), hk*b2*delc/(0.5*delr)
# dtype
ghb_dtype = flopy.modflow.ModflowGhb.get_default_dtype()
print(ghb_dtype)
# build ghb recarray
stress_period_data = np.zeros((2), dtype=ghb_dtype)
stress_period_data = stress_period_data.view(np.recarray)
print('stress_period_data: ', stress_period_data)
print('type is: ', type(stress_period_data))
# fill ghb recarray
stress_period_data[0] = (0, 0, 0, h1, c1)
stress_period_data[1] = (0, 0, ncol-1, h2, c2)
"""
Explanation: Model data required to create the model files and calculate the analytical solution
End of explanation
"""
mf = flopy.modflow.Modflow(modelname=modelname, exe_name=mfexe, model_ws=modelpth, version='mfnwt')
dis = flopy.modflow.ModflowDis(mf, nlay, nrow, ncol,
delr=delr, delc=delc,
top=top, botm=botm,
perlen=1, nstp=1, steady=True)
bas = flopy.modflow.ModflowBas(mf, ibound=ibound, strt=strt)
upw = flopy.modflow.ModflowUpw(mf, hk=hk, laytyp=1)
ghb = flopy.modflow.ModflowGhb(mf, stress_period_data=stress_period_data)
rch = flopy.modflow.ModflowRch(mf, rech=rchrate, nrchop=1)
oc = flopy.modflow.ModflowOc(mf)
nwt = flopy.modflow.ModflowNwt(mf, linmeth=2, iprnwt=1, options='COMPLEX')
mf.write_input()
# remove existing heads results, if necessary
try:
    os.remove(os.path.join(modelpth, '{0}.hds'.format(modelname)))
except:
pass
# run existing model
mf.run_model()
"""
Explanation: Create a flopy object to create and run the MODFLOW-NWT datasets for this problem
End of explanation
"""
# Create the headfile object
headfile = os.path.join(modelpth, '{0}.hds'.format(modelname))
headobj = flopy.utils.HeadFile(headfile, precision='single')
times = headobj.get_times()
head = headobj.get_data(totim=times[-1])
"""
Explanation: Read the simulated MODFLOW-NWT model results
End of explanation
"""
fig = plt.figure(figsize=(16,6))
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=0.25, hspace=0.25)
ax = fig.add_subplot(1, 3, 1)
ax.plot(xa, ha, linewidth=8, color='0.5', label='analytical solution')
ax.plot(x, head[0, 0, :], color='red', label='MODFLOW-NWT')
leg = ax.legend(loc='lower left')
leg.draw_frame(False)
ax.set_xlabel('Horizontal distance, in m')
ax.set_ylabel('Head, in m')
ax = fig.add_subplot(1, 3, 2)
ax.plot(x, head[0, 0, :] - hac, linewidth=1, color='blue')
ax.set_xlabel('Horizontal distance, in m')
ax.set_ylabel('Error, in m')
ax = fig.add_subplot(1, 3, 3)
ax.plot(x, 100.*(head[0, 0, :] - hac)/hac, linewidth=1, color='blue')
ax.set_xlabel('Horizontal distance, in m')
ax.set_ylabel('Percent Error');
"""
Explanation: Plot the MODFLOW-NWT results and compare to the analytical solution
End of explanation
"""
|
jhillairet/scikit-rf | doc/source/examples/metrology/SOLT Calibration Standards Creation.ipynb | bsd-3-clause | import skrf
from skrf.media import DefinedGammaZ0
import numpy as np
freq = skrf.Frequency(1, 9000, 1001, "MHz")
ideal_medium = DefinedGammaZ0(frequency=freq, z0=50)
"""
Explanation: SOLT Calibration Standards Creation
Introduction
In scikit-rf, a calibration standard is treated just as a regular one-port or
two-port skrf.Network, defined by its full S-parameters. It can represent
reflection, transmission, load, and even arbitrary-impedance standards. Since
no additional errors are introduced in circuit modeling and fitting, this
approach allows the highest calibration accuracy, and is known as a
data-based standard in the terminology of VNA vendors.
However, traditionally, VNA calibration standards are defined using a circuit
model with fitted coefficients. Since many such calibration kits are still
being used and manufactured, especially in low-frequency applications, this
necessitates the creation of their network models before they can be used in
scikit-rf's calibration routines.
This example explains network creation from coefficients given in both the
HP-Agilent-Keysight format and the Rohde & Schwarz / Anritsu format. Both
are essentially the same circuit model, but the latter format uses different
units of measurement for an offset transmission line.
<div class="alert alert-block alert-warning">
Only coaxial standards are covered by this guide. The calculation is different
for waveguides. In particular, the scaling by $\sqrt{\text{GHz}}$ for coaxial
lines cannot be applied to waveguides because loss is also a function of their
physical dimensions, with significantly more complicated formulas. Do you have
waveguide experience? If so, you can help this project by
<a href="https://scikit-rf.readthedocs.io/en/latest/contributing/index.html#examples-and-tutorials">making a contribution to the documentation</a>.
</div>
Alternatives to scikit-rf Modeling
Before we begin, it's worth pointing out some alternatives.
In scikit-rf, you are able to use any existing standard definition by
its S-parameters. If you already have your standard defined as a network
in other tools (e.g. in your favorite circuit simulator, or actual
measurements results), you can simply export the S-parameters to Touchstone
files for use in scikit-rf. Similarly, if you're already using a data-based
calibration standard, it should be possible to use its data directly. The
S-parameters may be stored in device-specific file formats, consult your
vendor on whether they can be exported as a Touchstone file.
Finally,
for non-critical measurements below 1 GHz, sometimes one can assume
the calibration standards are ideal. In scikit-rf, one can create ideal
responses conveniently by defining an ideal transmission line and calling
the short(), open(), match(), and thru() methods (explained
in the Preparation section).
Example: HP-Agilent-Keysight Coefficient Format
After the necessary background is introduced, let's begin.
For the purpose of this guide, we're going to model the Keysight 85033E,
3.5 mm, 50 Ω, DC to 9 GHz calibration kit, with the following coefficients
[4].
| Parameter | Unit | Open | Short | Load | Thru |
| ------------ | --------------------------- | --------- | --------- | ------- | -------- |
| $\text{C}_0$ | $10^{−15} \text{ F}$ | 49.43 | | | |
| $\text{C}_1$ | $10^{−27} \text{ F/Hz}$ | -310.1 | | | |
| $\text{C}_2$ | $10^{−36} \text{ F/Hz}^2$ | 23.17 | | | |
| $\text{C}_3$ | $10^{−45} \text{ F/Hz}^3$ | -0.1597 | | | |
| $\text{L}_0$ | $10^{−12} \text{ H}$ | | 2.077 | | |
| $\text{L}_1$ | $10^{−24} \text{ H/Hz}$ | | -108.5 | | |
| $\text{L}_2$ | $10^{−33} \text{ H/Hz}^2$ | | 2.171 | | |
| $\text{L}_3$ | $10^{−42} \text{ H/Hz}^3$ | | -0.01 | | |
| Resistance | $\Omega$ | | | 50 | |
| Offset $Z_0$ | $\Omega$ | 50 | 50 | 50 | 50 |
| Offset Delay | ps | 29.242 | 31.785 | 0 | 0 |
| Offset Loss | $\text{G}\Omega$ / s | 2.2 | 2.36 | 2.3 | 2.3 |
Circuit Model
Before we start creating their network definitions, we first need to know
the underlying circuit model and the meaning of these coefficients.
As this schematic shows, this is the HP-Agilent-Keysight model for
a calibration standard.
The first part is an "offset" lossy transmission
line, defined using three parameters: (1) a real characteristic impedance
of a lossless line, usually the system impedance (for waveguide
standards, a special normalized value of 1 is used); (2) a delay,
representing its electrical length, given in picoseconds; and (3) a loss.
The loss is given in a somewhat unusual unit - gigaohms per second. With
the three parameters, one can calculate the propagation constant
($\gamma$) and the complex characteristic impedance ($Z_c$) of the
lossy line.
A shunt impedance is connected at the end of the transmission line,
and models the distributed capacitance or inductance
in the open or short standard. It's given as a third-degree
polynomial with four coefficients, $y(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3$, where $x$ is the frequency and $a_i$ are the coefficients.
For an open standard, they're $\text{C}_0$, $\text{C}_1$, $\text{C}_2$,
$\text{C}_3$, the first constant term is in femtofarad. For a short
standard, they're $\text{L}_0$, $\text{L}_1$, $\text{L}_2$, $\text{L}_3$,
the first constant term is in picohenry.
Neglected Terms
The short standard may sometimes be modeled only in terms of an offset
delay and offset loss, without a shunt impedance. Since the behavior of
a low-inductance short circuit is reasonably linear at low frequency, one can model it as an extra delay term with acceptable accuracy.
A matched load generates little reflection, so it's often simply modeled as
a $Z_0$ termination; its reflection phase shift is assumed to be negligible,
and it is given a zero offset delay.
The thru standard is sometimes modeled with an offset delay only, without loss,
for two reasons: loss is negligible at low frequencies, and when the Unknown Thru
calibration algorithm is used, the exact characteristics of the Thru is unimportant.
Conversely, in most traditional SOLT calibrations the Thru is used as a "flush" thru:
port 1 and port 2 are connected directly, without any adapters, in the Thru step.
By definition, such a thru standard is completely ideal and has zero length, so no
modeling is required. (In the table above, Keysight still gives an offset loss for the
Load and Thru, but both should be modeled as ideal because the listed offset delay is
zero, which effectively removes the offset transmission line.)
Preparation
<div id="preparation"></div>
Equipped with this circuit model, we can start to model the calibration
standards.
First, we need to import some library definitions, specify the frequency range of our
calculation. Here, we used 1 MHz to 9 GHz, with 1001 points. You may want to adjust it
for your needs. We also define an ideal_medium with a $50 \space\Omega$ port
impedance for the purpose of some future calculations.
End of explanation
"""
ideal_open = ideal_medium.open()
ideal_short = ideal_medium.short()
ideal_load = ideal_medium.match()
ideal_thru = ideal_medium.thru()
"""
Explanation: Ideal Responses
It's useful to know the special case first: ideal calibration standards are easily
created by calling the open(), short(), match(), and thru() methods in the
ideal_medium, the first three return a 1-port network. The thru() method returns
a two-port network.
End of explanation
"""
def offset_gamma_and_zc(offset_delay, offset_loss, offset_z0=50):
alpha_l = (offset_loss * offset_delay) / (2 * offset_z0)
alpha_l *= np.sqrt(freq.f / 1e9)
beta_l = 2 * np.pi * freq.f * offset_delay + alpha_l
gamma_l = alpha_l + 1j * beta_l
zc = (offset_z0) + (1 - 1j) * (offset_loss / (4 * np.pi * freq.f)) * np.sqrt(freq.f / 1e9)
return gamma_l, zc
"""
Explanation: Modeling the Offset Transmission Line
To correctly model the offset transmission line, one should use the
offset delay, offset loss, and offset $Z_0$ to derive the propagation
constant ($\gamma$) and the complex characteristic impedance ($Z_c$)
of the lossy line. Then, an actual transmission line is defined in those
terms.
The relationship between the offset line parameters and the propagation
constant is given by the following equations by Keysight [1].
They're in fact only approximate; one can obtain more accurate
results by calculating the full RLCG transmission line parameters, see
[3] and [4] for details. However, for practical calibration
standards (1-100 ps, 1-25 GΩ/s), the author of this guide found that the error
is less than 0.001.
$$
\begin{gather}
\alpha l = \frac{\text{offset loss} \cdot \text{offset delay}}{2 \cdot \text{ offset }Z_0} \sqrt{\frac{f}{10^9}} \
\beta l = 2 \pi f \cdot \text{offset delay} + \alpha l \
\gamma l = \alpha l + j\beta l\
Z_c = \text{offset }Z_0 + (1 - j) \frac{\text{offset loss}}{2 \cdot 2 \pi f} \sqrt{\frac{f}{10^9}}
\end{gather}
$$
where $\alpha$ is the attenuation constant of the line, in nepers per meter, $\beta$ is the phase constant of the line, in radians per meter, $\gamma = \alpha + j\beta$ is the propagation constant of the line, $l$ is the length of the line, $Z_c$ is the complex characteristic impedance of the lossy line.
Several facts need to be taken into account. First, the actual length $l$ of the line
is irrelevant: what's being calculated here is not just $\gamma$ but $\gamma l$, with
an implicitly defined length. Thus, if $\gamma l$ is used as $\gamma$, the length of
the line is always set to unity (i.e. 1 meter). Next, the term $\sqrt{\frac{f}{10^9}}$
scales the line loss from the nominal 1 GHz value to a given frequency, but this is
only valid for coaxial lines; waveguides follow a more complicated scaling rule. Finally,
the complex characteristic impedance $Z_c$ is different from the real characteristic
impedance offset $Z_0$. Offset $Z_0$ does not include any losses, and it's only used as
the port impedance, while $Z_c$ - calculated from offset $Z_0$ and offset loss - is the
actual impedance of the lossy line.
Let's translate these formulas to code.
End of explanation
"""
gamma_l_open, zc_open = offset_gamma_and_zc(29.242e-12, 2.2e9)
gamma_l_short, zc_short = offset_gamma_and_zc(31.785e-12, 2.36e9)
"""
Explanation: The broadcasting feature in numpy is used here. The quantities
alpha_l, beta_l, and zc are all frequency-dependent, thus they're
arrays, not scalars. But instead of looping over each frequency explicitly
and adding them to an array, here, arrays are automatically created by the
multiplication of a scalar and a numpy.array. We'll continue to use this
technique.
With the function offset_gamma_and_zc() defined, we can now calculate the
line constants for the open and short standards by calling it.
End of explanation
"""
medium_open = DefinedGammaZ0(
frequency=freq,
gamma=gamma_l_open, Z0=zc_open, z0=50
)
line_open = medium_open.line(
d=1, unit='m',
z0=medium_open.Z0, embed=True
)
medium_short = DefinedGammaZ0(
frequency=freq,
gamma=gamma_l_short, Z0=zc_short, z0=50
)
line_short = medium_short.line(
d=1, unit='m',
z0=medium_short.Z0, embed=True
)
"""
Explanation: At this point, we already have everything we need to know about this offset line.
The other half of the task is straightforward: create a two-port network for
this transmission line in scikit-rf, with a propagation constant $\gamma l$, a
characteristic impedance $Z_c$, a port impedance $\text{offset }Z_0=50\space\Omega$,
and an unity length (1 meter, because $\gamma l$ is used as $\gamma$, and $\gamma$
is measured in meters).
It's easy but a bit confusing to perform this task in scikit-rf, so it needs some elaboration:
First, create a DefinedGammaZ0 medium with these arguments: the propagation constant
gamma=gamma_l, the medium impedance Z0=zc, and port impedance z0=50
(note the spelling difference between Z0 and z0). It represents a physical medium.
Then, an actual line with a 1-meter length is derived
by calling the medium's line() method. But now, pay attention to the arguments
z0=medium.Z0 and embed=True. The argument z0 represents the line impedance.
Thus, we must set z0=medium.Z0 (or zc, where medium.Z0 comes from) to get the
proper line impedance we want. Finally, we must also set embed=True so that the
two ports at both ends of the line are set to the port impedance ($50\space\Omega$)
of the medium.
The confusing part is that in DefinedGammaZ0 class, argument z0 represents the
port impedance, but in the line() method, the same z0 argument represents the
line impedance instead! Also, if z0 is omitted, by default, line() uses the
port impedance of the medium (not medium impedance).
End of explanation
"""
# use ideal_medium, not medium_open or medium_short, to avoid confusion.
capacitor_poly = np.poly1d([
-0.1597 * 1e-45,
23.17 * 1e-36,
-310.1 * 1e-27,
49.43 * 1e-15
])
capacitor_list = capacitor_poly(freq.f)
shunt_open = ideal_medium.capacitor(capacitor_list) ** ideal_medium.short()
inductor_poly = np.poly1d([
-0.01 * 1e-42,
2.171 * 1e-33,
-108.5 * 1e-24,
2.077 * 1e-12
])
inductor_list = inductor_poly(freq.f)
shunt_short = ideal_medium.inductor(inductor_list) ** ideal_medium.short()
"""
Explanation: Modeling the Shunt Impedance
Then, we need to model the shunt impedance of the open and short standards.
For the open standard, it's a capacitance. For the short standard, it's
an inductance.
Both are modeled as a third-degree polynomial, as a function of frequency.
In numpy, one can quickly define such a function via
np.poly1d([x3, x2, x1, x0]). This is a higher-order function which accepts
a list of coefficients in descending order, and returns a callable polynomial
function.
After the polynomial is evaluated, we can generate the frequency-dependent
capacitors and inductors. The open circuit is modeled as a series
medium.capacitor() followed by an ideal medium.short(). The short circuit
is modeled as a series medium.inductor() followed by an ideal
medium.short().
Because the capacitor and inductor are defined with respect to the port impedance,
not any particular lossy transmission line, we use ideal_medium rather than
medium_open or medium_short in the following examples, to avoid confusion
(the latter two are also usable, since they use the same port impedance).
End of explanation
"""
open_std = line_open ** shunt_open
short_std = line_short ** shunt_short
load_std = ideal_medium.match()
thru_std = ideal_medium.thru()
"""
Explanation: For the open standard, a series medium.shunt_capacitor() terminated by
a medium.open() could have also been used to get the same result. The
medium.open() termination is important, because shunt_capacitor() creates
a two-port network, and the other port needs to be open. Otherwise, a line
terminated solely by a shunt_capacitor() produces incorrect S-parameters.
Completion
Finally, we connect these model components together, and add definitions for the
ideal load and Thru, this completes our modeling.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
"""
Explanation: Now you can pass these standards into scikit-rf's calibration routines, or use the write_touchstone() method to save them on the disk for future use.
<div class="alert alert-block alert-info">
Here, the <code>open_std</code>, <code>short_std</code> and <code>load_std</code> we
generated are one-port networks, but most scikit-rf's calibration routines expect a
two-port networks as standards since they're used in two-port calibrations. You can
use the function <code>skrf.two_port_reflect()</code> to generate a two-port network
from two one-port networks. For more information, be sure to read the
<a href="https://scikit-rf.readthedocs.io/en/latest/examples/metrology/SOLT.html">SOLT
calibration</a> example in the documentation.
</div>
Plotting
End of explanation
"""
mag = plt.subplot(1, 1, 1)
plt.title("Keysight 85033E Open (S11)")
open_std.plot_s_db(color='red', label="Magnitude")
plt.legend(bbox_to_anchor=(0.73, 1), loc='upper left', borderaxespad=0)
phase = mag.twinx()
open_std.plot_s_deg(color='blue', label="Phase")
plt.legend(bbox_to_anchor=(0.73, 0.9), loc='upper left', borderaxespad=0)
"""
Explanation: Finally, let's take a look at the magnitudes and phase shifts of our standards.
Open
End of explanation
"""
mag = plt.subplot(1, 1, 1)
plt.title("Keysight 85033E Short (S11)")
short_std.plot_s_db(color='red', label="Magnitude")
plt.legend(bbox_to_anchor=(0.73, 1), loc='upper left', borderaxespad=0)
phase = mag.twinx()
short_std.plot_s_deg(color='blue', label="Phase")
plt.legend(bbox_to_anchor=(0.73, 0.9), loc='upper left', borderaxespad=0)
"""
Explanation: Short
End of explanation
"""
import skrf
from skrf.media import DefinedGammaZ0
import numpy as np
def keysight_calkit_offset_line(freq, offset_delay, offset_loss, offset_z0):
if offset_delay or offset_loss:
alpha_l = (offset_loss * offset_delay) / (2 * offset_z0)
alpha_l *= np.sqrt(freq.f / 1e9)
beta_l = 2 * np.pi * freq.f * offset_delay + alpha_l
zc = offset_z0 + (1 - 1j) * (offset_loss / (4 * np.pi * freq.f)) * np.sqrt(freq.f / 1e9)
gamma_l = alpha_l + beta_l * 1j
medium = DefinedGammaZ0(frequency=freq, z0=offset_z0, Z0=zc, gamma=gamma_l)
offset_line = medium.line(d=1, unit='m', z0=medium.Z0, embed=True)
return medium, offset_line
else:
medium = DefinedGammaZ0(frequency=freq, Z0=offset_z0)
line = medium.line(d=0)
return medium, line
def keysight_calkit_open(freq, offset_delay, offset_loss, c0, c1, c2, c3, offset_z0=50):
medium, line = keysight_calkit_offset_line(freq, offset_delay, offset_loss, offset_z0)
# Capacitance is defined with respect to the port impedance offset_z0, not the lossy
# line impedance. In scikit-rf, the return values of `shunt_capacitor()` and `medium.open()`
# methods are (correctly) referenced to the port impedance.
if c0 or c1 or c2 or c3:
poly = np.poly1d([c3, c2, c1, c0])
capacitance = medium.shunt_capacitor(poly(freq.f)) ** medium.open()
else:
capacitance = medium.open()
return line ** capacitance
def keysight_calkit_short(freq, offset_delay, offset_loss, l0, l1, l2, l3, offset_z0=50):
# Inductance is defined with respect to the port impedance offset_z0, not the lossy
# line impedance. In scikit-rf, the return values of `inductor()` and `medium.short()`
# methods are (correctly) referenced to the port impedance.
medium, line = keysight_calkit_offset_line(freq, offset_delay, offset_loss, offset_z0)
if l0 or l1 or l2 or l3:
poly = np.poly1d([l3, l2, l1, l0])
inductance = medium.inductor(poly(freq.f)) ** medium.short()
else:
inductance = medium.short()
return line ** inductance
def keysight_calkit_load(freq, offset_delay=0, offset_loss=0, offset_z0=50):
medium, line = keysight_calkit_offset_line(freq, offset_delay, offset_loss, offset_z0)
load = medium.match()
return line ** load
def keysight_calkit_thru(freq, offset_delay=0, offset_loss=0, offset_z0=50):
medium, line = keysight_calkit_offset_line(freq, offset_delay, offset_loss, offset_z0)
thru = medium.thru()
return line ** thru
freq = skrf.Frequency(1, 9000, 1001, "MHz")
open_std = keysight_calkit_open(
freq,
offset_delay=29.242e-12, offset_loss=2.2e9,
c0=49.43e-15, c1=-310.1e-27, c2=23.17e-36, c3=-0.1597e-45
)
short_std = keysight_calkit_short(
freq,
offset_delay=31.785e-12, offset_loss=2.36e9,
l0=2.077e-12, l1=-108.5e-24, l2=2.171e-33, l3=-0.01e-42
)
load_std = keysight_calkit_load(freq)
thru_std = keysight_calkit_thru(freq)
"""
Explanation: Conclusion
As the graphs above show, the losses in the standards are extremely low, on the
order of 0.01 dB throughout the spectrum. The phase shift, by contrast, is what
really needs to be compensated: at 1 GHz, it has already reached roughly 25 degrees.
Code Snippet
For convenience, you can reuse the following code snippet to generate calibration standard networks from coefficients in Keysight format.
End of explanation
"""
def rs_to_keysight(rs_offset_length, rs_offset_loss, offset_z0=50):
offset_delay = rs_offset_length / skrf.constants.c
offset_loss = skrf.mathFunctions.db_2_np(rs_offset_loss * offset_z0 / offset_delay)
return offset_delay, offset_loss
"""
Explanation: Example: Rohde & Schwarz / Anritsu Coefficient Format
On Rohde & Schwarz and Anritsu VNAs, a slightly different format is used to define the coefficients. Here's an example of a Maury Microwave 8050CK10, a 3.5 mm, DC to 26.5 GHz calibration kit defined in Rohde & Schwarz's format [5].
| Parameter | Unit | Open | Short | Load | Thru |
| ------------ | --------------------------- | ---------- | --------- | ------- | -------- |
| $\text{C}_0$ | $10^{−15} \text{ F}$ | 62.54 | | | |
| $\text{C}_1$ | $10^{−15} \text{ F/GHz}$ | 1284.0 | | | |
| $\text{C}_2$ | $10^{−15} \text{ F/GHz}^2$ | 107.6 | | | |
| $\text{C}_3$ | $10^{−15} \text{ F/GHz}^3$ | -1.886 | | | |
| $\text{L}_0$ | $10^{−12} \text{ H}$ | | 0 | | |
| $\text{L}_1$ | $10^{−12} \text{ H/GHz}$ | | 0 | | |
| $\text{L}_2$ | $10^{−12} \text{ H/GHz}^2$ | | 0 | | |
| $\text{L}_3$ | $10^{−12} \text{ H/GHz}^3$ | | 0 | | |
| Resistance | $\Omega$ | | | 50 | |
| Offset Length| mm | 4.344 | 5.0017 | 0 | 17.375 |
| Offset Loss |$\text{dB / }\sqrt{\text{GHz}}$| 0.0033 | 0.0038 | 0 | 0.0065 |
Modeling the Offset Transmission Line
As shown, it's essentially the same circuit model; the only difference is that the offset transmission line is defined in different units of measurement: the offset delay is given as a physical length instead of a time, the offset loss is given in decibels, and the offset $Z_0$ is fixed at $50 \space\Omega$ and unlisted.
We can reuse the same calculations in the Keysight model after a simple unit conversion using these equations [2].
$$
\begin{gather}
\text{D'} = \frac{D \cdot \sqrt{\epsilon_r}}{c_0} \
\text{L'} = \frac{L \cdot Z_0}{D' \cdot 20 \log_{10}{(e)}}
\end{gather}
$$
where $D$ and $L$ are the offset length (meter) and offset loss ($\text{dB / }\sqrt{\text{GHz}}$) in the R&S model, $D'$ and $L'$ are the offset delay (second) and offset loss ($\Omega$ / s) in Keysight's model, $\epsilon_r$ is the dielectric constant, it's air by definition, thus $\epsilon_r = 1$, and $c_0$ is the speed of light. The term $20 \log_{10}{(e)}$ is a conversion from decibel to neper.
End of explanation
"""
offset_delay, offset_loss = rs_to_keysight(4.344e-3, 0.0033)
gamma_l, zc = offset_gamma_and_zc(offset_delay, offset_loss)
medium_open = DefinedGammaZ0(
frequency=freq,
gamma=gamma_l, Z0=zc, z0=50
)
line_open = medium_open.line(
d=1, unit='m',
z0=medium_open.Z0, embed=True
)
offset_delay, offset_loss = rs_to_keysight(5.0017e-3, 0.0038)
gamma_l, zc = offset_gamma_and_zc(offset_delay, offset_loss)
medium_short = DefinedGammaZ0(
frequency=freq,
gamma=gamma_l, Z0=zc, z0=50
)
line_short = medium_short.line(
d=1, unit='m',
z0=medium_short.Z0, embed=True
)
offset_delay, offset_loss = rs_to_keysight(17.375e-3, 0.0065)
gamma_l, zc = offset_gamma_and_zc(offset_delay, offset_loss)
medium_thru = DefinedGammaZ0(
frequency=freq,
gamma=gamma_l, Z0=zc, z0=50
)
line_thru = medium_thru.line(
d=1, unit='m',
z0=medium_thru.Z0, embed=True
)
"""
Explanation: After unit conversion, we can define standards just like how calibration standards in Keysight-style
coefficients are defined.
End of explanation
"""
capacitor_poly = np.poly1d([
-0.001886 * 1000e-45,
0.1076 * 1000e-36,
-1.284 * 1000e-27,
62.54 * 1e-15
])
capacitor_open = capacitor_poly(freq.f)
shunt_open = ideal_medium.shunt_capacitor(capacitor_open) ** ideal_medium.open()
# or: shunt_open = ideal_medium.capacitor(capacitor_open) ** ideal_medium.short()
# see the Keysight example for explanation.
shunt_short = ideal_medium.short()
"""
Explanation: Modeling the Shunt Impedance
The definition of shunt impedance is identical to the Keysight format.
But, beware of the units used for the capacitance and inductance! In the
given table, the capacitances are given in $10^{−15} \text{ F}$, $10^{−15} \text{ F/GHz}$,
$10^{−15} \text{ F/GHz}^2$, and
$10^{−15} \text{ F/GHz}^3$. For Keysight and Anritsu VNAs, they're given in $10^{-15} \text{ F}$,
$10^{-27} \text{ F/Hz}$, $10^{-36} \text{ F/Hz}^2$ and $10^{-45} \text{ F/Hz}^3$. Inductance
units have the same differences. Always double-check the units before start modeling. To
convert the units from the first to the second format, multiply $x_1$, $x_2$ and $x_3$
by 1000 (don't change the constant term $x_0$). For consistency, we'll use the second format
in the code.
Since all inductance coefficients of this kit's short standard are zero, only the
capacitance in the open standard is modeled; the short is modeled as ideal.
End of explanation
"""
open_std = line_open ** shunt_open
short_std = line_short ** shunt_short
load_std = ideal_medium.match()
thru_std = line_thru
"""
Explanation: Completion
Finally, we connect these model components together.
End of explanation
"""
mag = plt.subplot(1, 1, 1)
plt.title("Maury Microwave 8050CK10 Open (S11)")
open_std.plot_s_db(color='red', label="Magnitude")
plt.legend(bbox_to_anchor=(0.73, 1), loc='upper left', borderaxespad=0)
phase = mag.twinx()
open_std.plot_s_deg(color='blue', label="Phase")
plt.legend(bbox_to_anchor=(0.73, 0.9), loc='upper left', borderaxespad=0)
"""
Explanation: Plotting
Again, let's examine the behaviors of the finished standards.
Open
End of explanation
"""
mag = plt.subplot(1, 1, 1)
plt.title("Maury Microwave 8050CK10 Short (S11)")
short_std.plot_s_db(color='red', label="Magnitude")
plt.legend(bbox_to_anchor=(0.73, 1), loc='upper left', borderaxespad=0)
phase = mag.twinx()
short_std.plot_s_deg(color='blue', label="Phase")
plt.legend(bbox_to_anchor=(0.73, 0.9), loc='upper left', borderaxespad=0)
"""
Explanation: Short
End of explanation
"""
mag = plt.subplot(1, 1, 1)
plt.title("Maury Microwave 8050CK10 Thru (S21)")
thru_std.s21.plot_s_db(color='red', label="Magnitude")
plt.legend(bbox_to_anchor=(0.73, 1), loc='upper left', borderaxespad=0)
phase = mag.twinx()
thru_std.s21.plot_s_deg(color='blue', label="Phase")
plt.legend(bbox_to_anchor=(0.73, 0.9), loc='upper left', borderaxespad=0)
"""
Explanation: Thru
End of explanation
"""
import skrf
from skrf.media import DefinedGammaZ0
import numpy as np
def rs_to_keysight(rs_offset_length, rs_offset_loss, offset_z0=50):
offset_delay = rs_offset_length / skrf.constants.c
offset_loss = skrf.mathFunctions.db_2_np(rs_offset_loss * offset_z0 / offset_delay)
return offset_delay, offset_loss
def rs_calkit_offset_line(freq, rs_offset_length, rs_offset_loss, offset_z0):
if rs_offset_length or rs_offset_loss:
offset_delay, offset_loss = rs_to_keysight(rs_offset_length, rs_offset_loss)
alpha_l = (offset_loss * offset_delay) / (2 * offset_z0)
alpha_l *= np.sqrt(freq.f / 1e9)
beta_l = 2 * np.pi * freq.f * offset_delay + alpha_l
zc = offset_z0 + (1 - 1j) * (offset_loss / (4 * np.pi * freq.f)) * np.sqrt(freq.f / 1e9)
gamma_l = alpha_l + beta_l * 1j
medium = DefinedGammaZ0(frequency=freq, z0=offset_z0, Z0=zc, gamma=gamma_l)
offset_line = medium.line(d=1, unit='m', z0=medium.Z0, embed=True)
return medium, offset_line
else:
medium = DefinedGammaZ0(frequency=freq, Z0=offset_z0)
line = medium.line(d=0)
return medium, line
def rs_calkit_open(freq, offset_length, offset_loss, c0, c1, c2, c3, offset_z0=50):
# Capacitance is defined with respect to the port impedance offset_z0, not the lossy
# line impedance. In scikit-rf, the return values of `shunt_capacitor()` and `medium.open()`
# methods are (correctly) referenced to the port impedance.
medium, line = rs_calkit_offset_line(freq, offset_length, offset_loss, offset_z0)
if c0 or c1 or c2 or c3:
poly = np.poly1d([c3, c2, c1, c0])
capacitance = medium.shunt_capacitor(poly(freq.f)) ** medium.open()
else:
capacitance = medium.open()
return line ** capacitance
def rs_calkit_short(freq, offset_length, offset_loss, l0, l1, l2, l3, offset_z0=50):
# Inductance is defined with respect to the port impedance offset_z0, not the lossy
# line impedance. In scikit-rf, the return values of `inductor()` and `medium.short()`
# methods are (correctly) referenced to the port impedance.
medium, line = rs_calkit_offset_line(freq, offset_length, offset_loss, offset_z0)
if l0 or l1 or l2 or l3:
poly = np.poly1d([l3, l2, l1, l0])
inductance = medium.inductor(poly(freq.f)) ** medium.short()
else:
inductance = medium.short()
return line ** inductance
def rs_calkit_load(freq, offset_length=0, offset_loss=0, offset_z0=50):
medium, line = rs_calkit_offset_line(freq, offset_length, offset_loss, offset_z0)
load = medium.match()
return line ** load
def rs_calkit_thru(freq, offset_length=0, offset_loss=0, offset_z0=50):
medium, line = rs_calkit_offset_line(freq, offset_length, offset_loss, offset_z0)
thru = medium.thru()
return line ** thru
freq = skrf.Frequency(1, 9000, 1001, "MHz")
open_std = rs_calkit_open(
freq,
offset_length=4.344e-3, offset_loss=0.0033,
# Due to unit differences, the numerical values of c1, c2 and c3
# must be multiplied by 1000 from the R&S datasheet value. For
# Anritsu, this is not needed. Check the units on your datasheet!
c0=62.54 * 1e-15,
c1=-1.284 * 1000e-27,
c2=0.1076 * 1000e-36,
c3=-0.001886 * 1000e-45
)
short_std = rs_calkit_short(
freq,
offset_length=5.0017e-3, offset_loss=0.0038,
l0=0, l1=0, l2=0, l3=0
)
load_std = rs_calkit_load(freq)
thru_std = rs_calkit_thru(
freq,
offset_length=17.375e-3, offset_loss=0.0065
)
"""
Explanation: Conclusion
The results are similar to the Keysight calibration standards. The S21 graph for the
Thru standard explains why adding an electrical delay can sometimes serve as a crude
but usable calibration method ("port extension") for VNA measurements. Again, losses
are extremely low; phase shift is the dominant source of non-ideal behaviour in the standards.
Code Snippet
For convenience, you can reuse the following code snippet to generate calibration standard networks from coefficients in Rohde & Schwarz and Anritsu format.
End of explanation
"""
|
jobovy/misc-notebooks | inference/open-cluster-ABC-w-lack-of-CCSNe.ipynb | bsd-3-clause | def sftime_ABC(n=100,K=1,tccsn=4.,tmax=20.):
out= []
for ii in range(n):
while True:
# Sample from prior
tsf= numpy.random.uniform()*tmax
# Sample K massive-star formation times
stars= numpy.random.uniform(size=K)*tsf
# Only accept if all go CCSN after SF ceases
if numpy.all(stars+tccsn > tsf): break
out.append(tsf)
return out
"""
Explanation: ABC inference of upper limit on star-formation timescale from lack of CCSN
The ABC sampler, assuming K massive stars that each explode as a CCSN a time tccsn after formation:
End of explanation
"""
pdf_1ccsn= sftime_ABC(n=100000)
pdf_2ccsn= sftime_ABC(n=100000,K=2)
pdf_5ccsn= sftime_ABC(n=100000,K=5)
dum=bovy_plot.bovy_hist(pdf_1ccsn,range=[0.,20.],
bins=31,normed=True,
histtype='step')
dum=bovy_plot.bovy_hist(pdf_2ccsn,range=[0.,20.],
bins=31,normed=True,
histtype='step',overplot=True)
dum=bovy_plot.bovy_hist(pdf_5ccsn,range=[0.,20.],
bins=31,normed=True,
histtype='step',overplot=True)
#My analytical calculation for K=1: posterior propto min(1, 4/tsf)
xs= numpy.linspace(0.,20.,1001)
# avoid dividing by zero at xs=0
ys= numpy.ones_like(xs)
ys[xs > 4.]= 4./xs[xs > 4.]
ys/= numpy.sum(ys)*(xs[1]-xs[0])
plot(xs,ys)
"""
Explanation: The PDF for 1, 2, and 5 CCSN
End of explanation
"""
print numpy.percentile(pdf_1ccsn,95)
print numpy.percentile(pdf_2ccsn,95)
print numpy.percentile(pdf_5ccsn,95)
"""
Explanation: And the 95% confidence limits
End of explanation
"""
pdf_11ccsn= sftime_ABC(n=100000,K=11,tccsn=6.)
print numpy.percentile(pdf_11ccsn,95)
"""
Explanation: For 11 lighter CCSNe with 6 Myr lag
End of explanation
"""
|
JanetMatsen/plotting_python | plot_groupby_and_subplots.ipynb | apache-2.0 | df = pd.DataFrame({'age':[1.,2,3,4,5,6,7,8,9],
'height':[4, 4.5, 5, 6, 7, 8, 9, 9.5, 10],
'gender':['M','F', 'F','M','M','F', 'F','M', 'F'],
#'hair color':['brown','black', 'brown', 'blonde', 'brown', 'red',
# 'brown', 'brown', 'black' ],
'hair length':[1,6,2,3,1,5,6,5,3] })
"""
Explanation: Prepare a minimal data set.
End of explanation
"""
def plot_2_subplots(df, x1, y1, y2, x2=None, title=None):
fig, axs = plt.subplots(2, 1, figsize=(5, 4))
colors = ['c','b']
# get the data array for x1:
x1d = df[x1]
# get the data array for x2:
if x2 is None: # use x1 as x2
x2d=df[x1]
x2=x1
# todo (?) share x axis if x2 was None?
else:
x2d=df[x2]
# get the data arrays for y1, y2:
y1d=df[y1]
y2d=df[y2]
    axs[0].plot(x1d, y1d, linestyle='--', marker='o', color=colors[0], label=y1)
axs[0].set_xlabel(x1)
axs[0].set_ylabel(y1)
    axs[1].plot(x2d, y2d, linestyle='--', marker='o', color=colors[1], label=y2)
axs[1].set_xlabel(x2)
axs[1].set_ylabel(y2)
for subplot in axs:
subplot.legend(loc='best')
axs[0].axhline(y=0, color='k')
# fill 2nd plot
axs[1].axhline(y=0, color='k')
plt.legend()
if title is not None:
plt.title(title)
plt.tight_layout()
return fig
p = plot_2_subplots(df=df, x1='age', y1='height', y2='hair length', x2=None, title=None)
"""
Explanation: Plotting separate data columns as separate sub-plots:
End of explanation
"""
def plot_by_group(df, group, x, y, title=None):
fig, ax = plt.subplots(1, 1, figsize=(3.5, 2.5))
ax.set_xlabel(x)
ax.set_ylabel(y)
# todo: move title up(?)
if title is not None:
ax.set_title(title)
for tup, group_df in df.groupby(group):
# sort on the x attribute
group_df = group_df.sort_values(x)
        # the group key (e.g. 'M'/'F') labels the line in the legend
        ax.plot(group_df[x], group_df[y], marker='o', label=tup)
# todo: put legend outside the figure
plt.legend()
plot_by_group(df=df, group='gender', x='age', y='height', title='this is a title, you bet.')
"""
Explanation: Plot multiple groups (from same data columns) onto the same plot.
End of explanation
"""
def plot_2_subplots_v2(df, x1, x2, y1, y2, title=None):
fig, axs = plt.subplots(2, 1, figsize=(5, 4))
plt_data = {1:(df[x1], df[y1]), 2:(df[x2], df[y2])}
titles = {1:x1, 2:x2}
colors = {1:'#b3cde3', 2:'#decbe4'}
for row, ax in enumerate(axs, start=1):
print(row, ax)
ax.plot(plt_data[row][0], plt_data[row][1], color=colors[row], marker='o', label=row)
ax.set_xlabel('some x')
ax.set_title(titles[row])
plt.tight_layout()
return fig
# kind of a silly example.
p = plot_2_subplots_v2(df=df, x1='age', y1='height', y2='hair length', x2='age', title=None)
"""
Explanation: Jake Vanderplas:
plt.plot can be noticeably more efficient than plt.scatter. The reason is that plt.scatter has the capability to render a different size and/or color for each point, so the renderer must do the extra work of constructing each point individually.
http://nbviewer.jupyter.org/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.02-Simple-Scatter-Plots.ipynb
End of explanation
"""
|
CityofToronto/bdit_congestion | here_map_change/hereanalysis.ipynb | gpl-3.0 | %sql select count(*) from (select distinct(link_dir) from here.ta_201710) oct
"""
Explanation: Total distinct link_dir in Oct:
End of explanation
"""
%sql select count(*) from (select distinct(link_dir) from here.ta_201711) nov
"""
Explanation: Total distinct link_dir in Nov:
End of explanation
"""
%%sql
select count(*) as intersection from (select distinct(link_dir) as links from here.ta_201711
INTERSECT
select distinct(link_dir) as links from here.ta_201710) as common
"""
Explanation: Common link_dir for Nov and Oct (i.e. $Nov \cap Oct$):
End of explanation
"""
146728-139168
"""
Explanation: link_dir in Oct AND NOT in Nov (i.e. $Oct - (Nov \cap Oct)$):
End of explanation
"""
string1 = '''SELECT count(link_dir) as count_oct FROM here.ta_201710
WHERE link_dir NOT IN (select distinct(link_dir) FROM here.ta_201711
INTERSECT
SELECT distinct(link_dir) FROM here.ta_201710)
GROUP BY link_dir'''
string2 = '''SELECT count(link_dir) as count_nov FROM here.ta_201711
WHERE link_dir NOT IN (select distinct(link_dir) FROM here.ta_201711
INTERSECT
SELECT distinct(link_dir) FROM here.ta_201710)
GROUP BY link_dir'''
october = pandasql.read_sql(pg.SQL(string1), con)
november = pandasql.read_sql(pg.SQL(string2), con)
o = np.array(np.log(october['count_oct']))
n = np.array(np.log(november['count_nov']))
o.min(), o.max(), n.min(), n.max()
pd.options.display.mpl_style = 'default'
s = 23
plt.figure(figsize = (17,10))
plt.hist(o, normed=True, bins = [0.1*i for i in range(0, 90)], alpha=0.6, color = 'red')
plt.xlabel('counts', fontsize = s)
plt.ylabel('frequency', fontsize = s)
plt.rc('ytick', labelsize=s)
plt.rc('xtick', labelsize=s)
plt.title('Frequency Distribution of Links (October)', fontsize = 28)
plt.tight_layout()
pd.options.display.mpl_style = 'default'
plt.figure(figsize = (17,10))
plt.hist(n, normed=True, bins = [0.1*i for i in range(0, 89)], alpha=0.6, color = 'blue')
plt.xlabel('counts', fontsize = s)
plt.ylabel('frequency', fontsize = s)
plt.rc('ytick', labelsize=s)
plt.rc('xtick', labelsize=s)
plt.title('Frequency Distribution of Links (November)', fontsize = 28)
plt.tight_layout()
plt.show()
"""
Explanation: Below are frequency histograms for the links' counts for each month. The shaded data represents the respective month in the title of the graph, and the black line represents the month it is being compared to.
End of explanation
"""
144502-139168
%%sql
select intersection/total as percentage from
(select count(*)::float as intersection from (select distinct(link_dir) as links from here.ta_201711
INTERSECT
select distinct(link_dir) as links from here.ta_201710) as common) t1,
(select count(*)::float as total from (select distinct(link_dir) from here.ta_201710) f) t2;
%%sql
select intersection/total as percentage from
(select count(*)::float as intersection from (select distinct(link_dir) as links from here.ta_201711
INTERSECT
select distinct(link_dir) as links from here.ta_201710) as common) t1,
(select count(*)::float as total from (select distinct(link_dir) from here.ta_201711) f) t2;
"""
Explanation: link_dir in Nov AND NOT in Oct (i.e. $Nov - (Nov \cap Oct)$):
End of explanation
"""
|
google-aai/sc17 | cats/step_5_to_6_part1.ipynb | apache-2.0 | # Enter your username:
YOUR_GMAIL_ACCOUNT = '******' # Whatever is before @gmail.com in your email address
import cv2
import numpy as np
import os
import pickle
import shutil
import sys
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from random import random
from scipy import stats
from sklearn import preprocessing
from sklearn import svm
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.metrics import precision_recall_curve
import tensorflow as tf
from tensorflow.contrib.learn import LinearClassifier
from tensorflow.contrib.learn import Experiment
from tensorflow.contrib.learn.python.learn import learn_runner
from tensorflow.contrib.layers import real_valued_column
from tensorflow.contrib.learn import RunConfig
# Directories:
PREPROC_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/')
OUTPUT_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/logreg/') # Does not need to exist yet.
"""
Explanation: Logistic Regression Based on Extracted Features
Author(s): bfoo@google.com, kozyr@google.com
In this notebook, we will perform training over the features collected from step 4's image and feature analysis step. Two tools will be used in this demo:
Scikit learn: the widely used, single machine Python machine learning library
TensorFlow: Google's home-grown machine learning library that allows distributed machine learning
Setup
You need to have worked through the feature engineering notebook in order for this to work, since we'll be loading the pickled datasets we saved in Step 4. You might have to adjust the directories below if you changed the save directory in that notebook.
End of explanation
"""
training_std = pickle.load(open(PREPROC_DIR + 'training_std.pkl', 'r'))
debugging_std = pickle.load(open(PREPROC_DIR + 'debugging_std.pkl', 'r'))
training_labels = pickle.load(open(PREPROC_DIR + 'training_labels.pkl', 'r'))
debugging_labels = pickle.load(open(PREPROC_DIR + 'debugging_labels.pkl', 'r'))
FEATURE_LENGTH = training_std.shape[1]
print FEATURE_LENGTH
# Examine the shape of the feature data we loaded:
print(type(training_std)) # Type will be numpy array.
print(np.shape(training_std)) # Rows, columns.
# Examine the label data we loaded:
print(type(training_labels)) # Type will be numpy array.
print(np.shape(training_labels)) # How many datapoints?
training_labels[:3] # First 3 training labels.
"""
Explanation: Load stored features and labels
Load from the pkl files saved in step 4 and confirm that the feature length is correct.
End of explanation
"""
# Plug into scikit-learn for logistic regression training
model = LogisticRegression(penalty='l1', C=0.2) # C is inverse of the regularization strength
model.fit(training_std, training_labels)
# Count the non-zero coefficients to check the regularization strength
print 'Non-zero weights', sum(model.coef_[0] != 0)
"""
Explanation: Step 5: Enabling Logistic Regression to Run
Logistic regression is a generalized linear model that predicts a probability value of whether each picture is a cat. Scikit-learn has a very easy interface for training a logistic regression model.
Logistic Regression in scikit-learn
In logistic regression, one of the key hyperparameters is C, the inverse of the regularization strength. Regularization is a penalty associated with the complexity of the model itself, such as the magnitude of its weights. The example below uses "L1" regularization, which has the following behavior: as C decreases, the number of non-zero weights also decreases (complexity decreases).
A high complexity model (high C) will fit very well to the training data, but will also capture the noise inherent in the training set. This could lead to poor performance when predicting labels on the debugging set.
A low complexity model (low C) does not fit as well with training data, but will generalize better over unseen data. There is a delicate balance in this process, as oversimplifying the model also hurts its performance.
End of explanation
"""
# Get the output predictions of the training and debugging inputs
training_predictions = model.predict_proba(training_std)[:, 1]
debugging_predictions = model.predict_proba(debugging_std)[:, 1]
"""
Explanation: Step 6: Train Logistic Regression with scikit-learn
Let's train!
End of explanation
"""
# Accuracy metric:
def get_accuracy(truth, predictions, threshold=0.5, roundoff=2):
"""
Args:
truth: can be Boolean (False, True), int (0, 1), or float (0, 1)
predictions: number between 0 and 1, inclusive
threshold: we convert predictions to 1s if they're above this value
roundoff: report accuracy to how many decimal places?
Returns:
accuracy: number correct divided by total predictions
"""
  truth = np.array(truth) == 1  # handles bool, int and float label encodings
predicted = np.array(predictions) >= threshold
matches = sum(predicted == truth)
accuracy = float(matches) / len(truth)
return round(accuracy, roundoff)
# Compute our accuracy metric for training and debugging
print 'Training accuracy is ' + str(get_accuracy(training_labels, training_predictions))
print 'Debugging accuracy is ' + str(get_accuracy(debugging_labels, debugging_predictions))
"""
Explanation: That was easy! But how well did it do? Let's check the accuracy of the model we just trained.
End of explanation
"""
def train_input_fn():
training_X_tf = tf.convert_to_tensor(training_std, dtype=tf.float32)
training_y_tf = tf.convert_to_tensor(training_labels, dtype=tf.float32)
return {'features': training_X_tf}, training_y_tf
def eval_input_fn():
debugging_X_tf = tf.convert_to_tensor(debugging_std, dtype=tf.float32)
debugging_y_tf = tf.convert_to_tensor(debugging_labels, dtype=tf.float32)
return {'features': debugging_X_tf}, debugging_y_tf
"""
Explanation: Step 5: Enabling Logistic Regression to Run v2.0
Tensorflow Model
Tensorflow is a Google home-grown tool that allows one to define a model and run distributed training on it. In this notebook, we focus on the atomic pieces for building a tensorflow model. However, this will all be trained locally.
Input functions
Tensorflow requires the user to define input functions, which are functions that return rows of feature vectors, and their corresponding labels. Tensorflow will periodically call these functions to obtain data as model training progresses.
Why not just provide the feature vectors and labels upfront? Again, this comes down to the distributed aspect of Tensorflow, where data can be received from various sources, and not all data can fit on a single machine. For instance, you may have several million rows distributed across a cluster, but any one machine can only provide a few thousand rows. Tensorflow allows you to define the input function to pull data in from a queue rather than a numpy array, and that queue can contain training data that is available at that time.
Another practical reason for supplying limited training data is that sometimes the feature vectors are very long, and only a few rows can fit within memory at a time. Finally, complex ML models (such as deep neural networks) take a long time to train and use up a lot of resources, and so limiting the training samples at each machine allows us to train faster and without memory issues.
The input function's returned features is defined as a dictionary of scalar, categorical, or tensor-valued features. The returned labels from an input function is defined as a single tensor storing the labels. In this notebook, we will simply return the entire set of features and labels with every function call.
End of explanation
"""
# Tweak this hyperparameter to improve debugging precision-recall AUC.
REG_L1 = 5.0 # Use the inverse of C in sklearn, i.e 1/C.
LEARNING_RATE = 2.0 # How aggressively to adjust coefficients during optimization?
TRAINING_STEPS = 20000
# The estimator requires an array of features from the dictionary of feature columns to use in the model
feature_columns = [real_valued_column('features', dimension=FEATURE_LENGTH)]
# We use Tensorflow's built-in LinearClassifier estimator, which implements a logistic regression.
# You can go to the model_dir below to see what Tensorflow leaves behind during training.
# Delete the directory if you wish to retrain.
estimator = LinearClassifier(feature_columns=feature_columns,
optimizer=tf.train.FtrlOptimizer(
learning_rate=LEARNING_RATE,
l1_regularization_strength=REG_L1),
model_dir=OUTPUT_DIR + '-model-reg-' + str(REG_L1)
)
"""
Explanation: Logistic Regression with TensorFlow
Tensorflow's linear classifiers, such as logistic regression, are structured as estimators. An estimator has the ability to compute the objective function of the ML model, and take a step towards reducing it. Tensorflow has built-in estimators such as "LinearClassifier", which is just a logistic regression trainer. These estimators have additional metrics that are calculated, such as the average accuracy at threshold = 0.5.
End of explanation
"""
def generate_experiment_fn():
def _experiment_fn(output_dir):
return Experiment(estimator=estimator,
train_input_fn=train_input_fn,
eval_input_fn=eval_input_fn,
train_steps=TRAINING_STEPS,
eval_steps=1,
min_eval_frequency=1)
return _experiment_fn
"""
Explanation: Experiments and Runners
An experiment is a TensorFlow object that stores the estimator, as well as several other parameters. It can also periodically write the model progress into checkpoints which can be loaded later if you would like to continue the model where the training last left off.
Some of the parameters are:
train_steps: how many times to adjust model weights before stopping
eval_steps: when a summary is written, the model, in its current state of progress, will try to predict the debugging data and calculate its accuracy. Eval_steps is set to 1 because we only need to call the input function once (already returns the entire evaluation dataset).
The rest of the parameters just boils down to "do evaluation once".
(If you run the below script multiple times without changing REG_L1 or train_steps, you will notice that the model does not train, as you've already trained the model that many steps for the given configuration).
End of explanation
"""
learn_runner.run(generate_experiment_fn(), OUTPUT_DIR + '-model-reg-' + str(REG_L1))
"""
Explanation: Step 6: Train Logistic Regression with TensorFlow
Unless you change TensorFlow's verbosity, there is a lot of text that is outputted. Such text can be useful when debugging a distributed training pipeline, but is pretty noisy when running from a notebook locally. The line to look for is the chunk at the end where "accuracy" is reported. This is the final result of the model.
End of explanation
"""
|
sfegan/calin | examples/simulation/toy event simulation for mst nectarcam array.ipynb | gpl-2.0 | %pylab inline
import calin.math.hex_array
import calin.provenance.system_info
import calin.simulation.vs_optics
import calin.simulation.geant4_shower_generator
import calin.simulation.ray_processor
import calin.simulation.tracker
import calin.simulation.detector_efficiency
import calin.simulation.atmosphere
import calin.simulation.world_magnetic_model
import calin.simulation.pmt
from IPython.display import clear_output
from ipywidgets.widgets import *
"""
Explanation: Toy event simulation for array of 15 MST/NectarCAM
calin/examples/simulation/toy event simulation for mst nectarcam.ipynb - Stephen Fegan - 2017-01-26
Copyright 2017, Stephen Fegan sfegan@llr.in2p3.fr
Laboratoire Leprince-Ringuet, CNRS/IN2P3, Ecole Polytechnique, Institut Polytechnique de Paris
This file is part of "calin". "calin" is free software: you can redistribute it and/or modify it under the
terms of the GNU General Public License version 2 or later, as published by
the Free Software Foundation. "calin" is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
Introduction
This notebook shows how to use the calin interface to Geant 4 to generate sample events in NectarCAM. This is not a "true" simulation, in that it doesn't contain any simulation of the electronics, trigger, digitization etc. Particles in the air shower are generated and tracked by Geant 4 and the mean Cherenkov emission at the telescope is calculated by quadrature integration in predefined steps along the charged-particle tracks and in predefined angular steps in the Cherenkov cone at each step. The test rays are traced through the MST optics onto the focal plane. Poisson and PMT fluctuations can be added subsequently if desired.
The simulation contains the following elements:
An array of 15 MST-like telescopes at the altitude of the IAC observatory, 2.147km, pointed at the zenith. Telescopes are in CTA configuration prod3Nb.3AL4-HN15.
Primary particles (gamma-rays, protons, electrons, or muons) injected at the top of the atmosphere at 100km
Geant 4 simulation of air shower using a layered atmosphere, whose properties are read from a file (CTA Prod 3 atmospheric model number 36), and magnetic field at the IAC site, given by the World Magnetic Model
Quadrature integration of the mean signal in the telescope through generation of test rays on the Cherenkov cone stepped evenly along the tracks of all charged particles in the air shower, controlled by parameters dx=10cm and dphi=2deg:
Test rays are generated at points along tracks separated by at most:
50dx for emission heights > 40m above the telescope [note tan(thetac) ~= 1/50] or
dx for heights < 40m
At each point test rays are generated along the Cherenkov cone with angular steps of at most:
dphi
dx / 2 pi D tan(theta_c), where D is distance from emission point to telescope
Test rays are raytraced through MST assuming shadowing by 3m x 3m square focus box
Weighting of test rays by
The Cherenkov emission formula (see equation 33.45, PDG 2016)
Wavelength and height dependent absorption by atmosphere using MODTRAN optical depths (atm_trans_2147_1_10_0_0_2147.dat from Prod 3)
Wavelength dependent effective detector efficiency combining:
Hamamatsu QE curve (qe_R12992-100-05.dat from Prod 3)
Mirror reflectivity (ref_AlSiO2HfO2.dat from Prod 3)
Transmission through the PMMA window (Aclylite8_tra_v2013ref.dat from Prod 3)
Arbitrary factor of 0.9 to account for shadowing (to be revised when shadowing is known)
Light-cone efficiency vs photon entry angle (CTA-LST_lightguide_eff.dat from Prod 3)
All PEs are summed (regardless of arrival time) to form image
Poisson and PMT noise, and night-sky background can be optionally added to make more realistic images
The following items (at least) are missing from the simulation:
Frequency dependence of refractive index, and effects on Cherenkov yield and geometry
Mie scattering and refraction of rays in atmosphere due to variable refractive index
Angular dependence of mirror and PMMA response, and wavelength dependence of cones
True "night sky" response, including the effct of NSB PEs that occur at the start/end of the integration window and after pulsing.
Conditioning of the signal by the PMT and ampifier
Trigger
Digitization, including: conversion to DC, high/low gain & saturation of the samples at 4096 DC. This latter effect leads to non-linearity for large signals, and eventually to a complete saturation. Here, a checkbox can be selected to apply an ad-hoc saturation function in PEs: a linear response for small signals (<2000 PEs) rolling over to complete saturation at 4000 PEs.
1. Import all required packages from calin and ipwidgets
End of explanation
"""
# Column 1 : Telescope position N [cm]
# Column 2 : Telescope position W [cm]
# Column 3 : Telescope position UP offset from 2147m [cm]
scope_pos = array([[-21065., 5051., 6130., 700.],
[-17906., 22302., 4210., 700.],
[ 2796., 24356., 2320., 700.],
[ 12421., -13456., 3960., 700.],
[ 17627., 12790., 1930., 700.],
[ -7473., -14416., 5320., 700.],
[-21479., -12252., 7410., 700.],
[ -9811., 37645., 2070., 700.],
[ 2108., -30761., 7330., 700.],
[-19640., -29024., 8810., 700.],
[-30285., 38460., 3330., 700.],
[-34500., 17500., 6900., 700.],
[-37499., -3339., 9260., 700.],
[ 272., 4342., 4000., 700.],
[ 27765., -3780., 1800., 700.]])
scope_pos_x = -scope_pos[:,1]
scope_pos_y = scope_pos[:,0]
scope_pos_z = scope_pos[:,2]
scope_pos_x -= 0.5*(max(scope_pos_x) + min(scope_pos_x))
scope_pos_y -= 0.5*(max(scope_pos_y) + min(scope_pos_y))
#polar(arctan2(scope_pos_y,scope_pos_x), sqrt(scope_pos_x**2+scope_pos_y**2)/100,'o')
plot(scope_pos_x/100, scope_pos_y/100, 'o')
xlabel('X coordinate [m]')
ylabel('Y coordinate [m]')
axis('square')
xmax = 400
axis([-xmax,xmax,-xmax,xmax])
xticks(frange(-xmax,xmax,100))
yticks(frange(-xmax,xmax,100))
grid()
def dms(d,m,s):
# Note this function fails for "negative" d=0 (e.g. -00:30:00)
sign = 1
if(d<0):
sign = -1
d = abs(d)
return sign * (d + m/60.0 + s/3600.0)
mst = calin.ix.simulation.vs_optics.IsotropicDCArrayParameters()
mst.mutable_array_origin().set_latitude(dms(28, 45, 47.36))
mst.mutable_array_origin().set_longitude(dms(-17, 53, 23.93))
mst.mutable_array_origin().set_elevation(2147 * 100.0)
for i in range(len(scope_pos_x)):
scope = mst.mutable_prescribed_array_layout().add_scope_positions();
scope.set_x(scope_pos_x[i])
scope.set_y(scope_pos_y[i])
scope.set_z(scope_pos_z[i] + mst.array_origin().elevation())
mst.mutable_reflector_frame().set_optic_axis_rotation(-90);
dc = mst.mutable_reflector()
dc.set_curvature_radius(1920)
dc.set_aperture(1230)
dc.set_facet_num_hex_rings(5)
dc.mutable_psf_align().set_object_plane(inf) # 10 * 1e5);
dc.set_alignment_image_plane(1600)
dc.set_facet_spacing(122)
dc.set_facet_size(120)
dc.set_facet_focal_length(1607)
dc.set_facet_focal_length_dispersion(1)
dc.set_facet_spot_size_probability(0.8)
dc.set_facet_spot_size(0.5 * 2.8) # Spot size of 28mm at 2F
dc.set_facet_spot_size_dispersion(0.5 * 0.02)
dc.set_facet_labeling_parity(True)
dc.set_weathering_factor(1.0)
for id in [1,62,67,72,77,82,87]: dc.add_facet_missing_list(id-1)
mst.mutable_focal_plane().set_camera_diameter(235)
mst.mutable_focal_plane().mutable_translation().set_y(1/(1.0/dc.alignment_image_plane()-1/(10 * 1e5)))
mst.mutable_pixel().set_spacing(5)
mst.mutable_pixel().set_cone_inner_diameter(5)
mst.mutable_pixel().set_cone_survival_prob(1)
mst.mutable_pixel().set_hex_module_size(1)
mst.mutable_pixel().set_module_num_hex_rings(9)
u1,v1 = calin.math.hex_array.cluster_hexid_to_center_uv(1,1)
x1,y1 = calin.math.hex_array.uv_to_xy(u1,v1)
rot = arctan2(-y1,x1)/pi*180 - 30
mst.mutable_pixel().set_grid_rotation(rot)
obs_camera_box = mst.add_obscurations()
obs_camera_box.aligned_box().max_corner().set_x(150)
obs_camera_box.aligned_box().max_corner().set_y(mst.focal_plane().translation().y()+150)
obs_camera_box.aligned_box().max_corner().set_z(150)
obs_camera_box.aligned_box().min_corner().set_x(-150)
obs_camera_box.aligned_box().min_corner().set_y(mst.focal_plane().translation().y())
obs_camera_box.aligned_box().min_corner().set_z(-150)
obs_camera_box.aligned_box().set_incoming_only(True)
rng = calin.math.rng.RNG()
cta = calin.simulation.vs_optics.VSOArray()
cta.generateFromArrayParameters(mst, rng)
cta.pointTelescopesAzEl(0,90.0/180.0*pi);
"""
Explanation: 2. Define telescope properties for ray tracer and construct array
Elevation : 2147m (all values are in centimeters)
Fifteen telescopes from CTA layout prod3Nb.3AL4-HN15, as outlined below
Reflector radius : 1920cm
Facets : 120cm side-side, on hexagonal grid with spacing of 122cm between centers
Facet focal length : 1607cm
Aperture : 1230cm - 5 hexagonal rings of mirror facets with 7 facets missing
Alignment : image at infinity focused on plane at 1600m
Mirror "roughness" generating Gaussian with D80=28mm at radius of curvature
Camera plane : offset for focusing of source at 10km (approx 1602.5cm)
Camera : 9 hexagonal rings of modules, each of 7 PMTs
Obsucration by camera box 300cm x 300cm x 150cm
End of explanation
"""
data_dir = calin.provenance.system_info.build_info().data_install_dir() + "/simulation/"
det_eff = calin.simulation.detector_efficiency.DetectionEfficiency()
det_eff.scaleEffFromFile(data_dir + 'qe_R12992-100-05.dat')
det_eff.scaleEffFromFile(data_dir + 'ref_AlSiO2HfO2.dat')
det_eff.scaleEffFromFile(data_dir + 'Aclylite8_tra_v2013ref.dat')
det_eff.scaleEffByConst(0.9)
cone_eff = calin.simulation.detector_efficiency.AngularEfficiency(data_dir + 'CTA-LST_lightguide_eff.dat')
atm = calin.simulation.atmosphere.LayeredAtmosphere(data_dir + 'atmprof36.dat')
atm_abs = calin.simulation.detector_efficiency.AtmosphericAbsorption(data_dir + 'atm_trans_2147_1_10_0_0_2147.dat')
"""
Explanation: 3. Construct detection efficiency curve, cone efficiency and atmosphere
End of explanation
"""
wmm = calin.simulation.world_magnetic_model.WMM()
bfield = wmm.field_vs_elevation(mst.array_origin().latitude(), mst.array_origin().longitude())
#bfield = None
"""
Explanation: 4. Use world magnetic model to calculate field vs height at IAC
If you wish to have no magnetic field then uncomment the line "bfield = None"
End of explanation
"""
pe_imager = calin.simulation.ray_processor.SimpleImagePEProcessor(cta.numTelescopes(),cta.telescope(0).numPixels())
dx = 10.0
qcfg = calin.ix.simulation.tracker.QuadratureIACTArrayIntegrationConfig();
qcfg.set_ray_spacing_linear(dx)
qcfg.set_ray_spacing_angular(2)
quad = calin.simulation.tracker.VSO_QuadratureIACTArrayIntegration(qcfg, cta, pe_imager)
#quad.add_trace_visitor(diag)
quad.set_detection_efficiencies(det_eff, atm_abs, cta.telescope(0).opticalAxis()[2], cone_eff)
iact = calin.simulation.tracker.IACTDetectorSphereCherenkovConeIntersectionFinder(quad)
act = calin.simulation.tracker.AirCherenkovParameterCalculatorTrackVisitor(iact, atm)
limiter_lo = calin.simulation.tracker.LengthLimitingTrackVisitor(act, dx,
mst.array_origin().elevation() + 40*100)
limiter_hi = calin.simulation.tracker.LengthLimitingTrackVisitor(limiter_lo, 50.0*dx)
"""
Explanation: 5. Construct the hierarchy of actions to take for each track
Geant 4 generates tracks and calls a "visitor" class to process each one. Here we use a hierarchy of nested "visitors", each calling down to the next one to perform the quadrature integration. We construct the hierarchy in reverse order. Going forward from Geant 4 the hierarchy is as follows:
Main Geant 4 class to generate the tracks
LengthLimitingTrackVisitor "hi" : active over whole atmosphere, this divides long tracks into segments of maximum length "50dx"
LengthLimitingTrackVisitor "lo" : active only within 40m of the telescope, divides long tracks into segments of maximum length "dx"
AirCherenkovParameterCalculatorTrackVisitor : calculate Cherenkov parameters for charged particle tracks (discard all others)
IACTDetectorSphereCherenkovConeIntersectionFinder : calculates intersection of Cherenkov cone and series of spheres that encompass telescopes, discarding tracks that have no intersection
VSO_QuadratureIACTArrayIntegration : perform angular quadrature integration over Cherenkov cone, ray tracing test rays through telescope optics, discarding rays that do not hit the camera
SimpleImagePEProcessor : make simple image of all rays that hit camera, summing weight of each ray that impacts each channel.
The integration is contolled by one parameter "dx" which defaults to 10cm, which means there will be on average more than ~100 test rays per square meter (i.e. per mirror facet) from an infinitely long track.
End of explanation
"""
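The chain above is plain composition: each visitor does one job and forwards to the next. An illustrative, library-independent sketch (1-D tracks and hypothetical class names, not calin's API) of how two nested length limiters split a track, mirroring the limiter_hi / limiter_lo pair:

```python
class CollectVisitor:
    """Terminal visitor: just records every track segment it receives."""
    def __init__(self):
        self.tracks = []

    def visit(self, track):
        self.tracks.append(track)


class LengthLimitingVisitor:
    """Splits tracks longer than dx and forwards each segment downstream."""
    def __init__(self, downstream, dx):
        self.downstream = downstream
        self.dx = dx

    def visit(self, track):
        start, end = track          # 1-D track for simplicity
        pos = start
        while pos < end:
            seg_end = min(pos + self.dx, end)
            self.downstream.visit((pos, seg_end))
            pos = seg_end


sink = CollectVisitor()
# Coarse limiter (dx = 50) feeding a fine limiter (dx = 10).
chain = LengthLimitingVisitor(LengthLimitingVisitor(sink, 10.0), 50.0)
chain.visit((0.0, 95.0))
print(len(sink.tracks))  # 10 segments, each no longer than 10.0
```

The real hierarchy works the same way, except that each visitor also filters (discarding neutral particles, non-intersecting cones, or off-camera rays) before forwarding.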
generator = calin.simulation.geant4_shower_generator.\
Geant4ShowerGenerator(limiter_hi, atm, 1000, mst.array_origin().elevation(), atm.top_of_atmosphere(), bfield,
calin.simulation.geant4_shower_generator.VerbosityLevel_SUPRESSED_STDOUT);
generator.set_minimum_energy_cut(20);
ballistic_generator = calin.simulation.tracker.\
StraightTrackGenerator(limiter_hi, mst.array_origin().elevation())
"""
Explanation: 6. Track generators
Two track generators are constructed, the primary one using Geant 4 to simulate the interaction of the primary and secondary particles in the atmosphere. A second generator, "StraightTrackGenerator", generates purely straight-line tracks for the primary (i.e. it has no physics at all) and is used here ONLY to generate "ballistic" muon tracks that do not bend in the magnetic field or generate secondary particles.
End of explanation
"""
pmt_cfg = calin.simulation.pmt.PMTSimPolya.cta_model_3()
pmt_cfg.set_signal_in_pe(True)
pmt = calin.simulation.pmt.PMTSimPolya(pmt_cfg,rng)
pmt_gain = mean(pmt.rvs(1000000))
noise_cfg = calin.ix.simulation.pmt.PoissonSignalSimConfig();
noise_gen = calin.simulation.pmt.PoissonSignalSim(pmt, noise_cfg, rng)
"""
Explanation: 7. PMT and Poisson noise generators
End of explanation
"""
def gen_image(theta=0, phi=0, bx=0, by=0, e=1000, pt='proton', threshold=20, nsb=4, noise=True):
theta *= pi/180
phi *= pi/180
bx *= 100
by *= 100
e *= 1000
u = asarray([sin(theta)*cos(phi), sin(theta)*sin(phi), -cos(theta)])
x0 = asarray([bx,by,mst.array_origin().elevation()])+u/u[2]*100*1e5
if(pt=='proton'):
pt = calin.simulation.tracker.ParticleType_PROTON
generator.generate_showers(1, pt, e, x0, u)
elif(pt=='gamma'):
pt = calin.simulation.tracker.ParticleType_GAMMA
generator.generate_showers(1, pt, e, x0, u)
elif(pt=='electron'):
pt = calin.simulation.tracker.ParticleType_ELECTRON
generator.generate_showers(1, pt, e, x0, u)
elif(pt=='muon'):
pt = calin.simulation.tracker.ParticleType_MUON
generator.generate_showers(1, pt, e, x0, u)
elif(pt=='muon (ballistic)'):
pt = calin.simulation.tracker.ParticleType_MUON
ballistic_generator.generate_showers(1, pt, e, x0, u)
elif(pt=='nsb'):
return asarray(pe_imager.scope_image(0))*0
pix_data = asarray(pe_imager.scope_image(0))
for i in range(1,cta.numTelescopes()):
pix_data = maximum(pix_data, asarray(pe_imager.scope_image(i)))
return pix_data
"""
Explanation: 8. Function to run simulation and return image
End of explanation
"""
def gentle_clip(x, C=4000, D=1200):
# Hyperbola with offset and non-linearity cancelled
C = sqrt(C**2 - D**2) # C is interpreted as max value
return (x-sqrt((x-C)**2 + D**2)+sqrt(C**2 + D**2)) / (1+C/sqrt(C**2 + D**2))
"""
Explanation: 9. Clipping function based on hyperbola
A hyperbola with asymptotes y=x and y=C, shifted so the offset at x=0 is removed and rescaled to cancel the residual nonlinearity (restoring unit slope) at x=0
End of explanation
"""
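The formula can be checked numerically: it passes through the origin with unit slope and saturates at C. A standalone sketch of the same expression using only `math` (values assume the defaults C=4000, D=1200):

```python
import math

def gentle_clip(x, C=4000.0, D=1200.0):
    """Hyperbolic soft clip: ~identity near 0, saturating at C for large x."""
    Cp = math.sqrt(C**2 - D**2)  # shift so the horizontal asymptote sits at y = C
    num = x - math.sqrt((x - Cp)**2 + D**2) + math.sqrt(Cp**2 + D**2)
    return num / (1.0 + Cp / math.sqrt(Cp**2 + D**2))

print(round(gentle_clip(0.0), 6))   # 0.0     (offset cancelled)
print(round(gentle_clip(10.0), 2))  # ~10.0   (unit slope near zero)
print(round(gentle_clip(1e9), 2))   # ~4000.0 (saturates at C)
```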
def plot_image(pix_data):
s = cta.telescope(0)
# figure(figsize=(10,8))
figure(figsize=(7,6))
pix = []
idx = []
for pix_id in range(len(pix_data)):
# if(pix_data[pix_id] == 0.0):
# continue
pix_hexid = s.pixel(pix_id).hexID()
vx,vy = calin.math.hex_array.hexid_to_vertexes_xy_trans(pix_hexid,
s.cosPixelRotation(), s.sinPixelRotation(), s.pixelSpacing()/s.focalPlanePosition()[1]/pi*180.0)
vv = zeros((len(vx),2))
vv[:,0] = vx
vv[:,1] = vy
pix.append(Polygon(vv,closed=True))
idx.append(pix_id)
pc = matplotlib.collections.PatchCollection(pix, cmap=matplotlib.cm.jet)
pc.set_array(asarray(pix_data)[idx])
pc.set_linewidths(0)
clo = 0
if(min(pix_data)<0):
clo = -5
pc.set_clim(clo,max(80,ceil(max(pix_data)/10)*10))
gca().add_collection(pc)
axis('square')
axis(4.5*asarray([-1,1,-1,1]))
xlabel('X coordinate [deg]')
ylabel('Y coordinate [deg]')
colorbar(pc)
grid(color='w')
"""
Explanation: 10. Plot image using matplotlib (using a PatchCollection)
End of explanation
"""
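The key idea in `plot_image` is that a `PatchCollection` maps one scalar per polygon to a colour via a colormap. A minimal headless sketch with two toy patches (shapes and values are made up for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
from matplotlib.collections import PatchCollection

# Two toy "pixels" coloured by their data values, as plot_image does
# for the hexagonal camera pixels.
verts = [np.array([[0, 0], [1, 0], [0, 1]], float),
         np.array([[1, 1], [2, 1], [1, 2]], float)]
patches = [Polygon(v, closed=True) for v in verts]
pc = PatchCollection(patches, cmap="jet")
pc.set_array(np.array([1.0, 5.0]))  # one scalar per patch drives its colour
pc.set_clim(0, 5)

fig, ax = plt.subplots()
ax.add_collection(pc)
ax.autoscale_view()
print(len(pc.get_paths()))  # 2
```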
def run_sim(theta=0, phi=0, bx=0, by=0, loge=3, pt='proton', threshold=20, nsb=4.0, noise=True, clip=True):
e = 10**loge
sim_image = gen_image(theta=theta,phi=phi,bx=bx,by=by,e=e,pt=pt,threshold=threshold,nsb=nsb,noise=noise)
im = sim_image
if(noise):
im += nsb
im = noise_gen.rvs(im)
im -= noise_gen.pedestal_mean()
im /= pmt_gain
im -= nsb
if(clip):
im = gentle_clip(im)
plot_image(im)
text(0.025,0.975,'Energy: %s$\\,$GeV\nType: %s'%("{:,.1f}".format(e),pt),
transform=gca().transAxes,va='top',ha='left')
text(0.025,0.025,'$\\hat{u}$ : %g$^\\circ$, %g$^\\circ$\n$\\vec{b}$ : %g$\\,$m, %g$\\,$m'%(theta,phi,bx,by),
transform=gca().transAxes,va='bottom',ha='left')
text(0.975,0.975,'Size: %s$\\,$PE\nN(>%g$\\,$PE): %g'%("{:,.1f}".format(sum(im)), threshold, sum(im>threshold)),
transform=gca().transAxes,va='top',ha='right')
try:
gcf().savefig('/CTA/event.pdf')
except:
pass
"""
Explanation: 11. Put it all together : run the simulation, add noise and plot
If the directory "/CTA" exists then the image is saved as "/CTA/event.pdf"
End of explanation
"""
wbx = FloatSlider(min=-500.0, max=500.0, step=1, value=0, description="X impact point [m]")
wby = FloatSlider(min=-500.0, max=500.0, step=1, value=0, description="Y impact point [m]")
wenergy = FloatSlider(min=1.0, max=5.0, step=0.0625, value=3, description="Log10 E/GeV", readout_format='.3f',)
wtheta = FloatSlider(min=0.0,max=8.0,step=0.1,value=0.0, description="Theta [deg]")
wphi = FloatSlider(min=0.0, max=360.0, step=5.0, value=0.0, description="Phi [deg]")
wtype = Dropdown(options=['gamma', 'proton', 'electron', 'muon', 'muon (ballistic)', 'nsb'],
                 value='gamma', description='Particle type')
wnsb = FloatSlider(min=0.0, max=20.0, step=0.2, value=4.0, description="Mean NSB [PE]")
wnoise = Checkbox(value=True, description='Ph & PMT noise')
wclip = Checkbox(value=True, description='Clipping')
button = Button(description="Run simulation")
wbox = VBox([HBox([wbx,wby,wenergy]),HBox([wtheta,wphi,wtype]),
HBox([wnsb,wnoise,wclip,button])])
display(wbox)
def on_button_clicked(b):
clear_output()
run_sim(theta=wtheta.value, phi=wphi.value, bx=wbx.value, by=wby.value, loge=wenergy.value, pt=wtype.value,
threshold=30, nsb=wnsb.value, noise=wnoise.value, clip=wclip.value)
button.on_click(on_button_clicked)
"""
Explanation: 12. Set up widgets and connect them run simulation when button clicked
User can change:
Impact point on the ground in two dimensions (X and Y, in meters)
Primary propagation direction in two angular dimensions (theta, the separation from the optical axis, and phi, the azimuth)
Energy (log10) in GeV
Event type
"gamma", "proton", "electron", or "muon" : primary particle interacting in atmosphere, simulated with Geant 4 starting from 100km
"muon (ballistic)" : a straight track with no physics interactions
"nsb" : simple NSB event
NSB level in average number of background PEs per channel per event
Whether to add noise from photon Poisson fluctuations and PMT
Whether to use the ad-hoc clipping/saturation function
End of explanation
"""
|
ToqueWillot/M2DAC | FDMS/TME3/Model_V5.ipynb | gpl-2.0 | # from __future__ import exam_success
from __future__ import absolute_import
from __future__ import print_function
# Standard imports
%matplotlib inline
import os
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import random
import pandas as pd
import scipy.stats as stats
# Sk cheats
from sklearn.cross_validation import cross_val_score
from sklearn import grid_search
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
#from sklearn.preprocessing import Imputer # get rid of nan
from sklearn.decomposition import NMF # to add features based on the latent representation
from sklearn.decomposition import ProjectedGradientNMF
# Faster gradient boosting
import xgboost as xgb
# For neural networks models
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, RMSprop
"""
Explanation: FDMS TME3
Kaggle How Much Did It Rain? II
Florian Toque & Paul Willot
Notes
We tried different regressor models, like GBR, SVM, MLP, Random Forest and KNN, as recommended by the winning team of the Kaggle competition on taxi trajectories. So far GBR seems to be the best, slightly better than the RF.
The new features we extracted only made a small impact on predictions but still improved them consistently.
We tried to use an LSTM to take advantage of the sequential structure of the data, but it didn't work too well, probably because there is not enough data (13M lines divided by the average sequence length (15), minus the ~30% of fully empty data)
End of explanation
"""
%%time
#filename = "data/train.csv"
filename = "data/reduced_train_100000.csv"
#filename = "data/reduced_train_1000000.csv"
raw = pd.read_csv(filename)
raw = raw.set_index('Id')
raw.columns
raw['Expected'].describe()
"""
Explanation: 13,765,202 lines in train.csv
8,022,757 lines in test.csv
A few words about the dataset
Predictions are made in the USA corn growing states (mainly Iowa, Illinois, Indiana) during the season with the highest rainfall (as illustrated by Iowa for the April to August months)
The Kaggle page indicates that the dataset has been shuffled, so working on a subset seems acceptable
The test set is not extracted from the same data as the training set, however, which makes the evaluation trickier
Load the dataset
End of explanation
"""
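Since the file is stated to be shuffled, `pd.read_csv(..., nrows=N)` is a cheap way to take an unbiased subset without pre-cutting the file into reduced_train_*.csv copies. A toy sketch on an in-memory CSV (column subset is illustrative):

```python
import io
import pandas as pd

# Toy stand-in for train.csv. Because the real file is pre-shuffled,
# reading only the first nrows lines already gives an unbiased subset.
csv = io.StringIO("Id,minutes_past,Expected\n1,3,0.254\n1,16,0.254\n2,1,1.016\n")
raw = pd.read_csv(csv, nrows=2).set_index("Id")
print(len(raw))  # 2
```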
# Considering that the gauge may concentrate the rainfall, we set the cap to 1000
# Comment this line to analyse the complete dataset
l = len(raw)
raw = raw[raw['Expected'] < 300] #1000
print("Dropped %d (%0.2f%%)"%(l-len(raw),(l-len(raw))/float(l)*100))
raw.head(5)
raw.describe()
"""
Explanation: Per wikipedia, a value of more than 421 mm/h is considered "Extreme/large hail"
If we encounter the value 327.40 meter per hour, we should probably start building Noah's ark
Therefor, it seems reasonable to drop values too large, considered as outliers
End of explanation
"""
# We select all features except for the minutes past,
# because we ignore the time repartition of the sequence for now
features_columns = list([u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
def getXy(raw):
selected_columns = list([ u'minutes_past',u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
data = raw[selected_columns]
docX, docY = [], []
for i in data.index.unique():
if isinstance(data.loc[i],pd.core.series.Series):
m = [data.loc[i].as_matrix()]
docX.append(m)
docY.append(float(raw.loc[i]["Expected"]))
else:
m = data.loc[i].as_matrix()
docX.append(m)
docY.append(float(raw.loc[i][:1]["Expected"]))
X , y = np.array(docX) , np.array(docY)
return X,y
"""
Explanation: We regroup the data by ID
End of explanation
"""
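The `.loc`-per-Id loop in `getXy` can be expressed as a single `groupby` pass, which is usually much faster on large frames. A sketch on a toy frame (columns reduced for illustration):

```python
import pandas as pd

# Toy frame with the same shape of problem: several rows per Id,
# one 'Expected' label per Id.
raw = pd.DataFrame({"Id": [1, 1, 2],
                    "Ref": [10.0, 20.0, 5.0],
                    "Expected": [0.3, 0.3, 1.2]}).set_index("Id")

# One pass with groupby instead of one .loc lookup per unique Id.
X = [g[["Ref"]].to_numpy() for _, g in raw.groupby(level=0)]
y = raw.groupby(level=0)["Expected"].first().to_numpy()
print(len(X), y.tolist())  # 2 [0.3, 1.2]
```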
#noAnyNan = raw.loc[raw[features_columns].dropna(how='any').index.unique()]
noAnyNan = raw.dropna()
noFullNan = raw.loc[raw[features_columns].dropna(how='all').index.unique()]
fullNan = raw.drop(raw[features_columns].dropna(how='all').index)
print(len(raw))
print(len(noAnyNan))
print(len(noFullNan))
print(len(fullNan))
"""
Explanation: We prepare 3 subsets
Fully filled data only
Used at first to evaluate the models without worrying about feature completion
End of explanation
"""
%%time
#X,y=getXy(noAnyNan)
X,y=getXy(noFullNan)
len(X[0][0])
#%%time
#XX = [np.array(t).mean(0) for t in X]
#XX = np.array([np.append(np.nanmean(np.array(t),0),(np.array(t)[1:] - np.array(t)[:-1]).sum(0) ) for t in X])
global_means = np.nanmean(noFullNan,0)
XX=[]
for t in X:
nm = np.nanmean(t,0)
for idx,j in enumerate(nm):
if np.isnan(j):
nm[idx]=global_means[idx]
XX.append(nm)
XX=np.array(XX)
# rescale to clip min at 0 (for non negative matrix factorization)
XX_rescaled=XX[:,:]-np.min(XX,0)
%%time
nmf = ProjectedGradientNMF()
W = nmf.fit_transform(XX_rescaled)
#H = nn.components_
# used to fill fully empty datas
global_means = np.nanmean(noFullNan,0)
# reduce the sequence structure of the data and produce
# new hopefully informatives features
def addFeatures(X,mf=0):
# used to fill fully empty datas
#global_means = np.nanmean(X,0)
XX=[]
nbFeatures=float(len(X[0][0]))
for idxt,t in enumerate(X):
# compute means, ignoring nan when possible, marking it when fully filled with nan
nm = np.nanmean(t,0)
tt=[]
for idx,j in enumerate(nm):
if np.isnan(j):
nm[idx]=global_means[idx]
tt.append(1)
else:
tt.append(0)
tmp = np.append(nm,np.append(tt,tt.count(0)/nbFeatures))
# faster if working on fully filled data:
#tmp = np.append(np.nanmean(np.array(t),0),(np.array(t)[1:] - np.array(t)[:-1]).sum(0) )
# add the percentiles
tmp = np.append(tmp,np.nanpercentile(t,10,axis=0))
tmp = np.append(tmp,np.nanpercentile(t,50,axis=0))
tmp = np.append(tmp,np.nanpercentile(t,90,axis=0))
for idx,i in enumerate(tmp):
if np.isnan(i):
tmp[idx]=0
# adding the dbz as a feature
test = t
try:
taa=test[:,0]
except TypeError:
taa=[test[0][0]]
valid_time = np.zeros_like(taa)
valid_time[0] = taa[0]
for n in xrange(1,len(taa)):
valid_time[n] = taa[n] - taa[n-1]
valid_time[-1] = valid_time[-1] + 60 - np.sum(valid_time)
valid_time = valid_time / 60.0
sum=0
try:
column_ref=test[:,2]
except TypeError:
column_ref=[test[0][2]]
for dbz, hours in zip(column_ref, valid_time):
# See: https://en.wikipedia.org/wiki/DBZ_(meteorology)
if np.isfinite(dbz):
mmperhr = pow(pow(10, dbz/10)/200, 0.625)
sum = sum + mmperhr * hours
if not(mf is 0):
tmp = np.append(tmp,mf[idxt])
XX.append(np.append(np.array(sum),tmp))
#XX.append(np.array([sum]))
#XX.append(tmp)
return np.array(XX)
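The dBZ conversion inside `addFeatures` is the Marshall-Palmer Z-R relation Z = 200 * R**1.6, inverted for the rain rate; a standalone sketch of the same formula:

```python
import math

def marshall_palmer_rate(dbz):
    """Rain rate R in mm/hr from reflectivity in dBZ, inverting Z = 200 * R**1.6."""
    z = 10.0 ** (dbz / 10.0)        # dBZ -> linear reflectivity factor Z
    return (z / 200.0) ** 0.625     # 0.625 = 1 / 1.6

# Z = 200 corresponds to exactly R = 1 mm/hr.
print(round(marshall_palmer_rate(10.0 * math.log10(200.0)), 6))  # 1.0
```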
%%time
XX=addFeatures(X,mf=W)
#XX=addFeatures(X)
XX[:10,0]
def splitTrainTest(X, y, split=0.2):
tmp1, tmp2 = [], []
ps = int(len(X) * (1-split))
index_shuf = range(len(X))
random.shuffle(index_shuf)
for i in index_shuf:
tmp1.append(X[i])
tmp2.append(y[i])
return tmp1[:ps], tmp2[:ps], tmp1[ps:], tmp2[ps:]
X_train,y_train, X_test, y_test = splitTrainTest(XX,y)
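`splitTrainTest` shuffles with a Python loop; a vectorized sketch with NumPy fancy indexing (hypothetical helper name, same semantics):

```python
import numpy as np

def split_train_test(X, y, split=0.2, seed=0):
    """Shuffle indices once, then slice: same effect as splitTrainTest above."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    ps = int(len(X) * (1 - split))
    return X[idx[:ps]], y[idx[:ps]], X[idx[ps:]], y[idx[ps:]]

X = np.arange(20).reshape(10, 2)
y = np.arange(10)
X_tr, y_tr, X_te, y_te = split_train_test(X, y)
print(len(X_tr), len(X_te))  # 8 2
```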
"""
Explanation: Predictions
As a first try, we make predictions on the complete data, and return the 50th percentile for incomplete and fully empty data
End of explanation
"""
def manualScorer(estimator, X, y):
err = (estimator.predict(X_test)-y_test)**2
return -err.sum()/len(err)
"""
Explanation:
End of explanation
"""
svr = SVR(kernel='rbf', C=800.0)
%%time
srv = svr.fit(X_train,y_train)
print(svr.score(X_train,y_train))
print(svr.score(X_test,y_test))
err = (svr.predict(X_train)-y_train)**2
err.sum()/len(err)
err = (svr.predict(X_test)-y_test)**2
err.sum()/len(err)
%%time
svr_score = cross_val_score(svr, XX, y, cv=5)
print("Score: %s\nMean: %.03f"%(svr_score,svr_score.mean()))
"""
Explanation: best hyperparameters found by the grid search:
max depth 24
number of trees 84
min samples per leaf 17
min samples to split 51
End of explanation
"""
knn = KNeighborsRegressor(n_neighbors=6,weights='distance',algorithm='ball_tree')
#parameters = {'weights':('distance','uniform'),'algorithm':('auto', 'ball_tree', 'kd_tree', 'brute')}
parameters = {'n_neighbors':range(1,10,1)}
grid_knn = grid_search.GridSearchCV(knn, parameters,scoring=manualScorer)
%%time
grid_knn.fit(X_train,y_train)
print(grid_knn.grid_scores_)
print("Best: ",grid_knn.best_params_)
knn = grid_knn.best_estimator_
knn= knn.fit(X_train,y_train)
print(knn.score(X_train,y_train))
print(knn.score(X_test,y_test))
err = (knn.predict(X_train)-y_train)**2
err.sum()/len(err)
err = (knn.predict(X_test)-y_test)**2
err.sum()/len(err)
"""
Explanation:
End of explanation
"""
etreg = ExtraTreesRegressor(n_estimators=200, max_depth=None, min_samples_split=1, random_state=0)
parameters = {'n_estimators':range(100,200,20)}
grid_rf = grid_search.GridSearchCV(etreg, parameters,n_jobs=2,scoring=manualScorer)
%%time
grid_rf.fit(X_train,y_train)
print(grid_rf.grid_scores_)
print("Best: ",grid_rf.best_params_)
grid_rf.best_params_
#etreg = grid_rf.best_estimator_
%%time
etreg = etreg.fit(X_train,y_train)
print(etreg.score(X_train,y_train))
print(etreg.score(X_test,y_test))
err = (etreg.predict(X_train)-y_train)**2
err.sum()/len(err)
err = (etreg.predict(X_test)-y_test)**2
err.sum()/len(err)
"""
Explanation:
End of explanation
"""
rfr = RandomForestRegressor(n_estimators=200, criterion='mse', max_depth=None, min_samples_split=2,
min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto',
max_leaf_nodes=None, bootstrap=True, oob_score=False, n_jobs=-1,
random_state=None, verbose=0, warm_start=False)
%%time
rfr = rfr.fit(X_train,y_train)
print(rfr.score(X_train,y_train))
print(rfr.score(X_test,y_test))
"""
Explanation:
End of explanation
"""
# the dbz feature does not influence xgbr so much
xgbr = xgb.XGBRegressor(max_depth=6, learning_rate=0.1, n_estimators=700, silent=True,
objective='reg:linear', nthread=-1, gamma=0, min_child_weight=1,
max_delta_step=0, subsample=1, colsample_bytree=1, colsample_bylevel=1,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, base_score=0.5,
seed=0, missing=None)
%%time
xgbr = xgbr.fit(X_train,y_train)
# without the nmf features
# print(xgbr.score(X_train,y_train))
## 0.993948231144
# print(xgbr.score(X_test,y_test))
## 0.613931733332
# with nmf features
print(xgbr.score(X_train,y_train))
print(xgbr.score(X_test,y_test))
"""
Explanation:
End of explanation
"""
gbr = GradientBoostingRegressor(loss='ls', learning_rate=0.1, n_estimators=900,
subsample=1.0, min_samples_split=2, min_samples_leaf=1, max_depth=4, init=None,
random_state=None, max_features=None, alpha=0.5,
verbose=0, max_leaf_nodes=None, warm_start=False)
%%time
gbr = gbr.fit(X_train,y_train)
#os.system('say "終わりだ"') #its over!
#parameters = {'max_depth':range(2,5,1),'alpha':[0.5,0.6,0.7,0.8,0.9]}
#parameters = {'subsample':[0.2,0.4,0.5,0.6,0.8,1]}
#parameters = {'subsample':[0.2,0.5,0.6,0.8,1],'n_estimators':[800,1000,1200]}
#parameters = {'max_depth':range(2,4,1)}
parameters = {'n_estimators':[400,800,1100]}
#parameters = {'loss':['ls', 'lad', 'huber', 'quantile'],'alpha':[0.3,0.5,0.8,0.9]}
#parameters = {'learning_rate':[0.1,0.5,0.9]}
grid_gbr = grid_search.GridSearchCV(gbr, parameters,n_jobs=2,scoring=manualScorer)
%%time
grid_gbr = grid_gbr.fit(X_train,y_train)
print(grid_gbr.grid_scores_)
print("Best: ",grid_gbr.best_params_)
print(gbr.score(X_train,y_train))
print(gbr.score(X_test,y_test))
err = (gbr.predict(X_train)-y_train)**2
print(err.sum()/len(err))
err = (gbr.predict(X_test)-y_test)**2
print(err.sum()/len(err))
err = (gbr.predict(X_train)-y_train)**2
print(err.sum()/len(err))
err = (gbr.predict(X_test)-y_test)**2
print(err.sum()/len(err))
"""
Explanation:
End of explanation
"""
t = []
for i in XX:
t.append(np.count_nonzero(~np.isnan(i)) / float(i.size))
pd.DataFrame(np.array(t)).describe()
"""
Explanation:
End of explanation
"""
svr.predict(X_test)
s = modelList[0]
t.mean(1)
modelList = [svr,knn,etreg,rfr,xgbr,gbr]
reducedModelList = [knn,etreg,xgbr,gbr]
score_train = [[str(f).split("(")[0],f.score(X_train,y_train)] for f in modelList]
score_test = [[str(f).split("(")[0],f.score(X_test,y_test)] for f in modelList]
for idx,i in enumerate(score_train):
print(i[0])
print(" train: %.03f"%i[1])
print(" test: %.03f"%score_test[idx][1])
#reducedModelList = [knn,etreg,xgbr,gbr]
globalPred = np.array([f.predict(XX) for f in reducedModelList]).T
#globalPred.mean(1)
globalPred[0]
y[0]
err = (globalPred.mean(1)-y)**2
print(err.sum()/len(err))
err = (globalPred.mean(1)-y)**2
print(err.sum()/len(err))
for f in modelList:
print(str(f).split("(")[0])
err = (f.predict(XX)-y)**2
print(err.sum()/len(err))
err = (XX[:,0]-y)**2
print(err.sum()/len(err))
for f in modelList:
print(str(f).split("(")[0])
print(f.score(XX,y))
XX[:10,0] # feature 0 is marshall-palmer
svrMeta = SVR()
%%time
svrMeta = svrMeta.fit(globalPred,y)
err = (svrMeta.predict(globalPred)-y)**2
print(err.sum()/len(err))
"""
Explanation:
End of explanation
"""
in_dim = len(XX[0])
out_dim = 1
model = Sequential()
# Dense(64) is a fully-connected layer with 64 hidden units.
# in the first layer, you must specify the expected input data shape:
# here, 20-dimensional vectors.
model.add(Dense(128, input_shape=(in_dim,)))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(1, init='uniform'))
model.add(Activation('linear'))
#sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
#model.compile(loss='mean_squared_error', optimizer=sgd)
rms = RMSprop()
model.compile(loss='mean_squared_error', optimizer=rms)
#model.fit(X_train, y_train, nb_epoch=20, batch_size=16)
#score = model.evaluate(X_test, y_test, batch_size=16)
prep = []
for i in y_train:
prep.append(min(i,20))
prep=np.array(prep)
mi,ma = prep.min(),prep.max()
fy = (prep-mi) / (ma-mi)
#my = fy.max()
#fy = fy/fy.max()
model.fit(np.array(X_train), fy, batch_size=10, nb_epoch=10, validation_split=0.1)
pred = model.predict(np.array(X_test))*ma+mi
err = (pred-y_test)**2
err.sum()/len(err)
r = random.randrange(len(X_train))
print("(Train) Prediction %0.4f, True: %0.4f"%(model.predict(np.array([X_train[r]]))[0][0]*ma+mi,y_train[r]))
r = random.randrange(len(X_test))
print("(Test) Prediction %0.4f, True: %0.4f"%(model.predict(np.array([X_test[r]]))[0][0]*ma+mi,y_test[r]))
"""
Explanation: Here for legacy
End of explanation
"""
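The target preprocessing used for the MLP above (cap the heavy tail, then min-max scale to [0, 1]) packaged as a small helper; a sketch with illustrative values:

```python
import numpy as np

def minmax_scale(y, cap=20.0):
    """Clip heavy-tail targets at `cap`, then rescale to [0, 1] (as done above)."""
    y = np.minimum(np.asarray(y, float), cap)
    lo, hi = y.min(), y.max()
    return (y - lo) / (hi - lo), lo, hi

fy, lo, hi = minmax_scale([0.5, 3.0, 50.0])
print(fy.tolist())  # [0.0, ~0.128, 1.0]
# Predictions are mapped back with pred * (hi - lo) + lo.
```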
%%time
filename = "data/reduced_test_5000.csv"
#filename = "data/test.csv"
test = pd.read_csv(filename)
test = test.set_index('Id')
features_columns = list([u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
def getX(raw):
selected_columns = list([ u'minutes_past',u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
data = raw[selected_columns]
docX= []
for i in data.index.unique():
if isinstance(data.loc[i],pd.core.series.Series):
m = [data.loc[i].as_matrix()]
docX.append(m)
else:
m = data.loc[i].as_matrix()
docX.append(m)
X = np.array(docX)
return X
#%%time
#X=getX(test)
#tmp = []
#for i in X:
# tmp.append(len(i))
#tmp = np.array(tmp)
#sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
#plt.title("Number of ID per number of observations\n(On test dataset)")
#plt.plot()
#testFull = test.dropna()
testNoFullNan = test.loc[test[features_columns].dropna(how='all').index.unique()]
%%time
X=getX(testNoFullNan) # 1min
#XX = [np.array(t).mean(0) for t in X] # 10s
XX=[]
for t in X:
nm = np.nanmean(t,0)
for idx,j in enumerate(nm):
if np.isnan(j):
nm[idx]=global_means[idx]
XX.append(nm)
XX=np.array(XX)
# rescale to clip min at 0 (for non negative matrix factorization)
XX_rescaled=XX[:,:]-np.min(XX,0)
%%time
W = nmf.transform(XX_rescaled)
XX=addFeatures(X,mf=W)
pd.DataFrame(xgbr.predict(XX)).describe()
reducedModelList = [knn,etreg,xgbr,gbr]
globalPred = np.array([f.predict(XX) for f in reducedModelList]).T
predTest = globalPred.mean(1)
predFull = zip(testNoFullNan.index.unique(),predTest)
testNan = test.drop(test[features_columns].dropna(how='all').index)
pred = predFull + predNan
tmp = np.empty(len(testNan))
tmp.fill(0.445000) # 50th percentile of full Nan dataset
predNan = zip(testNan.index.unique(),tmp)
testLeft = test.drop(testNan.index.unique()).drop(testFull.index.unique())
tmp = np.empty(len(testLeft))
tmp.fill(1.27) # 50th percentile of full Nan dataset
predLeft = zip(testLeft.index.unique(),tmp)
len(testFull.index.unique())
len(testNan.index.unique())
len(testLeft.index.unique())
pred = predFull + predNan + predLeft
pred.sort(key=lambda x: x[0], reverse=False)
#reducedModelList = [knn,etreg,xgbr,gbr]
globalPred = np.array([f.predict(XX) for f in reducedModelList]).T
#globalPred.mean(1)
submission = pd.DataFrame(pred)
submission.columns = ["Id","Expected"]
submission.head()
submission.loc[submission['Expected']<0,'Expected'] = 0.445
submission.to_csv("submit4.csv",index=False)
filename = "data/sample_solution.csv"
sol = pd.read_csv(filename)
sol
ss = np.array(sol)
%%time
for a,b in predFull:
ss[a-1][1]=b
ss
sub = pd.DataFrame(pred)
sub.columns = ["Id","Expected"]
sub.Id = sub.Id.astype(int)
sub.head()
sub.to_csv("submit3.csv",index=False)
"""
Explanation: Predict on testset
End of explanation
"""
|
CDIPS-AI-2017/pensieve | Notebooks/emo.ipynb | apache-2.0 | import pensieve as pv
import pandas as pd
pd.options.display.max_rows = 6
import numpy as np
import re
from tqdm import tqdm_notebook as tqdm
%matplotlib notebook
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
"""
Explanation: Extract sentiment vectors from text using NRC lexicon
End of explanation
"""
book = 'book7'
doc = pv.Doc('../../clusterpot/%s.txt' %book)
print("title: %s" %doc.paragraphs[0].text)
"""
Explanation: Read raw text
Seven books of Harry Potter in text format converted from rich text format.
Accessible by setting book to
+ book1.txt
+ ...
+ book7.txt
The raw text is read through pv.Doc and split into paragraph objects.
End of explanation
"""
df = pd.read_hdf('./NRC.h5')
df
"""
Explanation: Read NRC lexicon
The NRC emotional lexicon is a curated list of words with
+ sentiments : positive and negative
+ emotions: anger, anticipation, disgust, fear, joy, sadness, surprise, trust
Details of this work may be found at Saif Mohammad's homepage
The HDF5 file is cleaned from the original NRC text file.
The resulting pandas dataframe has the following columns
+ word | string
+ words in the lexicon
+ emo | string
+ the complete set of sentiments and emotion
+ binary | int
+ 0 if the word does not correspond to the sentiment or emotion
+ 1 if the word elicits the sentiment or emotion
End of explanation
"""
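The per-token `df.query` in the extraction loop that follows rescans the whole lexicon for every word; building a word-to-emotions dict once is much cheaper. A sketch on a toy frame with the same columns:

```python
import pandas as pd

# Toy stand-in with the NRC columns: word / emo / binary.
df = pd.DataFrame({
    "word":   ["abandon", "abandon", "cake", "cake"],
    "emo":    ["fear", "sadness", "joy", "anticipation"],
    "binary": [1, 1, 1, 0],
})

# Build the word -> emotions mapping once, instead of one df.query per token.
lookup = (df[df["binary"] == 1]
          .groupby("word")["emo"]
          .apply(set)
          .to_dict())
print(sorted(lookup["abandon"]))  # ['fear', 'sadness']
```

A word's emotions are then a dict lookup, `lookup.get(w, set())`, instead of a full-frame query.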
emo_list = []
moo_d = dict()
for p in tqdm(range(len(doc.paragraphs)), desc='par'):
s = doc.paragraphs[p].text.split()
emo_d = dict({'joy':0, 'fear':0, 'surprise':0, 'sadness':0, 'disgust':0, 'anger':0})
moo_d[p] = []
for w in s:
w = re.sub("[!@#$%^&*()_+:;,.\\?']", "", w)
emo = df.query("word=='%s'" %w).query("binary==1")['emo'].as_matrix()
for e in emo:
if e in emo_d.keys():
emo_d['%s' %e] += 1
moo_d[p].append(w)
else:
pass
moo_d[p] = np.unique(moo_d[p]).tolist()
emo_list.append(emo_d)
"""
Explanation: Generate emotional vectors for every paragraph
for each paragraph (p) in the text
tokenize the paragraph with .split()
initialize the emotional vector emo_d
for every word (w) in paragraph (p)
strip special characters
query for the word in the NRC lexicon and read out the emo column where binary = 1
for every emotion (e) returned
if the emotion is a key of emo_d, add 1 to that emotion's count
append the emotion vector to a list
End of explanation
"""
pd.DataFrame(emo_list).to_hdf('./book_emo.h5',book)
pd.DataFrame.from_dict(moo_d, orient='index').to_hdf('./book_moo.h5', book)
"""
Explanation: Write emotional vectors to HDF5
HDF5 is structured as
+ bookn
+ len(doc.paragraphs) x len(emo_d.keys()) array of integers
where n spans 1 through 7.
End of explanation
"""
rbook = 'book2'
doc = pv.Doc('../../clusterpot/%s.txt' %rbook)
book_emotions = pd.read_hdf('book_emo.h5',key=rbook)
book_mood = pd.read_hdf('book_moo.h5',key=rbook)
book_emotions.join(book_mood)
"""
Explanation: Read emo vectors in as HDF5 to analyse the text
Read in with corresponding text
End of explanation
"""
emo_kw = 'anger'
emo_rank = np.argsort(book_emotions[emo_kw].as_matrix())[::-1]
emo_rank
"""
Explanation: Simple example: choose emotion to sort by
Choose emo_kw from list ['anger', disgust', 'fear', 'joy', 'sadness', 'surprise']
End of explanation
"""
pidx = 1
print("paragraph from text\n%s\n" %doc.paragraphs[emo_rank[pidx]].text)
print("moo_d emo Harry feels\n%s\n" %book_mood.iloc[emo_rank[pidx]].dropna())
print("emo vector\n%s" %book_emotions.iloc[emo_rank[pidx]])
"""
Explanation: Print the paragraph and emotional vector corresponding to the n-th highest sorted paragraph
pidx | int
+ starts from 0, chooses the paragraph corresponding to the pidx-th highest sorted paragraph
End of explanation
"""
rescale = True
if rescale:
slength = [float(len(doc.paragraphs[i].text.split())) for i in range(len(doc.paragraphs))]
else:
slength = 1
x = book_emotions['anger'].as_matrix()/slength
y = book_emotions['disgust'].as_matrix()/slength
z = book_emotions['sadness'].as_matrix()/slength
fig = plt.figure('emotional correlation',figsize=(7,7))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs=x,ys=y,zs=z)
ax.set_xlabel('anger')
ax.set_ylabel('disgust')
ax.set_zlabel('sadness')
plt.draw()
plt.show()
"""
Explanation: Plot emotional vectors
Plots emotional vectors
rescale | boolean
+ True rescales the vectors with paragraph length.
End of explanation
"""
slength = [float(len(doc.paragraphs[i].text.split())) for i in range(len(doc.paragraphs))]
y = np.sum(book_emotions.as_matrix(),axis=1)
plt.figure('paragraph length vs. emotion sum',figsize=(7,7))
ax = plt.axes([0.15,0.15,0.8,0.8])
ax.scatter(x=slength,y=y)
#ax.set_xlim([0,1])
#ax.set_ylim([0,1])
#ax.set_xscale('log')
plt.draw()
plt.show()
s = "Upon the signature of the International Statute of Secrecy in 1689, wizards went into hiding for good. It was natural, perhaps, that they formed their own small communities within a community. Many small villages and hamlets attracted several magical families, who banded together for mutual support and protection. The villages of Tinworsh in Cornwall, Upper Flagley in Yorkshire, and Ottery St. Catchpole on the south coast of England were notable homes to knots of Wizarding families who lived alongside tolerant and sometimes Confunded Muggles. Most celebrated of these half-magical dwelling places is, perhaps, Godrics Hollow, the West Country village where the great wizard Godric Gryffindor was born, and where Bowman Wright, Wizarding smith, forged the first Golden Snitch. The graveyard is full of the names of ancient magical families, and this accounts, no doubt, for the stories of hauntings that have dogged the little church beside it for many centuries."
w = s.split()[0]
%timeit re.sub("[!@#$%^&*()_+:;,.\\?']", "", w)
%%timeit
punctuation = ["!","@","#","$","%","^","&","*","(",")","_","+",":",";",",",".","?","'","\\"]
for punc in punctuation:
    w = w.replace("%s" % punc, '')  # assign the result back: str.replace returns a new string
moodf = pd.DataFrame.from_dict(moo_d, orient='index')
moodf.iloc[25].dropna()
moodf.to_hdf('./book_moo_v2.h5', book)
book_mood = pd.read_hdf('book_moo_v2.h5',key=rbook)
"""
Explanation: Test code
End of explanation
"""
|
shareactorIO/pipeline | source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/TFSlim/slim_walkthrough.ipynb | apache-2.0 | import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import math
import numpy as np
import tensorflow as tf
import time
from datasets import dataset_utils
# Main slim library
slim = tf.contrib.slim
"""
Explanation: TF-Slim Walkthrough
This notebook will walk you through the basics of using TF-Slim to define, train and evaluate neural networks on various tasks. It assumes a basic knowledge of neural networks.
Table of contents
<a href="#Install">Installation and setup</a><br>
<a href='#MLP'>Creating your first neural network with TF-Slim</a><br>
<a href='#ReadingTFSlimDatasets'>Reading Data with TF-Slim</a><br>
<a href='#CNN'>Training a convolutional neural network (CNN)</a><br>
<a href='#Pretrained'>Using pre-trained models</a><br>
Installation and setup
<a id='Install'></a>
As of 8/28/16, the latest stable release of TF is r0.10, which does not contain the latest version of slim.
To obtain the latest version of TF-Slim, please install the most recent nightly build of TF
as explained here.
To use TF-Slim for image classification (as we do in this notebook), you also have to install the TF-Slim image models library from here. Let's suppose you install this into a directory called TF_MODELS. Then you should change directory to TF_MODELS/slim before running this notebook, so that these files are in your python path.
To check you've got these two steps to work, just execute the cell below. If it complains about unknown modules, restart the notebook after moving to the TF-Slim models directory.
End of explanation
"""
def regression_model(inputs, is_training=True, scope="deep_regression"):
"""Creates the regression model.
Args:
inputs: A node that yields a `Tensor` of size [batch_size, dimensions].
is_training: Whether or not we're currently training the model.
scope: An optional variable_op scope for the model.
Returns:
predictions: 1-D `Tensor` of shape [batch_size] of responses.
end_points: A dict of end points representing the hidden layers.
"""
with tf.variable_scope(scope, 'deep_regression', [inputs]):
end_points = {}
# Set the default weight regularizer and activation for each fully_connected layer.
with slim.arg_scope([slim.fully_connected],
activation_fn=tf.nn.relu,
weights_regularizer=slim.l2_regularizer(0.01)):
# Creates a fully connected layer from the inputs with 32 hidden units.
net = slim.fully_connected(inputs, 32, scope='fc1')
end_points['fc1'] = net
# Adds a dropout layer to prevent over-fitting.
net = slim.dropout(net, 0.8, is_training=is_training)
# Adds another fully connected layer with 16 hidden units.
net = slim.fully_connected(net, 16, scope='fc2')
end_points['fc2'] = net
# Creates a fully-connected layer with a single hidden unit. Note that the
# layer is made linear by setting activation_fn=None.
predictions = slim.fully_connected(net, 1, activation_fn=None, scope='prediction')
end_points['out'] = predictions
return predictions, end_points
"""
Explanation: Creating your first neural network with TF-Slim
<a id='MLP'></a>
Below we give some code to create a simple multilayer perceptron (MLP) which can be used
for regression problems. The model has 2 hidden layers.
The output is a single node.
When this function is called, it will create various nodes, and silently add them to whichever global TF graph is currently in scope. When a node which corresponds to a layer with adjustable parameters (e.g., a fully connected layer) is created, additional parameter variable nodes are silently created, and added to the graph. (We will discuss how to train the parameters later.)
We use variable scope to put all the nodes under a common name,
so that the graph has some hierarchical structure.
This is useful when we want to visualize the TF graph in tensorboard, or if we want to query related
variables.
The fully connected layers all use the same L2 weight decay and ReLu activations, as specified by arg_scope. (However, the final layer overrides these defaults, and uses an identity activation function.)
We also illustrate how to add a dropout layer after the first fully connected layer (FC1). Note that at test time,
we do not drop out nodes, but instead use the average activations; hence we need to know whether the model is being
constructed for training or testing, since the computational graph will be different in the two cases
(although the variables, storing the model parameters, will be shared, since they have the same name/scope).
End of explanation
"""
with tf.Graph().as_default():
# Dummy placeholders for arbitrary number of 1d inputs and outputs
inputs = tf.placeholder(tf.float32, shape=(None, 1))
outputs = tf.placeholder(tf.float32, shape=(None, 1))
# Build model
predictions, end_points = regression_model(inputs)
# Print name and shape of each tensor.
print "Layers"
for k, v in end_points.iteritems():
print 'name = {}, shape = {}'.format(v.name, v.get_shape())
# Print name and shape of parameter nodes (values not yet initialized)
print "\n"
print "Parameters"
for v in slim.get_model_variables():
print 'name = {}, shape = {}'.format(v.name, v.get_shape())
"""
Explanation: Let's create the model and examine its structure.
We create a TF graph and call regression_model(), which adds nodes (tensors) to the graph. We then examine their shape, and print the names of all the model variables which have been implicitly created inside of each layer. We see that the names of the variables follow the scopes that we specified.
End of explanation
"""
def produce_batch(batch_size, noise=0.3):
xs = np.random.random(size=[batch_size, 1]) * 10
ys = np.sin(xs) + 5 + np.random.normal(size=[batch_size, 1], scale=noise)
return [xs.astype(np.float32), ys.astype(np.float32)]
x_train, y_train = produce_batch(200)
x_test, y_test = produce_batch(200)
plt.scatter(x_train, y_train)
"""
Explanation: Let's create some 1D regression data.
We will train and test the model on some noisy observations of a nonlinear function.
End of explanation
"""
def convert_data_to_tensors(x, y):
inputs = tf.constant(x)
inputs.set_shape([None, 1])
outputs = tf.constant(y)
outputs.set_shape([None, 1])
return inputs, outputs
# The following snippet trains the regression model using a sum_of_squares loss.
ckpt_dir = '/tmp/regression_model/'
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
inputs, targets = convert_data_to_tensors(x_train, y_train)
# Make the model.
predictions, nodes = regression_model(inputs, is_training=True)
# Add the loss function to the graph.
loss = slim.losses.sum_of_squares(predictions, targets)
# The total loss is the user's loss plus any regularization losses.
total_loss = slim.losses.get_total_loss()
# Specify the optimizer and create the train op:
optimizer = tf.train.AdamOptimizer(learning_rate=0.005)
train_op = slim.learning.create_train_op(total_loss, optimizer)
# Run the training inside a session.
final_loss = slim.learning.train(
train_op,
logdir=ckpt_dir,
number_of_steps=5000,
save_summaries_secs=5,
log_every_n_steps=500)
print("Finished training. Last batch loss:", final_loss)
print("Checkpoint saved in %s" % ckpt_dir)
"""
Explanation: Let's fit the model to the data
The user has to specify the loss function and the optimizer, and slim does the rest.
In particular, the slim.learning.train function does the following:
For each iteration, evaluate the train_op, which updates the parameters using the optimizer applied to the current minibatch. Also, update the global_step.
Occasionally store the model checkpoint in the specified directory. This is useful in case your machine crashes - then you can simply restart from the specified checkpoint.
End of explanation
"""
with tf.Graph().as_default():
inputs, targets = convert_data_to_tensors(x_train, y_train)
predictions, end_points = regression_model(inputs, is_training=True)
# Add multiple loss nodes.
sum_of_squares_loss = slim.losses.sum_of_squares(predictions, targets)
absolute_difference_loss = slim.losses.absolute_difference(predictions, targets)
# The following two ways to compute the total loss are equivalent
regularization_loss = tf.add_n(slim.losses.get_regularization_losses())
total_loss1 = sum_of_squares_loss + absolute_difference_loss + regularization_loss
# Regularization Loss is included in the total loss by default.
# This is good for training, but not for testing.
total_loss2 = slim.losses.get_total_loss(add_regularization_losses=True)
init_op = tf.initialize_all_variables()
with tf.Session() as sess:
sess.run(init_op) # Will initialize the parameters with random weights.
total_loss1, total_loss2 = sess.run([total_loss1, total_loss2])
print('Total Loss1: %f' % total_loss1)
print('Total Loss2: %f' % total_loss2)
print('Regularization Losses:')
for loss in slim.losses.get_regularization_losses():
print(loss)
print('Loss Functions:')
for loss in slim.losses.get_losses():
print(loss)
"""
Explanation: Training with multiple loss functions.
Sometimes we have multiple objectives we want to simultaneously optimize.
In slim, it is easy to add more losses, as we show below. (We do not optimize the total loss in this example,
but we show how to compute it.)
End of explanation
"""
with tf.Graph().as_default():
inputs, targets = convert_data_to_tensors(x_test, y_test)
# Create the model structure. (Parameters will be loaded below.)
predictions, end_points = regression_model(inputs, is_training=False)
# Make a session which restores the old parameters from a checkpoint.
sv = tf.train.Supervisor(logdir=ckpt_dir)
with sv.managed_session() as sess:
inputs, predictions, targets = sess.run([inputs, predictions, targets])
plt.scatter(inputs, targets, c='r');
plt.scatter(inputs, predictions, c='b');
plt.title('red=true, blue=predicted')
"""
Explanation: Let's load the saved model and use it for prediction.
End of explanation
"""
with tf.Graph().as_default():
inputs, targets = convert_data_to_tensors(x_test, y_test)
predictions, end_points = regression_model(inputs, is_training=False)
# Specify metrics to evaluate:
names_to_value_nodes, names_to_update_nodes = slim.metrics.aggregate_metric_map({
'Mean Squared Error': slim.metrics.streaming_mean_squared_error(predictions, targets),
'Mean Absolute Error': slim.metrics.streaming_mean_absolute_error(predictions, targets)
})
# Make a session which restores the old graph parameters, and then run eval.
sv = tf.train.Supervisor(logdir=ckpt_dir)
with sv.managed_session() as sess:
metric_values = slim.evaluation.evaluation(
sess,
num_evals=1, # Single pass over data
eval_op=names_to_update_nodes.values(),
final_op=names_to_value_nodes.values())
names_to_values = dict(zip(names_to_value_nodes.keys(), metric_values))
for key, value in names_to_values.iteritems():
print('%s: %f' % (key, value))
"""
Explanation: Let's compute various evaluation metrics on the test set.
In TF-Slim terminology, losses are optimized, but metrics (which may not be differentiable, e.g., precision and recall) are just measured. As an illustration, the code below computes mean squared error and mean absolute error metrics on the test set.
Each metric declaration creates several local variables (which must be initialized via tf.initialize_local_variables()) and returns both a value_op and an update_op. When evaluated, the value_op returns the current value of the metric. The update_op loads a new batch of data, runs the model, obtains the predictions and accumulates the metric statistics appropriately before returning the current value of the metric. We store these value nodes and update nodes in 2 dictionaries.
After creating the metric nodes, we can pass them to slim.evaluation.evaluation, which repeatedly evaluates these nodes the specified number of times. (This allows us to compute the evaluation in a streaming fashion across minibatches, which is useful for large datasets.) Finally, we print the final value of each metric.
End of explanation
"""
import tensorflow as tf
from datasets import dataset_utils
url = "http://download.tensorflow.org/data/flowers.tar.gz"
flowers_data_dir = '/tmp/flowers'
if not tf.gfile.Exists(flowers_data_dir):
tf.gfile.MakeDirs(flowers_data_dir)
dataset_utils.download_and_uncompress_tarball(url, flowers_data_dir)
"""
Explanation: Reading Data with TF-Slim
<a id='ReadingTFSlimDatasets'></a>
Reading data with TF-Slim has two main components: A
Dataset and a
DatasetDataProvider. The former is a descriptor of a dataset, while the latter performs the actions necessary for actually reading the data. Lets look at each one in detail:
Dataset
A TF-Slim
Dataset
contains descriptive information about a dataset necessary for reading it, such as the list of data files and how to decode them. It also contains metadata including class labels, the size of the train/test splits and descriptions of the tensors that the dataset provides. For example, some datasets contain images with labels. Others augment this data with bounding box annotations, etc. The Dataset object allows us to write generic code using the same API, regardless of the data content and encoding type.
TF-Slim's Dataset works especially well when the data is stored as a (possibly sharded)
TFRecords file, where each record contains a tf.train.Example protocol buffer.
TF-Slim uses a consistent convention for naming the keys and values inside each Example record.
DatasetDataProvider
A
DatasetDataProvider is a class which actually reads the data from a dataset. It is highly configurable to read the data in various ways that may make a big impact on the efficiency of your training process. For example, it can be single or multi-threaded. If your data is sharded across many files, it can read the files serially, or read from every file simultaneously.
Demo: The Flowers Dataset
For convenience, we've included scripts to convert several common image datasets into TFRecord format and have provided
the Dataset descriptor files necessary for reading them. We demonstrate how easy it is to use these datasets via the Flowers dataset below.
Download the Flowers Dataset
<a id='DownloadFlowers'></a>
We've made available a tarball of the Flowers dataset which has already been converted to TFRecord format.
End of explanation
"""
from datasets import flowers
import tensorflow as tf
slim = tf.contrib.slim
with tf.Graph().as_default():
dataset = flowers.get_split('train', flowers_data_dir)
data_provider = slim.dataset_data_provider.DatasetDataProvider(
dataset, common_queue_capacity=32, common_queue_min=1)
image, label = data_provider.get(['image', 'label'])
with tf.Session() as sess:
with slim.queues.QueueRunners(sess):
for i in xrange(4):
np_image, np_label = sess.run([image, label])
height, width, _ = np_image.shape
name = dataset.labels_to_names[np_label]
plt.figure()
plt.imshow(np_image)
plt.title('%s, %d x %d' % (name, height, width))
plt.axis('off')
plt.show()
"""
Explanation: Display some of the data.
End of explanation
"""
def my_cnn(images, num_classes, is_training): # is_training is not used...
with slim.arg_scope([slim.max_pool2d], kernel_size=[3, 3], stride=2):
net = slim.conv2d(images, 64, [5, 5])
net = slim.max_pool2d(net)
net = slim.conv2d(net, 64, [5, 5])
net = slim.max_pool2d(net)
net = slim.flatten(net)
net = slim.fully_connected(net, 192)
net = slim.fully_connected(net, num_classes, activation_fn=None)
return net
"""
Explanation: Convolutional neural nets (CNNs).
<a id='CNN'></a>
In this section, we show how to train an image classifier using a simple CNN.
Define the model.
Below we define a simple CNN. Note that the output layer is linear function - we will apply softmax transformation externally to the model, either in the loss function (for training), or in the prediction function (during testing).
End of explanation
"""
import tensorflow as tf
with tf.Graph().as_default():
# The model can handle any input size because the first layer is convolutional.
# The size of the model is determined when image_node is first passed into the my_cnn function.
# Once the variables are initialized, the size of all the weight matrices is fixed.
# Because of the fully connected layers, this means that all subsequent images must have the same
# input size as the first image.
batch_size, height, width, channels = 3, 28, 28, 3
images = tf.random_uniform([batch_size, height, width, channels], maxval=1)
# Create the model.
num_classes = 10
logits = my_cnn(images, num_classes, is_training=True)
probabilities = tf.nn.softmax(logits)
# Initialize all the variables (including parameters) randomly.
init_op = tf.initialize_all_variables()
with tf.Session() as sess:
# Run the init_op, evaluate the model outputs and print the results:
sess.run(init_op)
probabilities = sess.run(probabilities)
print('Probabilities Shape:')
print(probabilities.shape) # batch_size x num_classes
print('\nProbabilities:')
print(probabilities)
print('\nSumming across all classes (Should equal 1):')
print(np.sum(probabilities, 1)) # Each row sums to 1
"""
Explanation: Apply the model to some randomly generated images.
End of explanation
"""
from preprocessing import inception_preprocessing
import tensorflow as tf
slim = tf.contrib.slim
def load_batch(dataset, batch_size=32, height=299, width=299, is_training=False):
"""Loads a single batch of data.
Args:
dataset: The dataset to load.
batch_size: The number of images in the batch.
height: The size of each image after preprocessing.
width: The size of each image after preprocessing.
is_training: Whether or not we're currently training or evaluating.
Returns:
images: A Tensor of size [batch_size, height, width, 3], image samples that have been preprocessed.
images_raw: A Tensor of size [batch_size, height, width, 3], image samples that can be used for visualization.
labels: A Tensor of size [batch_size], whose values range between 0 and dataset.num_classes.
"""
data_provider = slim.dataset_data_provider.DatasetDataProvider(
dataset, common_queue_capacity=32,
common_queue_min=8)
image_raw, label = data_provider.get(['image', 'label'])
# Preprocess image for usage by Inception.
image = inception_preprocessing.preprocess_image(image_raw, height, width, is_training=is_training)
# Preprocess the image for display purposes.
image_raw = tf.expand_dims(image_raw, 0)
image_raw = tf.image.resize_images(image_raw, height, width)
image_raw = tf.squeeze(image_raw)
# Batch it up.
images, images_raw, labels = tf.train.batch(
[image, image_raw, label],
batch_size=batch_size,
num_threads=1,
capacity=2 * batch_size)
return images, images_raw, labels
from datasets import flowers
# This might take a few minutes.
train_dir = '/tmp/tfslim_model/'
print('Will save model to %s' % train_dir)
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
dataset = flowers.get_split('train', flowers_data_dir)
images, _, labels = load_batch(dataset)
# Create the model:
logits = my_cnn(images, num_classes=dataset.num_classes, is_training=True)
# Specify the loss function:
one_hot_labels = slim.one_hot_encoding(labels, dataset.num_classes)
slim.losses.softmax_cross_entropy(logits, one_hot_labels)
total_loss = slim.losses.get_total_loss()
# Create some summaries to visualize the training process:
tf.scalar_summary('losses/Total Loss', total_loss)
# Specify the optimizer and create the train op:
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = slim.learning.create_train_op(total_loss, optimizer)
# Run the training:
final_loss = slim.learning.train(
train_op,
logdir=train_dir,
number_of_steps=1, # For speed, we just do 1 step
save_summaries_secs=1)
print('Finished training. Final batch loss %f' % final_loss)
"""
Explanation: Train the model on the Flowers dataset.
Before starting, make sure you've run the code to <a href="#DownloadFlowers">Download the Flowers</a> dataset. Now, we'll get a sense of what it looks like to use TF-Slim's training functions found in
learning.py. First, we'll create a function, load_batch, that loads batches of data from a dataset. Next, we'll train a model for a single step (just to demonstrate the API), and evaluate the results.
End of explanation
"""
from datasets import flowers
# This might take a few minutes.
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.DEBUG)
dataset = flowers.get_split('train', flowers_data_dir)
images, _, labels = load_batch(dataset)
logits = my_cnn(images, num_classes=dataset.num_classes, is_training=False)
predictions = tf.argmax(logits, 1)
# Define the metrics:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
'eval/Accuracy': slim.metrics.streaming_accuracy(predictions, labels),
'eval/Recall@5': slim.metrics.streaming_recall_at_k(logits, labels, 5),
})
print('Running evaluation Loop...')
checkpoint_path = tf.train.latest_checkpoint(train_dir)
metric_values = slim.evaluation.evaluate_once(
master='',
checkpoint_path=checkpoint_path,
logdir=train_dir,
eval_op=names_to_updates.values(),
final_op=names_to_values.values())
names_to_values = dict(zip(names_to_values.keys(), metric_values))
for name in names_to_values:
print('%s: %f' % (name, names_to_values[name]))
"""
Explanation: Evaluate some metrics.
As we discussed above, we can compute various metrics besides the loss.
Below we show how to compute prediction accuracy of the trained model, as well as top-5 classification accuracy. (The difference between evaluation and evaluation_loop is that the latter writes the results to a log directory, so they can be viewed in tensorboard.)
End of explanation
"""
from datasets import dataset_utils
url = "http://download.tensorflow.org/models/inception_v1_2016_08_28.tar.gz"
checkpoints_dir = '/tmp/checkpoints'
if not tf.gfile.Exists(checkpoints_dir):
tf.gfile.MakeDirs(checkpoints_dir)
dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir)
"""
Explanation: Using pre-trained models
<a id='Pretrained'></a>
Neural nets work best when they have many parameters, making them very flexible function approximators.
However, this means they must be trained on big datasets. Since this process is slow, we provide various pre-trained models - see the list here.
You can either use these models as-is, or you can perform "surgery" on them, to modify them for some other task. For example, it is common to "chop off" the final pre-softmax layer, and replace it with a new set of weights corresponding to some new set of labels. You can then quickly fine tune the new model on a small new dataset. We illustrate this below, using inception-v1 as the base model. While models like Inception V3 are more powerful, Inception V1 is used for speed purposes.
Download the Inception V1 checkpoint
End of explanation
"""
import numpy as np
import os
import tensorflow as tf
import urllib2
from datasets import imagenet
from nets import inception
from preprocessing import inception_preprocessing
slim = tf.contrib.slim
batch_size = 3
image_size = inception.inception_v1.default_image_size
with tf.Graph().as_default():
url = 'https://upload.wikimedia.org/wikipedia/commons/7/70/EnglishCockerSpaniel_simon.jpg'
image_string = urllib2.urlopen(url).read()
image = tf.image.decode_jpeg(image_string, channels=3)
processed_image = inception_preprocessing.preprocess_image(image, image_size, image_size, is_training=False)
processed_images = tf.expand_dims(processed_image, 0)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(inception.inception_v1_arg_scope()):
logits, _ = inception.inception_v1(processed_images, num_classes=1001, is_training=False)
probabilities = tf.nn.softmax(logits)
init_fn = slim.assign_from_checkpoint_fn(
os.path.join(checkpoints_dir, 'inception_v1.ckpt'),
slim.get_model_variables('InceptionV1'))
with tf.Session() as sess:
init_fn(sess)
np_image, probabilities = sess.run([image, probabilities])
probabilities = probabilities[0, 0:]
sorted_inds = [i[0] for i in sorted(enumerate(-probabilities), key=lambda x:x[1])]
plt.figure()
plt.imshow(np_image.astype(np.uint8))
plt.axis('off')
plt.show()
names = imagenet.create_readable_names_for_imagenet_labels()
for i in range(5):
index = sorted_inds[i]
print('Probability %0.2f%% => [%s]' % (probabilities[index] * 100, names[index]))
"""
Explanation: Apply Pre-trained model to Images.
We have to convert each image to the size expected by the model checkpoint.
There is no easy way to determine this size from the checkpoint itself.
So we use a preprocessor to enforce this.
End of explanation
"""
# Note that this may take several minutes.
import os
from datasets import flowers
from nets import inception
from preprocessing import inception_preprocessing
slim = tf.contrib.slim
image_size = inception.inception_v1.default_image_size
def get_init_fn():
"""Returns a function run by the chief worker to warm-start the training."""
checkpoint_exclude_scopes=["InceptionV1/Logits", "InceptionV1/AuxLogits"]
exclusions = [scope.strip() for scope in checkpoint_exclude_scopes]
variables_to_restore = []
for var in slim.get_model_variables():
excluded = False
for exclusion in exclusions:
if var.op.name.startswith(exclusion):
excluded = True
break
if not excluded:
variables_to_restore.append(var)
return slim.assign_from_checkpoint_fn(
os.path.join(checkpoints_dir, 'inception_v1.ckpt'),
variables_to_restore)
train_dir = '/tmp/inception_finetuned/'
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
dataset = flowers.get_split('train', flowers_data_dir)
images, _, labels = load_batch(dataset, height=image_size, width=image_size)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(inception.inception_v1_arg_scope()):
logits, _ = inception.inception_v1(images, num_classes=dataset.num_classes, is_training=True)
# Specify the loss function:
one_hot_labels = slim.one_hot_encoding(labels, dataset.num_classes)
slim.losses.softmax_cross_entropy(logits, one_hot_labels)
total_loss = slim.losses.get_total_loss()
# Create some summaries to visualize the training process:
tf.scalar_summary('losses/Total Loss', total_loss)
# Specify the optimizer and create the train op:
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = slim.learning.create_train_op(total_loss, optimizer)
# Run the training:
final_loss = slim.learning.train(
train_op,
logdir=train_dir,
init_fn=get_init_fn(),
number_of_steps=2)
print('Finished training. Last batch loss %f' % final_loss)
"""
Explanation: Fine-tune the model on a different set of labels.
We will fine tune the inception model on the Flowers dataset.
End of explanation
"""
import numpy as np
import tensorflow as tf
from datasets import flowers
from nets import inception
slim = tf.contrib.slim
image_size = inception.inception_v1.default_image_size
batch_size = 3
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
dataset = flowers.get_split('train', flowers_data_dir)
images, images_raw, labels = load_batch(dataset, height=image_size, width=image_size)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(inception.inception_v1_arg_scope()):
logits, _ = inception.inception_v1(images, num_classes=dataset.num_classes, is_training=True)
probabilities = tf.nn.softmax(logits)
checkpoint_path = tf.train.latest_checkpoint(train_dir)
init_fn = slim.assign_from_checkpoint_fn(
checkpoint_path,
slim.get_variables_to_restore())
with tf.Session() as sess:
with slim.queues.QueueRunners(sess):
sess.run(tf.initialize_local_variables())
init_fn(sess)
np_probabilities, np_images_raw, np_labels = sess.run([probabilities, images_raw, labels])
for i in xrange(batch_size):
image = np_images_raw[i, :, :, :]
true_label = np_labels[i]
predicted_label = np.argmax(np_probabilities[i, :])
predicted_name = dataset.labels_to_names[predicted_label]
true_name = dataset.labels_to_names[true_label]
plt.figure()
plt.imshow(image.astype(np.uint8))
plt.title('Ground Truth: [%s], Prediction [%s]' % (true_name, predicted_name))
plt.axis('off')
plt.show()
"""
Explanation: Apply fine tuned model to some images.
End of explanation
"""
|
kanhua/pypvcell | legacy/Mechanical stack 2J III-V-Si.ipynb | apache-2.0 | %matplotlib inline
import numpy as np
from scipy.interpolate import interp2d
import matplotlib.pyplot as plt
from scipy.io import savemat
from iii_v_si import calc_2j_si_eta, calc_2j_si_eta_direct
from detail_balanced_MJ import calc_1j_eta
def vary_top_eg(top_cell_qe,n_s=1):
topcell_eg = np.linspace(0.9, 3, num=100)
eta = np.zeros(topcell_eg.shape)
for p in range(topcell_eg.shape[0]):
eta[p] = calc_2j_si_eta_direct(top_eg=topcell_eg[p], top_rad_eta=1,
top_qe=top_cell_qe, bot_rad_eta=1,
bot_qe=1, n_s=n_s, mj="MS")
print("At AM1.5g, direct band gap assumption of silicon")
print("max eta %s:" % eta.max())
print("optimal Eg: %s" % topcell_eg[eta.argmax()])
return topcell_eg,eta
"""
Explanation: Introduction
This script finds the optimal band gaps of mechanically stacked III-V-on-Si tandem solar cells. It uses a detailed-balance approach to calculate the I-V curve of each subcell. The efficiency is obtained by adding up the maximum power of the individual subcells and dividing the sum by the total illumination power.
Details of how the I-V is calculated can be found in this paper.
End of explanation
"""
eg1,eta1=vary_top_eg(1)
plt.plot(eg1,eta1)
plt.xlabel("top cell's band gap")
plt.ylabel("efficiency")
plt.savefig("mstopeg.pdf")
"""
Explanation: Assume that the top cell has 100% EQE
End of explanation
"""
qe_range=np.linspace(0.5,1,num=3)
for q in qe_range:
eg,eta = vary_top_eg(q)
plt.plot(eg,eta,hold=True,label="QE=%s"%q)
plt.legend(loc="best")
plt.xlabel("top cell's band gap")
plt.ylabel("efficiency")
"""
Explanation: The maximum efficiency is then 42%, and the optimal band gap is 1.81 eV. For two-terminal 2J devices, the maximum efficiency is 41% with a 1.74-eV top cell on silicon. As we can see, mechanical stacking does not fundamentally improve the efficiency.
Check whether different EQE values shift the peak
End of explanation
"""
eg1,eta1=vary_top_eg(0.001)
plt.plot(eg1,eta1)
plt.xlabel("top cell's band gap")
plt.ylabel("efficiency")
"""
Explanation: Different top-cell EQE values do not change the optimal band gap of the top cell, as expected.
Assume that the top cell has very low EQE
End of explanation
"""
# calculate the SQ-limit efficiency of silicon
eta = calc_1j_eta(eg=1.12, qe=1, r_eta=1, cell_temperature=300)
print(eta)
"""
Explanation: The maximum efficiency in this case is around 30%, which should be very close to the limiting efficiency of a single-junction silicon cell. We can check:
End of explanation
"""
|
dacr26/CompPhys | .ipynb_checkpoints/01_01_euler-checkpoint.ipynb | mit | T0 = 10. # initial temperature
Ts = 83. # temp. of the environment
r = 0.1 # cooling rate
dt = 0.05 # time step
tmax = 60. # maximum time
nsteps = int(tmax/dt) # number of steps
T = T0
for i in range(1,nsteps+1):
new_T = T - r*(T-Ts)*dt
T = new_T
print i,i*dt, T
# we can also update in place: T = T - r*(T-Ts)*dt
"""
Explanation: author:
- 'Adrian E. Feiguin'
title: 'Computational Physics'
...
Ordinary differential equations
Let’s consider a simple 1st order equation:
$$\frac{dy}{dx}=f(x,y)$$
To solve this equation with a computer we need to discretize the differences: we
have to convert the differential equation into a “finite differences” equation. The simplest
solution is Euler’s method.
Euler’s method
Suppose that at a point $x_0$ the function $y$ has the value $y_0$. We
want to find the approximate value of $y$ in a point $x_1$ close to
$x_0$, $x_1=x_0+\Delta x$, with $\Delta x$ small. We assume that $f$,
the rate of change of $y$, is constant in this interval $\Delta x$.
Therefore we find: $$\begin{eqnarray}
dx \approx \Delta x &=& x_1-x_0, \\
dy \approx \Delta y &=& y_1-y_0,\end{eqnarray}$$ with
$y_1=y(x_1)=y(x_0+\Delta x)$. Then we re-write the differential equation in terms of discrete differences as:
$$\frac{\Delta y}{\Delta x}=f(x,y)$$ or
$$\Delta y = f(x,y)\Delta x$$
and approximate the value of $y_1$ as
$$y_1=y_0+f(x_0,y_0)(x_1-x_0)$$ We can generalize this formula to find
the value of $y$ at $x_2=x_1+\Delta x$ as
$$y_{2}=y_1+f(x_1,y_1)\Delta x,$$ or in the general case:
$$y_{n+1}=y_n+f(x_n,y_n)\Delta x$$
This is a good approximation as long as $\Delta x$ is “small”. What is
small? Depends on the problem, but it is basically defined by the “rate
of change”, or “smoothness” of $f$. $f(x)$ has to behave smoothly and
without rapid variations in the interval $\Delta x$.
Notice that Euler’s method is equivalent to a 1st order Taylor expansion
about the point $x_0$. The “local error” in calculating $x_1$ is then
$O(\Delta x^2)$. If we use the method $N$ times to calculate $N$
consecutive points, the propagated “global” error will be
$N O(\Delta x^2)\approx O(\Delta x)$. This error decreases linearly with
decreasing step, so we need to
halve the step size to reduce the error in half. The numerical work for
each step consists of a single evaluation of $f$.
Exercise 1.1: Newton’s law of cooling
If the temperature difference between an object and its surroundings is
small, the rate of change of the temperature of the object is
proportional to the temperature difference: $$\frac{dT}{dt}=-r(T-T_s),$$
where $T$ is the temperature of the body, $T_s$ is the temperature of
the environment, and $r$ is a “cooling constant” that depends on the
heat transfer mechanism, the contact area with the environment and the
thermal properties of the body. The minus sign appears because if
$T>T_s$, the temperature must decrease.
Write a program to calculate the temperature of a body at a time $t$,
given the cooling constant $r$ and the temperature of the body at time
$t=0$. Plot the results for $r=0.1\frac{1}{min}$; $T_0=83^{\circ} C$
using different intervals $\Delta t$ and compare with exact (analytical)
results.
End of explanation
"""
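Before plotting, it is worth checking the Euler result against the known analytic solution of the cooling law, $T(t) = T_s + (T_0 - T_s)e^{-rt}$. A self-contained sketch (parameter values match the cells above):

```python
import math

def euler_cooling(T0, Ts, r, dt, tmax):
    """Integrate dT/dt = -r*(T - Ts) with Euler's method; return the final T."""
    T = T0
    nsteps = int(tmax / dt)
    for _ in range(nsteps):
        T = T - r * (T - Ts) * dt
    return T

T0, Ts, r, tmax = 10.0, 83.0, 0.1, 60.0
exact = Ts + (T0 - Ts) * math.exp(-r * tmax)          # analytic solution at t = tmax
approx = euler_cooling(T0, Ts, r, dt=0.05, tmax=tmax)
error = abs(approx - exact)
```

With $\Delta t = 0.05$ the error at $t = 60$ min is a small fraction of a degree, consistent with the first-order accuracy discussed above.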
%matplotlib inline
import numpy as np
from matplotlib import pyplot
"""
Explanation: Let's try plotting the results. We first need to import the required libraries and methods
End of explanation
"""
my_time = np.zeros(nsteps)
my_temp = np.zeros(nsteps)
"""
Explanation: Next, we create numpy arrays to store the (x,y) values
End of explanation
"""
T = T0
my_temp[0] = T0
for i in range(1,nsteps):
T = T - r*(T-Ts)*dt
my_time[i] = i*dt
my_temp[i] = T
pyplot.plot(my_time, my_temp, color='#003366', ls='-', lw=3)
pyplot.xlabel('time')
pyplot.ylabel('temperature');
"""
Explanation: We have to rewrite the loop to store the values in the arrays. Remember that numpy arrays start from 0.
End of explanation
"""
my_time = np.linspace(0.,tmax,nsteps)
pyplot.plot(my_time, my_temp, color='#003366', ls='-', lw=3)
pyplot.xlabel('time')
pyplot.ylabel('temperature');
"""
Explanation: We could have saved effort by defining
End of explanation
"""
def euler(y, f, dx):
"""Computes y_new = y + f*dx
Parameters
----------
y : float
old value of y_n at x_n
f : float
first derivative f(x,y) evaluated at (x_n,y_n)
dx : float
x step
"""
return y + f*dx
T = T0
for i in range(1,nsteps):
T = euler(T, -r*(T-Ts), dt)
my_temp[i] = T
"""
Explanation: Alternatively, and in order to re use code in future problems, we could have created a function.
End of explanation
"""
euler = lambda y, f, dx: y + f*dx
"""
Explanation: Actually, for this particularly simple case, calling a function may introduce unnecessary overhead, but it is an example that we will find useful for future applications. For a simple function like this we could have used a "lambda" function (more about lambda functions <a href="http://www.secnetix.de/olli/Python/lambda_functions.hawk">here</a>).
End of explanation
"""
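A quick numeric sketch (not in the original notebook) of the first-order convergence claim made earlier: halving the step should roughly halve the global error.

```python
import math

def euler_final(T0, Ts, r, dt, tmax):
    """Final temperature from Euler integration of dT/dt = -r*(T - Ts)."""
    T = T0
    for _ in range(int(tmax / dt)):
        T = T - r * (T - Ts) * dt
    return T

T0, Ts, r, tmax = 10.0, 83.0, 0.1, 60.0
exact = Ts + (T0 - Ts) * math.exp(-r * tmax)

err_dt = abs(euler_final(T0, Ts, r, 0.1, tmax) - exact)
err_half = abs(euler_final(T0, Ts, r, 0.05, tmax) - exact)
ratio = err_dt / err_half   # expect ~2 for a first-order method
```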
dt = 1.
#my_color = ['#003366','#663300','#660033','#330066']
my_color = ['red', 'green', 'blue', 'black']
for j in range(0,4):
nsteps = int(tmax/dt) #the arrays will have different size for different time steps
my_time = np.linspace(dt,tmax,nsteps)
my_temp = np.zeros(nsteps)
T = T0
for i in range(1,nsteps):
T = euler(T, -r*(T-Ts), dt)
my_temp[i] = T
pyplot.plot(my_time, my_temp, color=my_color[j], ls='-', lw=3)
dt = dt/2.
pyplot.xlabel('time');
pyplot.ylabel('temperature');
pyplot.xlim(8,10)
pyplot.ylim(48,58);
"""
Explanation: Now, let's study the effects of different time steps on the convergence:
End of explanation
"""
|
datapolitan/lede_algorithms | class2_1/EDA_Review.ipynb | gpl-2.0 | df = pd.read_csv('data/ontime_reports_may_2015_ny.csv')
df.describe()
"""
Explanation: Loading data
Simple stuff. We're loading in a CSV here, and we'll run the describe function over it to get the lay of the land.
End of explanation
"""
df.sort('ARR_DELAY', ascending=False).head(1)
"""
Explanation: In journalism, we're primarily concerned with using data analysis for two purposes:
Finding needles in haystacks
And describing trends
We'll spend a little time looking at the first before we move on to the second.
Needles in haystacks
Let's start with the longest delays:
End of explanation
"""
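A note on API drift: `DataFrame.sort`, used below, was later removed from pandas in favor of `sort_values`. A minimal sketch of the same "top of the haystack" pattern on synthetic data (values and rows are illustrative, not the real dataset's):

```python
import pandas as pd

# Tiny synthetic stand-in for the on-time reports table
flights = pd.DataFrame({
    'ORIGIN': ['LGA', 'JFK', 'LGA', 'EWR'],
    'ARR_DELAY': [45, 310, -5, 120],
})

# Modern replacement for flights.sort('ARR_DELAY', ascending=False)
worst = flights.sort_values('ARR_DELAY', ascending=False).head(2)
top_delay = int(worst.iloc[0]['ARR_DELAY'])
```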
df.sort('ARR_DELAY', ascending=False).head(10)
"""
Explanation: One record isn't super useful, so we'll do 10:
End of explanation
"""
la_guardia_flights = df[df['ORIGIN'] == 'LGA']
la_guardia_flights.sort('ARR_DELAY', ascending=False).head(10)
"""
Explanation: If we want, we can keep drilling down. Maybe we should also limit our inquiry to, say, La Guardia.
End of explanation
"""
to_atl = df[df['DEST'] == 'ATL']
to_atl.boxplot('ACTUAL_ELAPSED_TIME', by='ORIGIN')
"""
Explanation: Huh, does LGA struggle more than usual to get its planes to Atlanta on time? Let's live dangerously and make a boxplot.
(Spoiler alert: JFK is marginally worse)
End of explanation
"""
df.groupby('CARRIER').median()['ARR_DELAY']
"""
Explanation: And so on.
Of course data journalists are also in the business of finding trends, so let's do some of that.
Describing trends
Being good, accountability-minded reporters, one thing we might be interested in is each airline's on-time performance throughout our sample. Here's one way to check that:
End of explanation
"""
df.groupby('CARRIER').mean()['ARR_DELAY']
"""
Explanation: Huh. Looks like the median flight from most of these carriers tends to show up pretty early. How does that change when we look at the mean?
End of explanation
"""
df.groupby(['CARRIER', 'ORIGIN']).median()['ARR_DELAY']
"""
Explanation: A little less generous. We can spend some time debating which portrayal is more fair, but the large difference between the two is still worth noting.
We can, of course, also drill down by destination:
End of explanation
"""
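The gap between the median and mean summaries comes from skew: a few extreme delays pull the mean up while leaving the median untouched. A tiny synthetic illustration:

```python
import statistics

# Mostly-early arrivals plus one very long delay (synthetic values)
delays = [-5, -3, -2, 0, 1, 2, 240]
med = statistics.median(delays)   # robust to the outlier
avg = statistics.mean(delays)     # dragged up by the outlier
```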
df.boxplot('ARR_DELAY', by='CARRIER')
"""
Explanation: And if we want a more user-friendly display ...
End of explanation
"""
df.corr()
"""
Explanation: BONUS! Correlation
Up until now, we've spent a lot of time seeing how variables act in isolation -- mainly focusing on arrival delays. But sometimes we might also want to see how two variables interact. That's where correlation comes into play.
For example, let's test one of my personal suspicions that longer flights (measured in distance) tend to experience longer delays.
End of explanation
"""
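`df.corr()` computes pairwise Pearson correlations; the statistic itself is simple enough to sketch from scratch on synthetic data (the distance/delay relationship below is an assumption for illustration, not taken from the flights table):

```python
import random

random.seed(0)
n = 200
distance = [random.uniform(100, 2500) for _ in range(n)]
# Delays loosely increasing with distance, plus noise (synthetic assumption)
delay = [0.01 * d + random.gauss(0, 5) for d in distance]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from the definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(distance, delay)
```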
import matplotlib.pyplot as plt
plt.matshow(df.corr())
"""
Explanation: And now we'll make a crude visualization, just to show off:
End of explanation
"""
|
vzg100/Post-Translational-Modification-Prediction | .ipynb_checkpoints/Phosphorylation Sequence Tests -MLP -dbptm+ELM -EnzymeBenchmarks-VectorAvr.-checkpoint.ipynb | mit | from pred import Predictor
from pred import sequence_vector
from pred import chemical_vector
"""
Explanation: Template for test
End of explanation
"""
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
benchmarks = ["Data/Benchmarks/phos_CDK1.csv", "Data/Benchmarks/phos_CK2.csv", "Data/Benchmarks/phos_MAPK1.csv", "Data/Benchmarks/phos_PKA.csv", "Data/Benchmarks/phos_PKC.csv"]
for j in benchmarks:
for i in par:
print("y", i, " ", j)
y = Predictor()
y.load_data(file="Data/Training/clean_s_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=0)
y.supervised_training("mlp_adam")
y.benchmark(j, "S")
del y
print("x", i, " ", j)
x = Predictor()
x.load_data(file="Data/Training/clean_s_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=1)
x.supervised_training("mlp_adam")
x.benchmark(j, "S")
del x
"""
Explanation: Controlling for Random Negative vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.
Included is N Phosphorylation however no benchmarks are available, yet.
Training data is from phospho.elm and benchmarks are from dbptm.
End of explanation
"""
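Of the imbalance strategies listed below, random undersampling is the simplest to illustrate. The `Predictor` pipeline presumably delegates to library implementations; this stdlib sketch just shows the idea:

```python
import random

def random_under_sample(X, y, seed=0):
    """Drop majority-class samples until both classes have equal counts."""
    rng = random.Random(seed)
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    keep = minority + rng.sample(majority, len(minority))
    rng.shuffle(keep)
    return [X[i] for i in keep], [y[i] for i in keep]

X = [[i] for i in range(100)]
y = [1] * 10 + [0] * 90          # heavily imbalanced labels
Xb, yb = random_under_sample(X, y)
```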
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
benchmarks = ["Data/Benchmarks/phos_CDK1.csv", "Data/Benchmarks/phos_CK2.csv", "Data/Benchmarks/phos_MAPK1.csv", "Data/Benchmarks/phos_PKA.csv", "Data/Benchmarks/phos_PKC.csv"]
for j in benchmarks:
for i in par:
try:
print("y", i, " ", j)
y = Predictor()
y.load_data(file="Data/Training/clean_Y_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=0)
y.supervised_training("bagging")
y.benchmark(j, "Y")
del y
print("x", i, " ", j)
x = Predictor()
x.load_data(file="Data/Training/clean_Y_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=1)
x.supervised_training("bagging")
x.benchmark(j, "Y")
del x
except:
print("Benchmark not relevant")
"""
Explanation: Y Phosphorylation
End of explanation
"""
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
benchmarks = ["Data/Benchmarks/phos_CDK1.csv", "Data/Benchmarks/phos_CK2.csv", "Data/Benchmarks/phos_MAPK1.csv", "Data/Benchmarks/phos_PKA.csv", "Data/Benchmarks/phos_PKC.csv"]
for j in benchmarks:
for i in par:
print("y", i, " ", j)
y = Predictor()
y.load_data(file="Data/Training/clean_t_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=0)
y.supervised_training("mlp_adam")
y.benchmark(j, "T")
del y
print("x", i, " ", j)
x = Predictor()
x.load_data(file="Data/Training/clean_t_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=1)
x.supervised_training("mlp_adam")
x.benchmark(j, "T")
del x
"""
Explanation: T Phosphorylation
End of explanation
"""
|
bashtage/statsmodels | examples/notebooks/statespace_local_linear_trend.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import pandas as pd
from scipy.stats import norm
import statsmodels.api as sm
import matplotlib.pyplot as plt
"""
Explanation: State space modeling: Local Linear Trends
This notebook describes how to extend the statsmodels statespace classes to create and estimate a custom model. Here we develop a local linear trend model.
The Local Linear Trend model has the form (see Durbin and Koopman 2012, Chapter 3.2 for all notation and details):
$$
\begin{align}
y_t & = \mu_t + \varepsilon_t \qquad & \varepsilon_t \sim
N(0, \sigma_\varepsilon^2) \\
\mu_{t+1} & = \mu_t + \nu_t + \xi_t & \xi_t \sim N(0, \sigma_\xi^2) \\
\nu_{t+1} & = \nu_t + \zeta_t & \zeta_t \sim N(0, \sigma_\zeta^2)
\end{align}
$$
It is easy to see that this can be cast into state space form as:
$$
\begin{align}
y_t & = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} \mu_t \\ \nu_t \end{pmatrix} + \varepsilon_t \\
\begin{pmatrix} \mu_{t+1} \\ \nu_{t+1} \end{pmatrix} & = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \begin{pmatrix} \mu_t \\ \nu_t \end{pmatrix} + \begin{pmatrix} \xi_t \\ \zeta_t \end{pmatrix}
\end{align}
$$
Notice that much of the state space representation is composed of known values; in fact the only parts in which parameters to be estimated appear are in the variance / covariance matrices:
$$
\begin{align}
H_t & = \begin{bmatrix} \sigma_\varepsilon^2 \end{bmatrix} \\
Q_t & = \begin{bmatrix} \sigma_\xi^2 & 0 \\ 0 & \sigma_\zeta^2 \end{bmatrix}
\end{align}
$$
End of explanation
"""
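Before turning to the statsmodels machinery, it can be instructive to simulate the three equations above directly. The variance values below are arbitrary choices for illustration:

```python
import numpy as np

def simulate_llt(n, sigma_eps=0.5, sigma_xi=0.1, sigma_zeta=0.05, seed=42):
    """Simulate y_t = mu_t + eps_t; mu_{t+1} = mu_t + nu_t + xi_t; nu_{t+1} = nu_t + zeta_t."""
    rng = np.random.default_rng(seed)
    mu, nu = 0.0, 0.0
    y = np.empty(n)
    for t in range(n):
        y[t] = mu + rng.normal(0, sigma_eps)
        mu, nu = mu + nu + rng.normal(0, sigma_xi), nu + rng.normal(0, sigma_zeta)
    return y

y_sim = simulate_llt(200)
```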
"""
Univariate Local Linear Trend Model
"""
class LocalLinearTrend(sm.tsa.statespace.MLEModel):
def __init__(self, endog):
# Model order
k_states = k_posdef = 2
# Initialize the statespace
super(LocalLinearTrend, self).__init__(
endog, k_states=k_states, k_posdef=k_posdef,
initialization='approximate_diffuse',
loglikelihood_burn=k_states
)
# Initialize the matrices
self.ssm['design'] = np.array([1, 0])
self.ssm['transition'] = np.array([[1, 1],
[0, 1]])
self.ssm['selection'] = np.eye(k_states)
# Cache some indices
self._state_cov_idx = ('state_cov',) + np.diag_indices(k_posdef)
@property
def param_names(self):
return ['sigma2.measurement', 'sigma2.level', 'sigma2.trend']
@property
def start_params(self):
return [np.std(self.endog)]*3
def transform_params(self, unconstrained):
return unconstrained**2
def untransform_params(self, constrained):
return constrained**0.5
def update(self, params, *args, **kwargs):
params = super(LocalLinearTrend, self).update(params, *args, **kwargs)
# Observation covariance
self.ssm['obs_cov',0,0] = params[0]
# State covariance
self.ssm[self._state_cov_idx] = params[1:]
"""
Explanation: To take advantage of the existing infrastructure, including Kalman filtering and maximum likelihood estimation, we create a new class which extends from statsmodels.tsa.statespace.MLEModel. There are a number of things that must be specified:
k_states, k_posdef: These two parameters must be provided to the base classes in initialization. They inform the statespace model about the size of, respectively, the state vector, above $\begin{pmatrix} \mu_t & \nu_t \end{pmatrix}'$, and the state error vector, above $\begin{pmatrix} \xi_t & \zeta_t \end{pmatrix}'$. Note that the dimension of the endogenous vector does not have to be specified, since it can be inferred from the endog array.
update: The method update, with argument params, must be specified (it is used when fit() is called to calculate the MLE). It takes the parameters and fills them into the appropriate state space matrices. For example, below, the params vector contains variance parameters $\begin{pmatrix} \sigma_\varepsilon^2 & \sigma_\xi^2 & \sigma_\zeta^2\end{pmatrix}$, and the update method must place them in the observation and state covariance matrices. More generally, the parameter vector might be mapped into many different places in all of the statespace matrices.
statespace matrices: by default, all state space matrices (obs_intercept, design, obs_cov, state_intercept, transition, selection, state_cov) are set to zeros. Values that are fixed (like the ones in the design and transition matrices here) can be set in initialization, whereas values that vary with the parameters should be set in the update method. Note that it is easy to forget to set the selection matrix, which is often just the identity matrix (as it is here), but not setting it will lead to a very different model (one where there is not a stochastic component to the transition equation).
start params: start parameters must be set, even if it is just a vector of zeros, although often good start parameters can be found from the data. Maximum likelihood estimation by gradient methods (as employed here) can be sensitive to the starting parameters, so it is important to select good ones if possible. Here it does not matter too much (although as variances, they shouldn't be set to zero).
initialization: in addition to defined state space matrices, all state space models must be initialized with the mean and variance for the initial distribution of the state vector. If the distribution is known, initialize_known(initial_state, initial_state_cov) can be called, or if the model is stationary (e.g. an ARMA model), initialize_stationary can be used. Otherwise, initialize_approximate_diffuse is a reasonable generic initialization (exact diffuse initialization is not yet available). Since the local linear trend model is not stationary (it is composed of random walks) and since the distribution is not generally known, we use initialize_approximate_diffuse below.
The above are the minimum necessary for a successful model. There are also a number of things that do not have to be set, but which may be helpful or important for some applications:
transform / untransform: when fit is called, the optimizer in the background will use gradient methods to select the parameters that maximize the likelihood function. By default it uses unbounded optimization, which means that it may select any parameter value. In many cases, that is not the desired behavior; variances, for example, cannot be negative. To get around this, the transform method takes the unconstrained vector of parameters provided by the optimizer and returns a constrained vector of parameters used in likelihood evaluation. untransform provides the reverse operation.
param_names: this internal method can be used to set names for the estimated parameters so that e.g. the summary provides meaningful names. If not present, parameters are named param0, param1, etc.
End of explanation
"""
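The transform/untransform pair above keeps the variances nonnegative by squaring the unconstrained optimizer values. Isolated from the model class, the round-trip behaves like this (a minimal sketch):

```python
import numpy as np

def transform_params(unconstrained):
    # optimizer space -> model space (variances must be >= 0)
    return np.asarray(unconstrained) ** 2

def untransform_params(constrained):
    # model space -> optimizer space
    return np.asarray(constrained) ** 0.5

u = np.array([0.5, 1.3, 2.0])
roundtrip = untransform_params(transform_params(u))
nonneg = transform_params([-3.0, 2.0])   # negative inputs still map to valid variances
```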
import requests
from io import BytesIO
from zipfile import ZipFile
# Download the dataset
ck = requests.get('http://staff.feweb.vu.nl/koopman/projects/ckbook/OxCodeAll.zip').content
zipped = ZipFile(BytesIO(ck))
df = pd.read_table(
BytesIO(zipped.read('OxCodeIntroStateSpaceBook/Chapter_2/NorwayFinland.txt')),
skiprows=1, header=None, sep='\s+', engine='python',
names=['date','nf', 'ff']
)
"""
Explanation: Using this simple model, we can estimate the parameters from a local linear trend model. The following example is from Commandeur and Koopman (2007), section 3.4., modeling motor vehicle fatalities in Finland.
End of explanation
"""
# Load Dataset
df.index = pd.date_range(start='%d-01-01' % df.date[0], end='%d-01-01' % df.iloc[-1, 0], freq='AS')
# Log transform
df['lff'] = np.log(df['ff'])
# Setup the model
mod = LocalLinearTrend(df['lff'])
# Fit it using MLE (recall that we are fitting the three variance parameters)
res = mod.fit(disp=False)
print(res.summary())
"""
Explanation: Since we defined the local linear trend model as extending from MLEModel, the fit() method is immediately available, just as in other statsmodels maximum likelihood classes. Similarly, the returned results class supports many of the same post-estimation results, like the summary method.
End of explanation
"""
# Perform prediction and forecasting
predict = res.get_prediction()
forecast = res.get_forecast('2014')
fig, ax = plt.subplots(figsize=(10,4))
# Plot the results
df['lff'].plot(ax=ax, style='k.', label='Observations')
predict.predicted_mean.plot(ax=ax, label='One-step-ahead Prediction')
predict_ci = predict.conf_int(alpha=0.05)
predict_index = np.arange(len(predict_ci))
ax.fill_between(predict_index[2:], predict_ci.iloc[2:, 0], predict_ci.iloc[2:, 1], alpha=0.1)
forecast.predicted_mean.plot(ax=ax, style='r', label='Forecast')
forecast_ci = forecast.conf_int()
forecast_index = np.arange(len(predict_ci), len(predict_ci) + len(forecast_ci))
ax.fill_between(forecast_index, forecast_ci.iloc[:, 0], forecast_ci.iloc[:, 1], alpha=0.1)
# Cleanup the image
ax.set_ylim((4, 8));
legend = ax.legend(loc='lower left');
"""
Explanation: Finally, we can do post-estimation prediction and forecasting. Notice that the end period can be specified as a date.
End of explanation
"""
|
deculler/MachineLearningTables | Chapter3-1.ipynb | bsd-2-clause | # HIDDEN
# For Tables reference see http://data8.org/datascience/tables.html
# This useful nonsense should just go at the top of your notebook.
from datascience import *
%matplotlib inline
import matplotlib.pyplot as plots
import numpy as np
from sklearn import linear_model
plots.style.use('fivethirtyeight')
plots.rc('lines', linewidth=1, color='r')
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
# datascience version number of last run of this notebook
version.__version__
import sys
sys.path.append("..")
from ml_table import ML_Table
import locale
locale.setlocale( locale.LC_ALL, 'en_US.UTF-8' )
"""
Explanation: Concepts and data from "An Introduction to Statistical Learning, with applications in R" (Springer, 2013) with permission from the authors: G. James, D. Witten, T. Hastie and R. Tibshirani " available at www.StatLearning.com.
For Tables reference see http://data8.org/datascience/tables.html
http://jeffskinnerbox.me/notebooks/matplotlib-2d-and-3d-plotting-in-ipython.html
End of explanation
"""
# Getting the data
advertising = ML_Table.read_table("./data/Advertising.csv")
advertising.relabel(0, "id")
"""
Explanation: Acquiring and seeing trends in multidimensional data
End of explanation
"""
ax = advertising.plot_fit_1d('Sales', 'TV', advertising.regression_1d('Sales', 'TV'))
_ = ax.set_xlim(-20,300)
lr = advertising.linear_regression('Sales', 'TV')
advertising.plot_fit_1d('Sales', 'TV', lr.model)
advertising.lm_summary_1d('Sales', 'TV')
"""
Explanation: FIGURE 3.1. For the Advertising data, the least squares fit for the regression of sales onto TV is shown. The fit is found by minimizing the sum of squared errors.
Each line segment represents an error, and the fit makes a compromise by averaging their squares. In this case a linear fit captures the essence of the relationship, although it is somewhat deficient in the left of the plot.
End of explanation
"""
# Get the actual parameters that are captured within the model
advertising.regression_1d_params('Sales', 'TV')
# Regression yields a model. The computational representation of a model is a function
# That can be applied to an input, 'TV' to get an estimate of an output, 'Sales'
advertise_model_tv = advertising.regression_1d('Sales', 'TV')
# Sales with no TV advertising
advertise_model_tv(0)
# Sales with 100 units TV advertising
advertise_model_tv(100)
# Here's the output of the model applied to the input data
advertise_model_tv(advertising['TV'])
"""
Explanation: Let $\hat{y}_i = \hat{\beta_0} + \hat{\beta_1}x_i$ be the prediction for $Y$ based on the $i$-th value of $X$.
End of explanation
"""
residual = advertising['Sales'] - advertise_model_tv(advertising['TV'])
residual
"""
Explanation: The residual is the difference between the observed output and the model output
End of explanation
"""
# Residual Sum of Squares
RSS = sum(residual*residual)
RSS
# This is common enough that we have it provided as a method
advertising.RSS_model('Sales', advertising.regression_1d('Sales', 'TV'), 'TV')
# And we should move toward a general regression framework
advertising.RSS_model('Sales', advertising.linear_regression('Sales', 'TV').model, 'TV')
"""
Explanation: The residual is not very useful directly, because the fit balances over-estimates and under-estimates to produce the best overall estimate with the least error. We can understand the overall goodness of fit by the residual sum of squares - RSS.
End of explanation
"""
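What `RSS_model` presumably computes under the hood is just a sum of squared residuals; a minimal numpy sketch:

```python
import numpy as np

def rss(y, y_hat):
    """Residual sum of squares between observations y and predictions y_hat."""
    resid = np.asarray(y, dtype=float) - np.asarray(y_hat, dtype=float)
    return float(np.sum(resid ** 2))

y = np.array([1.0, 2.0, 3.0])
perfect = rss(y, y)                          # a perfect fit has zero RSS
off_by = rss(y, np.array([1.0, 2.0, 5.0]))   # one residual of 2 -> RSS of 4
```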
advertising_models = Table().with_column('Input', ['TV', 'Radio', 'Newspaper'])
advertising_models['Model'] = advertising_models.apply(lambda i: advertising.regression_1d('Sales', i), 'Input')
advertising_models['RSS'] = advertising_models.apply(lambda i, m: advertising.RSS_model('Sales', m, i), ['Input', 'Model'])
advertising_models
advertising_models = Table().with_column('Input', ['TV', 'Radio', 'Newspaper'])
advertising_models['Model'] = advertising_models.apply(lambda i: advertising.linear_regression('Sales', i).model, 'Input')
advertising_models['RSS'] = advertising_models.apply(lambda i, m: advertising.RSS_model('Sales', m, i), ['Input', 'Model'])
advertising_models
"""
Explanation: With this, we can build an independent model for each of the inputs and look at the associated RSS.
End of explanation
"""
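The same per-input comparison can be reproduced with plain numpy least squares on synthetic data (column names and coefficients here are illustrative, not the Advertising dataset's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
tv = rng.uniform(0, 300, n)
news = rng.uniform(0, 100, n)
sales = 7.0 + 0.05 * tv + rng.normal(0, 1.0, n)   # sales driven by tv only (assumption)

def fit_rss(x, y):
    """Fit a degree-1 least-squares line and return its RSS."""
    b1, b0 = np.polyfit(x, y, 1)
    resid = y - (b0 + b1 * x)
    return float(np.sum(resid ** 2))

rss_tv = fit_rss(tv, sales)
rss_news = fit_rss(news, sales)   # the irrelevant input leaves most variance unexplained
```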
for mode, mdl in zip(advertising_models['Input'], advertising_models['Model']) :
advertising.plot_fit_1d('Sales', mode, mdl)
# RSS at arbitrary point
res = lambda b0, b1: advertising.RSS('Sales', b0 + b1*advertising['TV'])
res(7.0325935491276965, 0.047536640433019729)
"""
Explanation: We can look at how well each of these inputs predict the output by visualizing the residuals.
The magnitude of the RSS gives a sense of the error.
End of explanation
"""
ax = advertising.RSS_contour('Sales', 'TV', sensitivity=0.2)
ax = advertising.RSS_wireframe('Sales', 'TV', sensitivity=0.2)
# The minima point
advertising.linear_regression('Sales', 'TV').params
# Some other points along the trough
points = [(0.042, 8.0), (0.044, 7.6), (0.050, 6.6), (0.054, 6.0)]
ax = advertising.RSS_contour('Sales', 'TV', sensitivity=0.2)
ax.plot([b0 for b1, b0 in points], [b1 for b1, b0 in points], 'ro')
[advertising.RSS('Sales', b0 + b1*advertising['TV']) for b1,b0 in points]
# Models as lines corresponding to points along the near-minimal vectors.
ax = advertising.plot_fit_1d('Sales', 'TV', advertising.linear_regression('Sales', 'TV').model)
for b1, b0 in points:
fit = lambda x: b0 + b1*x
ax.plot([0, 300], [fit(0), fit(300)])
_ = ax.set_xlim(-20,300)
"""
Explanation: Figure 3.2 - RSS and least squares regression
Regression using least squares finds parameters $b0$ and $b1$ that minimize the RSS. The least squares regression line estimates the population regression line.
Figure 3.2 shows (what is claimed to be) the RSS contour and surface around the regression point. The computed analog of Figure 3.2 is shown below, with the roles of $b0$ and $b1$ reversed to match the tuple returned from the regression method. The plot in the text is incorrect in some important ways. The RSS is not radially symmetric around ($b0$, $b1$). Lines with a larger intercept and smaller slope or vice versa are very close to the minimum, i.e., the surface is nearly flat along the upper-left to lower-right diagonal, especially where the fit is not very good, since the output depends on more than this one input.
Just because a process minimizes the error does not mean that the minimum is sharply defined or that the error surface has no structure. Below we go a bit beyond the text to illustrate this more fully.
End of explanation
"""
def model (x):
return 3*x + 2
def population(n, noise_scale = 1):
sample = ML_Table.runiform('x', n, -2, 2)
noise = ML_Table.rnorm('e', n, sd=noise_scale)
sample['Y'] = sample.apply(model, 'x') + noise['e']
return sample
data = population(100, 2)
data.scatter('x')
ax = data.plot_fit('Y', data.linear_regression('Y').model)
data.linear_regression('Y').params
# A random sample of the population
sample = data.sample(10)
sample.plot_fit('Y', sample.linear_regression('Y').model)
nsamples = 5
ax = data.plot_fit('Y', data.linear_regression('Y').model, linewidth=3)
for s in range(nsamples):
fit = data.sample(10).linear_regression('Y').model
ax.plot([-2, 2], [fit(-2), fit(2)], linewidth=1)
"""
Explanation: 3.1.2 Assessing the accuracy of coefficient estimates
The particular minimum that is found through least squares regression is affected by the particular sample of the population that is observed and utilized for estimating the coefficients of the underlying population model.
To see this we can generate a synthetic population based on an ideal model plus noise. Here we can peek at the entire population (which in most settings cannot be observed). We then take samples of this population and fit a regression to those. We can then see how these regression lines differ from the population regression line, which is not exactly the ideal model.
End of explanation
"""
adv_sigma = advertising.RSE_model('Sales', advertising.linear_regression('Sales', 'TV').model, 'TV')
adv_sigma
b0, b1 = advertising.linear_regression('Sales', 'TV').params
b0, b1
advertising.RSS_model('Sales', advertising.linear_regression('Sales', 'TV').model, 'TV')
SE_b0, SE_b1 = advertising.SE_1d_params('Sales', 'TV')
SE_b0, SE_b1
# b0 95% confidence interval
(b0-2*SE_b0, b0+2*SE_b0)
# b1 95% confidence interval
(b1-2*SE_b1, b1+2*SE_b1)
# t-statistic of the intercept
b0/SE_b0
# t-statistic of the slope
b1/SE_b1
# Similar to summary of a linear model in R
advertising.lm_summary_1d('Sales', 'TV')
# We can just barely reject the null hypothesis for Newspaper
# advertising effecting sales
advertising.lm_summary_1d('Sales', 'Newspaper')
"""
Explanation: "The property of unbiasedness holds for the least squares coefficient estimates given by (3.4) as well: if we estimate β0 and β1 on the basis of a particular data set, then our estimates won’t be exactly equal to β0 and β1. But if we could average the estimates obtained over a huge number of data sets, then the average of these estimates would be spot on!"
To compute the standard errors associated with $β_0$ and $β_1$, we use the following formulas.
The slope, $b_1$:
$SE(\hat{β_1})^2 = \frac{σ^2}{\sum_{i=1}^n (x_i - \bar{x})^2}$
The intercept, $b_0$:
$SE(\hat{β_0})^2 = σ^2 [\frac{1}{n} + \frac{\bar{x}^2}{\sum_{i=1}^n (x_i - \bar{x})^2} ] $,
where $σ^2 = Var(ε)$.
In general, $σ^2$ is not known, but can be estimated from the data.
The estimate of σ is known as the residual standard error, and is given by the formula
$RSE = \sqrt{RSS/(n − 2)}$.
End of explanation
"""
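The two SE formulas above can be cross-checked against the general matrix form $\mathrm{Var}(\hat{\beta}) = \sigma^2 (X^T X)^{-1}$ on synthetic data; this sketch is independent of ML_Table:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 3.0 * x + rng.normal(0, 1.0, n)   # known line plus noise (synthetic)

b1, b0 = np.polyfit(x, y, 1)
resid = y - (b0 + b1 * x)
sigma2 = np.sum(resid ** 2) / (n - 2)       # RSE^2

sxx = np.sum((x - x.mean()) ** 2)
se_b1 = np.sqrt(sigma2 / sxx)
se_b0 = np.sqrt(sigma2 * (1.0 / n + x.mean() ** 2 / sxx))

# Cross-check: the same quantities from Var(beta) = sigma2 * (X'X)^{-1}
X = np.column_stack([np.ones(n), x])
cov = sigma2 * np.linalg.inv(X.T @ X)
```

Both routes are algebraically identical for simple regression, so the diagonal of `cov` must reproduce the closed-form standard errors.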
adver_model = advertising.regression_1d('Sales', 'TV')
advertising.RSE_model('Sales', adver_model, 'TV')
"""
Explanation: 3.1.3 Assessing the Acurracy of the Model
Once we have rejected the null hypothesis in favor of the alternative hypothesis, it is natural to want to quantify the extent to which the model fits the data. The quality of a linear regression fit is typically assessed using two related quantities: the residual standard error (RSE) and the $R^2$ statistic.
The RSE provides an absolute measure of lack of fit of the model to the data.
End of explanation
"""
advertising.R2_model('Sales', adver_model, 'TV')
# the other models of advertising suggest that there is more going on
advertising.R2_model('Sales', advertising.regression_1d('Sales', 'Radio'), 'Radio')
advertising.R2_model('Sales', advertising.regression_1d('Sales', 'Newspaper'), 'Newspaper')
"""
Explanation: The $R^2$ statistic provides an alternative measure of fit. It takes the form of a proportion—the proportion of variance explained—and so it always takes on a value between 0 and 1, and is independent of the scale of Y.
To calculate $R^2$, we use the formula
$R^2 = \frac{TSS - RSS}{TSS} = 1 - \frac{RSS}{TSS}$
where $TSS = \sum (y_i - \bar{y})^2$ is the total sum of squares.
End of explanation
"""
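The $R^2$ formula is easy to verify directly with numpy; the two edge cases below (a perfect fit and predicting the mean) bracket its range:

```python
import numpy as np

def r_squared(y, y_hat):
    """R^2 = 1 - RSS/TSS for observations y and predictions y_hat."""
    y = np.asarray(y, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)
    rss = np.sum((y - y_hat) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    return float(1.0 - rss / tss)

y = np.array([1.0, 2.0, 3.0, 4.0])
r2_perfect = r_squared(y, y)                       # exact fit -> 1
r2_mean = r_squared(y, np.full(4, y.mean()))       # predicting the mean -> 0
```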
|
epicf/ef | examples/ribbon_beam_in_magnetic_field_contour/Ribbon Beam Contour in Uniform Magnetic Field.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
from sympy import *
init_printing()
Ex, Ey, Ez = symbols("E_x, E_y, E_z")
Bx, By, Bz, B = symbols("B_x, B_y, B_z, B")
x, y, z = symbols("x, y, z")
vx, vy, vz, v = symbols("v_x, v_y, v_z, v")
t = symbols("t")
q, m = symbols("q, m")
c, eps0 = symbols("c, epsilon_0")
"""
Explanation: Trajectory equations:
End of explanation
"""
eq_x = Eq( diff(x(t), t, 2), q / m * Ex + q / c / m * (vy * Bz - vz * By) )
eq_y = Eq( diff(y(t), t, 2), q / m * Ey + q / c / m * (-vx * Bz + vz * Bx) )
eq_z = Eq( diff(z(t), t, 2), q / m * Ez + q / c / m * (vx * By - vy * Bx) )
display( eq_x, eq_y, eq_z )
"""
Explanation: The equation of motion:
$$
\begin{gather}
m \frac{d^2 \vec{r} }{dt^2} =
q \vec{E} + \frac{q}{c} [ \vec{v} \vec{B} ]
\end{gather}
$$
In Cortesian coordinates:
End of explanation
"""
uni_mgn_subs = [ (Bx, 0), (By, 0), (Bz, B) ]
eq_x = eq_x.subs(uni_mgn_subs)
eq_y = eq_y.subs(uni_mgn_subs)
eq_z = eq_z.subs(uni_mgn_subs)
display( eq_x, eq_y, eq_z )
"""
Explanation: For the case of a uniform magnetic field
along the $z$-axis:
$$ \vec{B} = B_z = B, \quad B_x = 0, \quad B_y = 0 $$
End of explanation
"""
zero_EyEz_subs = [ (Ey, 0), (Ez, 0) ]
eq_x = eq_x.subs(zero_EyEz_subs)
eq_y = eq_y.subs(zero_EyEz_subs)
eq_z = eq_z.subs(zero_EyEz_subs)
display( eq_x, eq_y, eq_z )
"""
Explanation: Assuming $E_z = 0$ and $E_y = 0$:
End of explanation
"""
z_eq = dsolve( eq_z, z(t) )
vz_eq = Eq( z_eq.lhs.diff(t), z_eq.rhs.diff(t) )
display( z_eq, vz_eq )
"""
Explanation: Motion is uniform along the $z$-axis:
End of explanation
"""
z_0 = 0
v_0 = v
c1_c2_system = []
initial_cond_subs = [(t, 0), (z(0), z_0), (diff(z(t),t).subs(t,0), v_0) ]
c1_c2_system.append( z_eq.subs( initial_cond_subs ) )
c1_c2_system.append( vz_eq.subs( initial_cond_subs ) )
c1, c2 = symbols("C1, C2")
c1_c2 = solve( c1_c2_system, [c1, c2] )
c1_c2
"""
Explanation: The constants of integration can be found from the initial conditions $z(0) = 0$ and $v_z(0) = v$:
End of explanation
"""
z_sol = z_eq.subs( c1_c2 )
vz_sol = vz_eq.subs( c1_c2 )
display( z_sol, vz_sol )
"""
Explanation: So that
End of explanation
"""
v_as_diff = [ (vx, diff(x(t),t)), (vy, diff(y(t),t)), (vz, diff(z_sol.lhs,t)) ]
eq_y = eq_y.subs( v_as_diff )
eq_y = Eq( integrate( eq_y.lhs, (t, 0, t) ), integrate( eq_y.rhs, (t, 0, t) ) )
eq_y
"""
Explanation: Now, the equation for $y$ can be integrated:
End of explanation
"""
x_0 = Symbol('x_0')
vy_0 = 0
initial_cond_subs = [(x(0), x_0), (diff(y(t),t).subs(t,0), vy_0) ]
vy_sol = eq_y.subs( initial_cond_subs )
vy_sol
"""
Explanation: For initial conditions $x(0) = x_0, y'(0) = 0$:
End of explanation
"""
eq_x = eq_x.subs( vy, vy_sol.rhs )
eq_x = Eq( eq_x.lhs, collect( expand( eq_x.rhs ), B *q / c / m ) )
eq_x
"""
Explanation: This equation can be substituted into the equation for the $x$-coordinate:
End of explanation
"""
I0 = symbols('I_0')
Ex_subs = [ (Ex, 2 * pi * I0 / v) ]
eq_x = eq_x.subs( Ex_subs )
eq_x
"""
Explanation: An expression for $E_x$ can be taken from the example on ribbon beam in free space $E_x = \dfrac{ 2 \pi I_0 }{v}$:
End of explanation
"""
a, b = symbols("a, b")
eq_a = Eq( a, eq_x.rhs.expand().coeff(x(t), 1) )
eq_b = Eq( b, eq_x.rhs.expand().coeff(x(t), 0) )
display( eq_a , eq_b )
"""
Explanation: This is an oscillator-type equation
$$
x'' = a x + b, \qquad a < 0,
$$
with $a$ and $b$ given by
End of explanation
"""
a, b, c = symbols("a, b, c")
osc_eqn = Eq( diff(x(t),t,2), - abs(a)*x(t) + b)
display( osc_eqn )
osc_eqn_sol = dsolve( osc_eqn )
osc_eqn_sol
"""
Explanation: Its solution is given by:
End of explanation
"""
x_0 = symbols( 'x_0' )
v_0 = 0
c1_c2_system = []
initial_cond_subs = [(t, 0), (x(0), x_0), (diff(x(t),t).subs(t,0), v_0) ]
c1_c2_system.append( osc_eqn_sol.subs( initial_cond_subs ) )
osc_eqn_sol_diff = Eq( osc_eqn_sol.lhs.diff(t), osc_eqn_sol.rhs.diff(t) )
c1_c2_system.append( osc_eqn_sol_diff.subs( initial_cond_subs ) )
c1, c2 = symbols("C1, C2")
c1_c2 = solve( c1_c2_system, [c1, c2] )
c1_c2
"""
Explanation: From initial conditions $x(0) = x_0, v_0 = 0$:
End of explanation
"""
x_sol = osc_eqn_sol.subs( c1_c2 )
x_sol
"""
Explanation: So that
End of explanation
"""
b_over_a = simplify( eq_b.rhs / abs( eq_a.rhs ).subs( abs( eq_a.rhs ), -eq_a.rhs ) )
Eq( b/abs(a), b_over_a )
"""
Explanation: Taking into account that
$$ \sqrt{|a|} = \omega_g = \frac{ q B }{mc } $$
where $\omega_g$ is the gyrofrequency, and since
End of explanation
"""
omega_g = symbols('omega_g')
eq_omega_g = Eq( omega_g, q * B / m / c )
A = symbols('A')
eq_A = Eq( A, b_over_a - x_0 )
subs_list = [ (b/abs(a), b_over_a), ( sqrt( abs(a) ), omega_g ), ( eq_A.rhs, eq_A.lhs) ]
x_sol = x_sol.subs( subs_list )
display( x_sol, eq_A, eq_omega_g )
"""
Explanation: It is possible to rewrite the solution as
End of explanation
"""
display( x_sol, z_sol )
"""
Explanation: From the laws of motion for $x(t)$ and $z(t)$
End of explanation
"""
t_from_z = solve( z_sol.subs(z(t),z), t )[0]
x_z_traj = Eq( x_sol.lhs.subs( t, z ), x_sol.rhs.subs( [(t, t_from_z)] ) )
display( x_z_traj, eq_A, eq_omega_g )
"""
Explanation: it is possible to obtain a trajectory equation:
End of explanation
"""
quantopian/research_public | notebooks/lectures/Introduction_to_Futures/questions/notebook.ipynb | apache-2.0
# Useful Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Exercises: Introduction to Futures Contracts
By Christopher van Hoecke, Maxwell Margenot, and Delaney Mackenzie
Lecture Link :
https://www.quantopian.com/lectures/introduction-to-futures
IMPORTANT NOTE:
This lecture corresponds to the Futures lecture, which is part of the Quantopian lecture series. This homework expects you to rely heavily on the code presented in the corresponding lecture. Please copy and paste regularly from that lecture when starting to work on the problems, as trying to do them from scratch will likely be too difficult.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
End of explanation
"""
bushels = 15000
spot_symbol = 'CORN'
futures_contract = symbols('CNU16')
spot_prices = get_pricing(spot_symbol, start_date = '2016-06-01', end_date = '2016-09-15', fields = 'price')
futures_prices = get_pricing(futures_contract, start_date = '2016-06-01',end_date='2016-09-15', fields='price')
# Sale date of corn
sale_date = '2016-09-14'
# Plotting
plt.plot(spot_prices);
plt.axvline(sale_date, linestyle='dashed', color='r', label='Sale Date')
plt.legend();
p = spot_prices.loc[sale_date]
spot_multiplier = bushels//6
market_profits =
print 'profits from market price: $', market_profits
## Your code goes here
futures_entry_date = '2016-06-01'
futures_profits =
print 'profits from future contract:', futures_profits, '$'
## Your code goes here
lost_profits =
print 'Profits the producer lost in a year:', int(lost_profits), '$'
"""
Explanation: Exercise 1: Futures Contract vs. Spot Markets
In 2016, a corn farmer decided to sell his corn at the spot price to a distributor.
Decide whether his decision to sell corn at the spot price was a wise one by comparing his profits from the market contract with his potential profits from the futures contract.
Use the price of CORN, an ETF, as the spot price for 6 bushels of corn when he goes to market.
You may also assume the following:
- The producer plans on selling 15,000 bushels of corn in September 2016
- The farmer would enter into a futures position on June 1st
- The spot price sale would take place on the same date that the futures contract expires
- No fees or payments are included in the futures sale or the market sale
(Note that the listed futures price of corn is for one bushel.)
End of explanation
"""
## Your code goes here
N = # Days to expiry of futures contract
cost_of_carry = # Cost of Carry
spot_price = pd.Series(np.ones(N), name = "Spot Price")
futures_price = pd.Series(np.ones(N), name = "Futures Price")
## Your code goes here
spot_price[0] = # Starting Spot Price
futures_price[0] = spot_price[0]*np.exp(cost_of_carry*N)
for n in range(1, N):
spot_price[n] = spot_price[n-1]*(1 + np.random.normal(0, 0.05))
futures_price[n] = spot_price[n]*np.exp(cost_of_carry*(N - n))
spot_price.plot()
futures_price.plot()
plt.legend()
plt.title('Contango')
plt.xlabel('Time')
plt.ylabel('Price');
"""
Explanation: Exercise 2: Carrying Costs
a. Contango
Consider the same corn farmer from Exercise 1.
Calculate the theoretical futures price series as a function of time, given the following:
- The cost of carry is $0.01$
- The spot price of corn was originally 1000 dollars, and the price is driven by a normal distribution
- Maturity is achieved after 100 days
$$\text{Recall:} \quad F(t, T) = S(t)e^{c(T - t)}$$
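As a quick sanity check of the recalled formula before filling in the exercise code, here is a minimal sketch using the numbers assumed in this exercise (initial spot price \$1000, cost of carry 0.01, 100 days to maturity):

```python
import numpy as np

S0 = 1000.0  # initial spot price assumed in the exercise
c = 0.01     # cost of carry
N = 100      # days to maturity

F0 = S0 * np.exp(c * N)  # F(0, T) = S(0) * exp(c * T)
print(F0)  # ~ 2718.28
```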
End of explanation
"""
## Your code goes here
N = # Days to expiry of futures contract
cost_of_carry = # Cost of carry
spot_price = pd.Series(np.ones(N), name = "Spot Price")
futures_price = pd.Series(np.ones(N), name = "Futures Price")
## Your code goes here
spot_price[0] = # Starting Spot Price
futures_price[0] = spot_price[0]*np.exp(cost_of_carry*N)
for n in range(1, N):
spot_price[n] = spot_price[n-1]*(1 + np.random.normal(0, 0.05))
futures_price[n] = spot_price[n]*np.exp(cost_of_carry*(N - n))
spot_price.plot()
futures_price.plot()
plt.legend()
plt.title('Backwardation')
plt.xlabel('Time')
plt.ylabel('Price');
"""
Explanation: b. Backwardation
Consider the corn farmer again.
Calculate the futures price as a function of time, given the following:
The cost of carry is -0.01
The spot price of corn was originally \$1000, and that the price is driven by a normal distribution
Maturity is achieved after 100 days
End of explanation
"""
amkatrutsa/MIPT-Opt | Spring2021/acc_grad.ipynb | mit
import numpy as np
n = 100
# Random
# A = np.random.randn(n, n)
# A = A.T.dot(A)
# Clustered eigenvalues
A = np.diagflat([np.ones(n//4), 10 * np.ones(n//4), 100*np.ones(n//4), 1000* np.ones(n//4)])
U = np.random.rand(n, n)
Q, _ = np.linalg.qr(U)
A = Q.dot(A).dot(Q.T)
A = (A + A.T) * 0.5
print("A is normal matrix: ||AA* - A*A|| =", np.linalg.norm(A.dot(A.T) - A.T.dot(A)))
b = np.random.randn(n)
# Hilbert matrix
# A = np.array([[1.0 / (i+j - 1) for i in range(1, n+1)] for j in range(1, n+1)])
# b = np.ones(n)
f = lambda x: 0.5 * x.dot(A.dot(x)) - b.dot(x)
grad_f = lambda x: A.dot(x) - b
x0 = np.zeros(n)
"""
Explanation: Beyond gradient descent
Conjugate gradient and heavy-ball methods
A system of linear equations vs. an unconstrained minimization problem
Consider the problem
$$
\min_{x \in \mathbb{R}^n} \frac{1}{2}x^{\top}Ax - b^{\top}x,
$$
where $A \in \mathbb{S}^n_{++}$.
From the necessary optimality condition we have
$$
Ax^* = b
$$
We also denote $f'(x_k) = Ax_k - b = r_k$
How to solve the system $Ax = b$?
Direct methods are based on matrix factorizations:
Dense matrix $A$: for dimensions up to a few thousand
Sparse matrix $A$: for dimensions on the order of $10^4 - 10^5$
Iterative methods: work well in many cases, and are the only viable approach for problems of dimension $> 10^6$
Conjugate direction method
In gradient descent the descent directions are anti-gradients, but for functions with an ill-conditioned Hessian convergence is slow.
Idea: move along directions that guarantee convergence in $n$ steps.
Definition. A set of nonzero vectors ${p_0, \ldots, p_l}$ is called conjugate with respect to a matrix $A \in \mathbb{S}^n_{++}$ if
$$
p^{\top}_iAp_j = 0, \qquad i \neq j
$$
Proposition. For any $x_0 \in \mathbb{R}^n$, the sequence ${x_k}$ generated by the conjugate direction method converges to the solution of the system $Ax = b$ in at most $n$ steps.
```python
def ConjugateDirections(x0, A, b, p):
x = x0
r = A.dot(x) - b
for i in range(len(p)):
alpha = - (r.dot(p[i])) / (p[i].dot(A.dot(p[i])))
x = x + alpha * p[i]
r = A.dot(x) - b
return x
```
Examples of conjugate directions
The eigenvectors of the matrix $A$
For any set of $n$ vectors, one can run an analogue of Gram-Schmidt orthogonalization and obtain conjugate directions
Question: what is Gram-Schmidt orthogonalization? :)
Geometric interpretation (Mathematics Stack Exchange)
<center><img src="./cg.png" ></center>
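As a sketch of the Gram-Schmidt analogue mentioned above (an illustration, not part of the lecture code; the helper name `a_conjugate_gs` is our own):

```python
import numpy as np

def a_conjugate_gs(vectors, A):
    """Turn a set of linearly independent vectors into A-conjugate
    directions using the inner product <u, v>_A = u^T A v."""
    ps = []
    for v in vectors:
        p = v.astype(float)
        for q in ps:
            # subtract the A-projection of v onto the directions built so far
            p = p - (q @ (A @ v)) / (q @ (A @ q)) * q
        ps.append(p)
    return ps

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B.T @ B + 5 * np.eye(5)  # random SPD matrix
ps = a_conjugate_gs([rng.standard_normal(5) for _ in range(5)], A)
# all pairwise p_i^T A p_j with i != j should be near zero
print(max(abs(ps[i] @ (A @ ps[j])) for i in range(5) for j in range(i)))
```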
Conjugate gradient method
Idea: the new direction $p_k$ is sought in the form $p_k = -r_k + \beta_k p_{k-1}$, where $\beta_k$ is chosen so that $p_k$ and $p_{k-1}$ are conjugate:
$$
\beta_k = \dfrac{p^{\top}_{k-1}Ar_k}{p^{\top}_{k-1}Ap_{k-1}}
$$
Thus, to obtain the next conjugate direction $p_k$, one only needs to store the conjugate direction $p_{k-1}$ and the residual $r_k$ from the previous iteration.
Question: how do we find the step size $\alpha_k$?
Conjugacy of conjugate gradients
Theorem
Suppose that after $k$ iterations $x_k \neq x^*$. Then
$\langle r_k, r_i \rangle = 0, \; i = 1, \ldots k - 1$
$\mathtt{span}(r_0, \ldots, r_k) = \mathtt{span}(r_0, Ar_0, \ldots, A^kr_0)$
$\mathtt{span}(p_0, \ldots, p_k) = \mathtt{span}(r_0, Ar_0, \ldots, A^kr_0)$
$p_k^{\top}Ap_i = 0$, $i = 1,\ldots,k-1$
Convergence theorems
Theorem 1. If the matrix $A$ has only $r$ distinct eigenvalues, then the conjugate gradient method converges in $r$ iterations.
Theorem 2. The following convergence estimate holds
$$
\| x_{k} - x^* \|_A \leq 2\left( \dfrac{\sqrt{\kappa(A)} - 1}{\sqrt{\kappa(A)} + 1} \right)^k \|x_0 - x^*\|_A,
$$
where $\|x\|^2_A = x^{\top}Ax$ and $\kappa(A) = \frac{\lambda_1(A)}{\lambda_n(A)}$ is the condition number of $A$, with $\lambda_1(A) \geq \ldots \geq \lambda_n(A)$ the eigenvalues of $A$
Remark: compare this geometric rate with its analogue for gradient descent.
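To make the remark above concrete, here is a quick comparison of the per-iteration rates for $\kappa = 100$ (a sketch; the standard $(\kappa - 1)/(\kappa + 1)$ estimate is used for gradient descent, and the value of $\kappa$ is just an example):

```python
import numpy as np

kappa = 100.0
rate_gd = (kappa - 1) / (kappa + 1)                     # gradient descent
rate_cg = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)   # conjugate gradient
print(rate_gd, rate_cg)  # 0.980... vs 0.818...
```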
Interpretations of the conjugate gradient method
Gradient descent in the space $y = Sx$, where $S = [p_0, \ldots, p_n]$, in which the matrix $A$ becomes diagonal (or the identity if the conjugate directions are orthonormal)
Search for the optimal solution in the Krylov subspace $\mathcal{K}_k(A) = {b, Ab, A^2b, \ldots A^{k-1}b}$
$$
x_k = \arg\min_{x \in \mathcal{K}_k} f(x)
$$
However, the natural basis of the Krylov subspace is non-orthogonal and, moreover, ill-conditioned.
Exercise. Check numerically how quickly the condition number of the matrix of vectors ${b, Ab, \ldots }$ grows
Therefore this basis has to be orthogonalized, which is exactly what the conjugate gradient method does
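A minimal numerical sketch of the exercise above (the particular matrix and right-hand side are our own choice, not from the lecture):

```python
import numpy as np

n = 20
A = np.diag(np.arange(1.0, n + 1))  # SPD matrix with eigenvalues 1..20
b = np.ones(n)

# Krylov basis matrix [b, Ab, A^2 b, ...]
K = np.stack([np.linalg.matrix_power(A, j) @ b for j in range(n)], axis=1)
for k in (2, 5, 10):
    # condition number of the first k basis vectors grows very quickly
    print(k, np.linalg.cond(K[:, :k]))
```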
Key property
$$
A^{-1}b \in \mathcal{K}_n(A)
$$
Proof
Cayley-Hamilton theorem: $p(A) = 0$, where $p(\lambda) = \det(A - \lambda I)$
$p(A)b = A^nb + a_1A^{n-1}b + \ldots + a_{n-1}Ab + a_n b = 0$
$A^{-1}p(A)b = A^{n-1}b + a_1A^{n-2}b + \ldots + a_{n-1}b + a_nA^{-1}b = 0$
$A^{-1}b = -\frac{1}{a_n}(A^{n-1}b + a_1A^{n-2}b + \ldots + a_{n-1}b)$
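This property can also be checked numerically: the exact solution $A^{-1}b$ is a linear combination of the full Krylov basis vectors (a sketch on a small matrix chosen for the demo):

```python
import numpy as np

A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])  # small SPD matrix (chosen for the demo)
b = np.ones(5)
n = A.shape[0]

x_star = np.linalg.solve(A, b)
# full Krylov basis [b, Ab, ..., A^{n-1} b]
K = np.stack([np.linalg.matrix_power(A, j) @ b for j in range(n)], axis=1)
coef = np.linalg.solve(K, x_star)  # K is square and invertible here
# A^{-1} b is exactly a combination of the Krylov basis vectors
print(np.linalg.norm(K @ coef - x_star))  # near zero
```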
Improved version of the conjugate gradient method
In practice, the following formulas are used for the step $\alpha_k$ and the coefficient $\beta_{k}$:
$$
\alpha_k = \dfrac{r^{\top}_k r_k}{p^{\top}_{k}Ap_{k}} \qquad \beta_k = \dfrac{r^{\top}_k r_k}{r^{\top}_{k-1} r_{k-1}}
$$
Question: why are they better than the basic version?
Pseudocode of the conjugate gradient method
```python
def ConjugateGradientQuadratic(x0, A, b, eps):
    x = x0
    r = A.dot(x0) - b
    p = -r
    while np.linalg.norm(r) > eps:
        alpha = r.dot(r) / p.dot(A.dot(p))
        x = x + alpha * p
        r_next = r + alpha * A.dot(p)
        beta = r_next.dot(r_next) / r.dot(r)
        p = -r_next + beta * p
        r = r_next
    return x
```
Conjugate gradient method for a non-quadratic function
Idea: use the gradients $f'(x_k)$ of the non-quadratic function instead of the residuals $r_k$, and a line search for the step $\alpha_k$ instead of the analytic formula. This yields the Fletcher-Reeves method.
```python
def ConjugateGradientFR(f, gradf, x0, eps):
x = x0
grad = gradf(x)
p = -grad
while np.linalg.norm(gradf(x)) > eps:
alpha = StepSearch(x, f, gradf, **kwargs)
x = x + alpha * p
grad_next = gradf(x)
beta = grad_next.dot(grad_next) / grad.dot(grad)
p = -grad_next + beta * p
grad = grad_next
if restart_condition:
p = -gradf(x)
return x
```
Convergence theorem
Theorem. Suppose that
- the level set $\mathcal{L}$ is bounded
- there exists $\gamma > 0$ such that $\| f'(x) \|_2 \leq \gamma$ for $x \in \mathcal{L}$
Then
$$
\lim_{j \to \infty} \| f'(x_{k_j}) \|_2 = 0
$$
Restarts
To speed up the conjugate gradient method, a restart technique is used: the accumulated history is discarded and the method is restarted from the current point as if it were $x_0$
There are various conditions signaling that a restart is needed, for example
$k = n$
$\dfrac{|\langle f'(x_k), f'(x_{k-1}) \rangle |}{\| f'(x_k) \|_2^2} \geq \nu \approx 0.1$
It can be shown (see Nocedal, Wright, Numerical Optimization, Ch. 5, p. 125) that running the Fletcher-Reeves method without restarts can lead to extremely slow convergence on some iterations!
The Polak-Ribière method and its modifications are free of this drawback.
Comments
The excellent tutorial "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain" is available here
Besides the Fletcher-Reeves formula, there are other ways to compute $\beta_k$: the Polak-Ribière method, the Hestenes-Stiefel method...
The conjugate gradient method requires storing 4 vectors: which ones?
The most expensive operation is the matrix-vector product
Experiments
Quadratic objective function
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
plt.rc("text", usetex=True)
plt.rc("font", family='serif')
eigs = np.linalg.eigvalsh(A)
plt.semilogy(np.unique(eigs))
plt.ylabel("Eigenvalues", fontsize=20)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
"""
Explanation: Eigenvalue distribution
End of explanation
"""
import scipy.optimize as scopt
def callback(x, array):
array.append(x)
scopt_cg_array = []
scopt_cg_callback = lambda x: callback(x, scopt_cg_array)
x = scopt.minimize(f, x0, method="CG", jac=grad_f, callback=scopt_cg_callback)
x = x.x
print("||f'(x*)|| =", np.linalg.norm(A.dot(x) - b))
print("f* =", f(x))
"""
Explanation: The correct answer
End of explanation
"""
def ConjugateGradientQuadratic(x0, A, b, tol=1e-8, callback=None):
x = x0
r = A.dot(x0) - b
p = -r
while np.linalg.norm(r) > tol:
alpha = r.dot(r) / p.dot(A.dot(p))
x = x + alpha * p
if callback is not None:
callback(x)
r_next = r + alpha * A.dot(p)
beta = r_next.dot(r_next) / r.dot(r)
p = -r_next + beta * p
r = r_next
return x
import liboptpy.unconstr_solvers as methods
import liboptpy.step_size as ss
max_iter = 70
print("\t CG quadratic")
cg_quad = methods.fo.ConjugateGradientQuad(A, b)
x_cg = cg_quad.solve(x0, tol=1e-7, max_iter=max_iter, disp=True)
print("\t Gradient Descent")
gd = methods.fo.GradientDescent(f, grad_f, ss.ExactLineSearch4Quad(A, b))
x_gd = gd.solve(x0, tol=1e-7, max_iter=max_iter, disp=True)
print("Condition number of A =", abs(max(eigs)) / abs(min(eigs)))
"""
Explanation: Implementation of the conjugate gradient method
End of explanation
"""
plt.figure(figsize=(8,6))
plt.semilogy([np.linalg.norm(grad_f(x)) for x in cg_quad.get_convergence()], label=r"$\|f'(x_k)\|^{CG}_2$", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in scopt_cg_array[:max_iter]], label=r"$\|f'(x_k)\|^{CG_{PR}}_2$", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in gd.get_convergence()], label=r"$\|f'(x_k)\|^{G}_2$", linewidth=2)
plt.legend(loc="best", fontsize=20)
plt.xlabel(r"Iteration number, $k$", fontsize=20)
plt.ylabel("Convergence rate", fontsize=20)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
print([np.linalg.norm(grad_f(x)) for x in cg_quad.get_convergence()])
plt.figure(figsize=(8,6))
plt.plot([f(x) for x in cg_quad.get_convergence()], label=r"$f(x^{CG}_k)$", linewidth=2)
plt.plot([f(x) for x in scopt_cg_array], label=r"$f(x^{CG_{PR}}_k)$", linewidth=2)
plt.plot([f(x) for x in gd.get_convergence()], label=r"$f(x^{G}_k)$", linewidth=2)
plt.legend(loc="best", fontsize=20)
plt.xlabel(r"Iteration number, $k$", fontsize=20)
plt.ylabel("Function value", fontsize=20)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
"""
Explanation: Convergence plot
End of explanation
"""
import numpy as np
import sklearn.datasets as skldata
import scipy.special as scspec
import jax
import jax.numpy as jnp
from jax.config import config
config.update("jax_enable_x64", True)
n = 300
m = 1000
X, y = skldata.make_classification(n_classes=2, n_features=n, n_samples=m, n_informative=n//3, random_state=0)
X = jnp.array(X)
y = jnp.array(y)
C = 1
@jax.jit
def f(w):
return jnp.linalg.norm(w)**2 / 2 + C * jnp.mean(jnp.logaddexp(jnp.zeros(X.shape[0]), -y * (X @ w)))
autograd_f = jax.jit(jax.grad(f))
def grad_f(w):
denom = scspec.expit(-y * X.dot(w))
return w - C * X.T.dot(y * denom) / X.shape[0]
x0 = jax.random.normal(jax.random.PRNGKey(0), (n,))
print("Initial function value = {}".format(f(x0)))
print("Initial gradient norm = {}".format(jnp.linalg.norm(autograd_f(x0))))
print("Initial gradient norm = {}".format(jnp.linalg.norm(grad_f(x0))))
"""
Explanation: Non-quadratic function
$$
f(w) = \frac12 \|w\|_2^2 + C \frac1m \sum_{i=1}^m \log (1 + \exp(- y_i \langle x_i, w \rangle)) \to \min_w
$$
End of explanation
"""
def ConjugateGradientFR(f, gradf, x0, num_iter=100, tol=1e-8, callback=None, restart=False):
x = x0
grad = gradf(x)
p = -grad
it = 0
while np.linalg.norm(gradf(x)) > tol and it < num_iter:
alpha = utils.backtracking(x, p, method="Wolfe", beta1=0.1, beta2=0.4, rho=0.5, f=f, grad_f=gradf)
if alpha < 1e-18:
break
x = x + alpha * p
if callback is not None:
callback(x)
grad_next = gradf(x)
beta = grad_next.dot(grad_next) / grad.dot(grad)
p = -grad_next + beta * p
grad = grad_next.copy()
it += 1
if restart and it % restart == 0:
grad = gradf(x)
p = -grad
return x
"""
Explanation: Implementation of the Fletcher-Reeves method
End of explanation
"""
import scipy.optimize as scopt
import liboptpy.restarts as restarts
n_restart = 60
tol = 1e-5
max_iter = 600
scopt_cg_array = []
scopt_cg_callback = lambda x: callback(x, scopt_cg_array)
x = scopt.minimize(f, x0, tol=tol, method="CG", jac=autograd_f, callback=scopt_cg_callback, options={"maxiter": max_iter})
x = x.x
print("\t CG by Polak-Rebiere")
print("Norm of garient = {}".format(np.linalg.norm(autograd_f(x))))
print("Function value = {}".format(f(x)))
print("\t CG by Fletcher-Reeves")
# ss.Backtracking("Armijo", rho=0.5, beta=0.001, init_alpha=1.)
cg_fr = methods.fo.ConjugateGradientFR(f, autograd_f, ss.Backtracking("Wolfe", rho=0.9, beta1=0.3, beta2=0.8, init_alpha=1.))
x = cg_fr.solve(x0, tol=tol, max_iter=max_iter, disp=True)
print("\t CG by Fletcher-Reeves with restart n")
cg_fr_rest = methods.fo.ConjugateGradientFR(f, autograd_f, ss.Backtracking("Wolfe", rho=0.9, beta1=0.3, beta2=0.8,
init_alpha=1.), restarts.Restart(n // n_restart))
x = cg_fr_rest.solve(x0, tol=tol, max_iter=max_iter, disp=True)
print("\t Gradient Descent")
gd = methods.fo.GradientDescent(f, autograd_f, ss.Backtracking("Wolfe", rho=0.9, beta1=0.1, beta2=0.8, init_alpha=1.))
x = gd.solve(x0, max_iter=max_iter, tol=tol, disp=True)
plt.figure(figsize=(8, 6))
plt.semilogy([np.linalg.norm(grad_f(x)) for x in cg_fr.get_convergence()], label=r"$\|f'(x_k)\|^{CG_{FR}}_2$ no restart", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in cg_fr_rest.get_convergence()], label=r"$\|f'(x_k)\|^{CG_{FR}}_2$ restart", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in scopt_cg_array], label=r"$\|f'(x_k)\|^{CG_{PR}}_2$", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in gd.get_convergence()], label=r"$\|f'(x_k)\|^{G}_2$", linewidth=2)
plt.legend(loc="best", fontsize=16)
plt.xlabel(r"Iteration number, $k$", fontsize=20)
plt.ylabel("Convergence rate", fontsize=20)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
"""
Explanation: Convergence plot
End of explanation
"""
%timeit scopt.minimize(f, x0, method="CG", tol=tol, jac=grad_f, options={"maxiter": max_iter})
%timeit cg_fr.solve(x0, tol=tol, max_iter=max_iter)
%timeit cg_fr_rest.solve(x0, tol=tol, max_iter=max_iter)
%timeit gd.solve(x0, tol=tol, max_iter=max_iter)
"""
Explanation: Running time
End of explanation
"""
import liboptpy.base_optimizer as base
import numpy as np
import liboptpy.unconstr_solvers.fo as fo
import liboptpy.step_size as ss
import matplotlib.pyplot as plt
%matplotlib inline
plt.rc("text", usetex=True)
class HeavyBall(base.LineSearchOptimizer):
def __init__(self, f, grad, step_size, beta, **kwargs):
super().__init__(f, grad, step_size, **kwargs)
self._beta = beta
def get_direction(self, x):
self._current_grad = self._grad(x)
return -self._current_grad
def _f_update_x_next(self, x, alpha, h):
if len(self.convergence) < 2:
return x + alpha * h
else:
return x + alpha * h + self._beta * (x - self.convergence[-2])
def get_stepsize(self):
return self._step_size.get_stepsize(self._grad_mem[-1], self.convergence[-1], len(self.convergence))
np.random.seed(42)
n = 100
A = np.random.randn(n, n)
A = A.T.dot(A)
x_true = np.random.randn(n)
b = A.dot(x_true)
f = lambda x: 0.5 * x.dot(A.dot(x)) - b.dot(x)
grad = lambda x: A.dot(x) - b
A_eigvals = np.linalg.eigvalsh(A)
L = np.max(A_eigvals)
mu = np.min(A_eigvals)
print(L, mu)
print("Condition number = {}".format(L / mu))
alpha_opt = 4 / (np.sqrt(L) + np.sqrt(mu))**2
beta_opt = np.maximum((1 - np.sqrt(alpha_opt * L))**2,
(1 - np.sqrt(alpha_opt * mu))**2)
print(alpha_opt, beta_opt)
beta_test = 0.95
methods = {
"GD fixed": fo.GradientDescent(f, grad, ss.ConstantStepSize(1 / L)),
"GD Armijo": fo.GradientDescent(f, grad,
ss.Backtracking("Armijo", rho=0.5, beta=0.1, init_alpha=1.)),
r"HB, $\beta = {}$".format(beta_test): HeavyBall(f, grad, ss.ConstantStepSize(1 / L), beta=beta_test),
"HB optimal": HeavyBall(f, grad, ss.ConstantStepSize(alpha_opt), beta = beta_opt),
"CG": fo.ConjugateGradientQuad(A, b)
}
x0 = np.random.randn(n)
max_iter = 5000
tol = 1e-6
for m in methods:
_ = methods[m].solve(x0=x0, max_iter=max_iter, tol=tol)
figsize = (10, 8)
fontsize = 26
plt.figure(figsize=figsize)
for m in methods:
plt.semilogy([np.linalg.norm(grad(x)) for x in methods[m].get_convergence()], label=m)
plt.legend(fontsize=fontsize, loc="best")
plt.xlabel("Number of iteration, $k$", fontsize=fontsize)
plt.ylabel(r"$\| f'(x_k)\|_2$", fontsize=fontsize)
plt.xticks(fontsize=fontsize)
_ = plt.yticks(fontsize=fontsize)
for m in methods:
print(m)
%timeit methods[m].solve(x0=x0, max_iter=max_iter, tol=tol)
"""
Explanation: Summary
Conjugate directions
Conjugate gradient method
Convergence
Experiments
Heavy-ball method
Proposed in 1964 by B. T. Polyak
<img src="./polyak.jpeg">
For a quadratic objective, the zigzag behavior of gradient descent is caused by the non-uniformity of the descent directions
Let us take the previous directions into account when computing the new point
Heavy-ball method
$$
x_{k+1} = x_k - \alpha_k f'(x_k) + {\color{red}{\beta_k(x_k - x_{k-1})}}
$$
In addition to the step size $\alpha_k$ along the anti-gradient, there is now one more parameter, $\beta_k$
Geometric interpretation of the heavy-ball method
Picture taken from here
<img src="./heavy_ball.png" width=600 align="center">
Convergence theorem
Suppose $f$ is strongly convex with a Lipschitz-continuous gradient. Then, for
$$
\alpha_k = \frac{4}{(\sqrt{L} + \sqrt{\mu})^2}
$$
and
$$
\beta_k = \max(|1 - \sqrt{\alpha_k L}|^2, |1 - \sqrt{\alpha_k \mu}|^2)
$$
the following convergence estimate holds
$$
\left\| \begin{bmatrix} x_{k+1} - x^* \\ x_k - x^* \end{bmatrix} \right\|_2
\leq
\left( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^k \left\|
\begin{bmatrix} x_1 - x^* \\ x_0 - x^* \end{bmatrix}
\right\|_2
$$
This matches the lower bound for first-order methods!
The optimal parameters $\alpha_k$ and $\beta_k$ are expressed through the typically unknown constants $L$ and $\mu$
Proof sketch
Rewrite the method as
\begin{align}
\begin{bmatrix} x_{k+1}\\ x_k \end{bmatrix} = \begin{bmatrix} (1 + \beta_k)I & -\beta_k I\\ I & 0 \end{bmatrix} \begin{bmatrix} x_k\\ x_{k-1} \end{bmatrix} + \begin{bmatrix} -\alpha_k f'(x_k)\\ 0 \end{bmatrix}
\end{align}
Use a theorem from analysis
\begin{align}
\begin{bmatrix} x_{k+1} - x^*\\ x_k - x^* \end{bmatrix} = \underbrace{ \begin{bmatrix} (1 + \beta_k)I - \alpha_k \int_0^1 f''(x(\tau))d\tau & -\beta_k I\\ I & 0 \end{bmatrix}}_{=A_k}\begin{bmatrix} x_k - x^*\\ x_{k-1} - x^*\end{bmatrix},
\end{align}
where $x(\tau) = x_k + \tau(x^* - x_k)$
- By the integral mean value theorem, $A_k(x) = \int_0^1 f''(x(\tau))d\tau = f''(z)$, so $L$ and $\mu$ bound the spectrum of $A_k(x)$
- The convergence rate is governed by the spectral radius of the iteration matrix $A_k$
Let us estimate the spectrum of $A_k$
$$
A_k = \begin{bmatrix} (1 + \beta_k)I - \alpha_k A(x_k) & -\beta_k I \\ I & 0 \end{bmatrix}
$$
Let $A(x_k) = U\Lambda(x_k) U^{\top}$, which is possible since the Hessian is a symmetric matrix; then
$$
\begin{bmatrix} U^{\top} & 0 \\ 0 & U^{\top} \end{bmatrix} \begin{bmatrix} (1 + \beta_k)I - \alpha_k A(x_k) & -\beta_k I \\ I & 0 \end{bmatrix} \begin{bmatrix} U & 0\\ 0 & U \end{bmatrix} = \begin{bmatrix} (1 + \beta_k)I - \alpha_k \Lambda(x_k) & -\beta_k I \\ I & 0 \end{bmatrix} = \hat{A}_k
$$
An orthogonal transformation does not change the spectral radius of a matrix
Next, permute the rows and columns so that
$$
\hat{A}_k \simeq \mathrm{diag}(T_1, \ldots, T_n),
$$
where $T_i = \begin{bmatrix} 1 + \beta_k - \alpha_k \lambda_i & -\beta_k \\ 1 & 0 \end{bmatrix}$ and $\simeq$ denotes equality of spectral radii, since a permutation matrix is orthogonal
Let us show how to construct such a permutation on the example of a $4 \times 4$ matrix
$$
\begin{bmatrix}a & 0 & c & 0 \\ 0 & b & 0 & c \\ 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \end{bmatrix} \rightarrow \begin{bmatrix}a & 0 & c & 0 \\ 1 & 0 & 0 & 0 \\ 0 & b & 0 & c \\ 0 & 1 & 0 & 0 \end{bmatrix} \to \begin{bmatrix}a & c & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & b & c \\ 0 & 0 & 1 & 0 \end{bmatrix}
$$
We have reduced the problem to estimating the spectral radius of the block-diagonal matrix $\hat{A}_k$
$\rho(\hat{A}_k) = \max\limits_{i=1,\ldots,n} { |\lambda_1(T_i)|, |\lambda_2(T_i)|}$
The characteristic equation for $T_i$ is
$$
\beta_k - u (1 + \beta_k - \alpha_k \lambda_i - u) = 0, \quad \text{i.e.} \quad u^2 - u(1 + \beta_k - \alpha_k\lambda_i) + \beta_k = 0
$$
- Further analysis of the distribution of the roots and their bounds yields the estimate stated in the theorem
Experiments
Test problem 1
$$
f(x) = \frac{1}{2}x^{\top}Ax - b^{\top}x \to \min_x,
$$
where the matrix $A$ is ill-conditioned but positive definite!
End of explanation
"""
n = 300
m = 1000
X, y = skldata.make_classification(n_classes=2, n_features=n, n_samples=m, n_informative=n//3)
C = 1
@jax.jit
def f(w):
return jnp.linalg.norm(w)**2 / 2 + C * jnp.mean(jnp.logaddexp(jnp.zeros(X.shape[0]), -y * (X @ w)))
# def grad(w):
# denom = scspec.expit(-y * X.dot(w))
# return w - C * X.T.dot(y * denom) / X.shape[0]
autograd_f = jax.jit(jax.grad(f))
x0 = jnp.zeros(n)
print("Initial function value = {}".format(f(x0)))
print("Initial gradient norm = {}".format(jnp.linalg.norm(grad_f(x0))))
alpha_test = 5e-3
beta_test = 0.9
methods = {
r"GD, $\alpha_k = {}$".format(alpha_test): fo.GradientDescent(f, autograd_f, ss.ConstantStepSize(alpha_test)),
"GD Armijo": fo.GradientDescent(f, autograd_f,
ss.Backtracking("Armijo", rho=0.7, beta=0.1, init_alpha=1.)),
r"HB, $\beta = {}$".format(beta_test): HeavyBall(f, autograd_f, ss.ConstantStepSize(alpha_test), beta=beta_test),
}
# x0 = np.random.rand(n)
# x0 = jnp.zeros(n)
x0 = jax.random.normal(jax.random.PRNGKey(0), (X.shape[1],))
max_iter = 400
tol = 1e-5
for m in methods:
print(m)
_ = methods[m].solve(x0=x0, max_iter=max_iter, tol=tol)
scopt_cg_array = []
scopt_cg_callback = lambda x: callback(x, scopt_cg_array)
x = scopt.minimize(f, x0, tol=tol, method="CG", jac=autograd_f, callback=scopt_cg_callback, options={"maxiter": max_iter})
x = x.x
figsize = (10, 8)
fontsize = 26
plt.figure(figsize=figsize)
for m in methods:
plt.semilogy([np.linalg.norm(autograd_f(x)) for x in methods[m].get_convergence()], label=m)
plt.semilogy([np.linalg.norm(autograd_f(x)) for x in scopt_cg_array], label="CG PR")
plt.legend(fontsize=fontsize, loc="best")
plt.xlabel("Number of iteration, $k$", fontsize=fontsize)
plt.ylabel(r"$\| f'(x_k)\|_2$", fontsize=fontsize)
plt.xticks(fontsize=fontsize)
_ = plt.yticks(fontsize=fontsize)
for m in methods:
print(m)
%timeit methods[m].solve(x0=x0, max_iter=max_iter, tol=tol)
"""
Explanation: Test problem 2
$$
f(w) = \frac12 \|w\|_2^2 + C \frac1m \sum_{i=1}^m \log (1 + \exp(- y_i \langle x_i, w \rangle)) \to \min_w
$$
End of explanation
"""
VVard0g/ThreatHunter-Playbook | docs/tutorials/jupyter/notebooks/01_intro_to_python.ipynb | mit
for x in list(range(5)):
print("One number per loop..")
print(x)
if x > 2:
print("The number is greater than 2")
print("----------------------------")
"""
Explanation: Introduction to Python
Goals:
Learn basic Python operations
Understand differences in data structures
Get familiarized with conditional statements and loops
Learn to create custom functions and import python modules
Main Reference: McKinney, Wes. Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython. O'Reilly Media. Kindle Edition
Indentation
Python code is structured by indentation (tabs or spaces) instead of braces which is what other languages normally use. In addition, a colon (:) is used to define the start of an indented code block.
End of explanation
"""
a = "pedro"
a.capitalize()
"""
Explanation: Everything is an Object
Everything in Python is considered an object.
A string, a list, a function and even a number is an object.
For example, you can define a variable to reference a string and then access the methods available for the string object.
If you press the tab key after the variable name and period, you will see the methods available for it.
End of explanation
"""
a = [1,2,3]
b = a
b
"""
Explanation: Variables
In Python, when you define/create a variable, you are basically creating a reference to an object (i.e string,list,etc). If you want to define/create a new variable from the original variable, you will be creating another reference to the original object rather than copying the contents of the first variable to the second one.
End of explanation
"""
a.append(4)
b
"""
Explanation: Therefore, if you update the original variable (a), the new variable (b) will automatically reference the updated object.
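If an independent copy is needed instead of a second reference to the same object, a list can be copied explicitly — a small sketch:

```python
a = [1, 2, 3]
b = a           # b references the same list object as a
c = a.copy()    # c is an independent copy (list(a) or a[:] work too)

a.append(4)
print(b)  # [1, 2, 3, 4] -- b sees the change
print(c)  # [1, 2, 3]    -- c does not
```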
End of explanation
"""
dog_name = 'Pedro'
age = 3
is_vaccinated = True
birth_year = 2015
is_vaccinated
dog_name
"""
Explanation: A variable can have a short name (like x and y) or a more descriptive name (age, dog, owner).
Rules for Python variables:
* A variable name must start with a letter or the underscore character
* A variable name cannot start with a number
* A variable name can only contain alpha-numeric characters and underscores (A-z, 0-9, and _ )
* Variable names are case-sensitive (age, Age and AGE are three different variables)
Reference:https://www.w3schools.com/python/python_variables.asp
End of explanation
"""
type(age)
type(dog_name)
type(is_vaccinated)
"""
Explanation: Data Types
As any other object, you can get information about its type via the built-in function type().
End of explanation
"""
x = 4
y = 10
x-y
x*y
y/x
y**x
x>y
x==y
y>=x
"""
Explanation: Combining variables and operations
End of explanation
"""
2+4
5*6
5>3
"""
Explanation: Binary Operators and Comparisons
End of explanation
"""
print("Hello Helk!")
"""
Explanation: The print Statement
End of explanation
"""
print("x = " + str(x))
print("y = " + str(y))
if x==y:
print('yes')
else:
print('no')
"""
Explanation: Control Flows
References:
* https://docs.python.org/3/tutorial/controlflow.html
* https://docs.python.org/3/reference/compound_stmts.html#the-if-statement
If,elif,else statements
The if statement is used for conditional execution
It selects exactly one of the suites by evaluating the expressions one by one until one is found to be true; then that suite is executed.
If all expressions are false, the suite of the else clause, if present, is executed.
End of explanation
"""
if x==y:
print('They are equal')
elif x > y:
print("It is grater than")
else:
print("None of the conditionals were true")
"""
Explanation: An if statement can be optionally followed by one or more elif blocks and a catch-all else block if all of the conditions are False :
End of explanation
"""
my_dog_list=['Pedro',3,True,2015]
for i in range(0,10):
print(i*10)
"""
Explanation: Loops
For
The for statement is used to iterate over the elements of a sequence (such as a string, tuple or list) or other iterable object.
End of explanation
"""
i = 1
while i <= 5:
print(i ** 2)
i += 1
i = 1
while i > 0:
if i > 5:
break
print(i ** 2)
i += 1
"""
Explanation: While
A while loop allows you to execute a block of code until a condition evaluates to false or the loop is ended with a break command.
End of explanation
"""
my_dog_list=['Pedro',3,True,2015]
my_dog_list[0]
my_dog_list[2:4]
print("My dog's name is " + str(my_dog_list[0]) + " and he is " + str(my_dog_list[1]) + " years old.")
"""
Explanation: Data structures
References:
* https://docs.python.org/3/tutorial/datastructures.html
* https://python.swaroopch.com/data_structures.html
Lists
Lists are data structures that allow you to define an ordered collection of items.
Lists are constructed with square brackets, separating items with commas: [a, b, c].
Lists are mutable objects which means that you can modify the values contained in them.
The elements of a list can be of different types (string, integer, etc)
End of explanation
"""
my_dog_list.append("tennis balls")
my_dog_list
"""
Explanation: The list data type has some more methods and you can find them here.
One in particular is the list.append() which allows you to add an item to the end of the list. Equivalent to a[len(a):] = [x].
End of explanation
"""
my_dog_list[1] = 4
my_dog_list
"""
Explanation: You can modify the list values too:
End of explanation
"""
my_dog_dict={'name':'Pedro','age':3,'is_vaccinated':True,'birth_year':2015}
my_dog_dict
my_dog_dict['age']
my_dog_dict.keys()
my_dog_dict.values()
"""
Explanation: Dictionaries
Dictionaries are sometimes found in other languages as “associative memories” or “associative arrays”.
Dictionaries are indexed by keys, which can be any immutable type; strings and numbers can always be keys.
It is best to think of a dictionary as a set of key: value pairs, with the requirement that the keys are unique (within one dictionary).
A pair of braces creates an empty dictionary: {}.
Remember that key-value pairs in a dictionary are not ordered in any manner. If you want a particular order, then you will have to sort them yourself before using it.
End of explanation
"""
my_dog_tuple=('Pedro',3,True,2015)
my_dog_tuple
"""
Explanation: Tuples
A tuple consists of a number of values separated by commas
On output tuples are always enclosed in parentheses, so that nested tuples are interpreted correctly; they may be input with or without surrounding parentheses, although often parentheses are necessary anyway (if the tuple is part of a larger expression).
End of explanation
"""
my_dog_tuple[1]
"""
Explanation: Tuples are immutable, and usually contain a heterogeneous sequence of elements that are accessed via unpacking or indexing.
Lists are mutable, and their elements are usually homogeneous and are accessed by iterating over the list.
End of explanation
"""
seq = [7, 2, 3, 7, 5, 6, 0, 1]
seq[1:5]
"""
Explanation: Slicing
You can select sections of most sequence types by using slice notation, which in its basic form consists of
start:stop passed to the indexing operator []
End of explanation
"""
def square(n):
return n ** 2
print("Square root of 2 is " + str(square(2)))
number_list = [1,2,3,4,5]
for number in number_list:
sn = square(number)
print("Square root of " + str(number) + " is " + str(sn))
"""
Explanation: Functions
Functions allow you to organize and reuse code blocks. If you repeat the same code across several conditions, you could make that code block a function and re-use it. Functions are declared with the def keyword and
returned from with the return keyword:
End of explanation
"""
def square(n):
return n ** 2
def cube(n):
return n ** 3
"""
Explanation: Modules
References:
* https://docs.python.org/3/tutorial/modules.html#modules
If you quit from the Python interpreter and enter it again, the definitions you have made (functions and variables) are lost.
Therefore, if you want to write a somewhat longer program, you are better off using a text editor to prepare the input for the interpreter and running it with that file as input instead.
Let's say we define two functions:
End of explanation
"""
import math_ops
for number in number_list:
    sn = math_ops.square(number)
    cn = math_ops.cube(number)
    print("The square of " + str(number) + " is " + str(sn))
    print("The cube of " + str(number) + " is " + str(cn))
print("-------------------------")
"""
Explanation: You can save the code above in a file named math_ops.py (avoid naming it math.py, since that would shadow Python's built-in math module). I created the file for you already in the current folder.
All you have to do is import the math_ops module
End of explanation
"""
help('modules')
"""
Explanation: You can get a list of the current installed modules too
End of explanation
"""
import datetime
"""
Explanation: Let's import the datetime module.
End of explanation
"""
dir(datetime)
"""
Explanation: Explore the datetime available methods. You can do that by typing the module name, a period after that and pressing the tab key or by using the the built-in functionn dir() as shown below:
End of explanation
"""
import datetime as dt
dir(dt)
"""
Explanation: You can also import a module with a custom name
End of explanation
"""
|
MDAnalysis/cellgrid | tutorials/CellGrid_Tutorial.ipynb | mit | import numpy as np
import cellgrid
"""
Explanation: CellGrid tutorial
A CellGrid is an object that contains coordinates, which are split into cells.
End of explanation
"""
coords = np.random.random(30000).reshape(10000, 3).astype(np.float32) * 10.0
box = np.ones(3).astype(np.float32) * 10.0
"""
Explanation: We'll start with 10,000 randomly spaced points inside a 10 x 10 x 10 box.
Note that all coordinates must be inside the box (within 0.0 and 10.0) to use a CellGrid
End of explanation
"""
cg = cellgrid.CellGrid(box, 1.2, coords)
print cg
"""
Explanation: We create a CellGrid by specifying the box, the width of a cell (1.2 in this example), and the coordinates.
End of explanation
"""
cell = cg[20]
print cell
"""
Explanation: The CellGrid object has grouped the coordinates into an 8x8x8 grid of cells. Each cell has a minimum size of 1.2 x 1.2 x 1.2.
The CellGrid then provides access to Cells.
End of explanation
"""
print cell.coordinates
print cell.indices
"""
Explanation: Cells contain positions and indices.
The indices tell you the original identity of the coordinates.
End of explanation
"""
print 'I live at: ', cell.address
print 'Half my neighbours are: ', cell.half_neighbours
for other_address in cell.half_neighbours:
other_cell = cg[other_address]
print other_cell
"""
Explanation: Cells and neighbours
Cells all have an address, which defines their location in cartesian coordinates within the parent CellGrid.
End of explanation
"""
print len(cell.half_neighbours)
print len(cell.all_neighbours)
"""
Explanation: Cells can iterate over either "half_neighbours" or "all_neighbours".
End of explanation
"""
n = 0
for c in cg:
ncell = len(c)
n += ncell * (ncell - 1) // 2 # number of pairs within this cell
for other_address in c.half_neighbours:
n += ncell * len(cg[other_address])
print n
"""
Explanation: All neighbours returns the address of all 26 neighbouring cells to a given cell. These can be put into a CellGrid to get the Cell at that address.
Half neighbours refer to iterating over all 13 neighbouring cells in one direction.
This pattern prevents pairs being calculated twice.
If you are interested in all pairwise comparisons within a CellGrid, you likely want to use half_neighbours.
Note that neither list of neighbour addresses includes the cell itself!
Finding all pairs within 1.2 of each other
Now that the coordinates have been sorted into cells, all nearby pairs can be found in a more efficient way.
First we need to work out the total number of pairwise comparisons that will be made.
End of explanation
"""
distances = np.zeros(n, dtype=np.float32)
pair_indices = np.zeros((n, 2), dtype=np.int)
from cellgrid.cgmath import (
inter_distance_array_withpbc,
intra_distance_array_withpbc,
inter_index_array,
intra_index_array
)
pos = 0 # pointer to where in the array to fill results
for c in cg:
lenc = len(c)
intra_distance_array_withpbc(c.coordinates, box, distances[pos:])
intra_index_array(c.indices, pair_indices[pos:])
pos += lenc * (lenc - 1) // 2
for other_address in c.half_neighbours:
other = cg[other_address]
inter_distance_array_withpbc(c.coordinates, other.coordinates, box, distances[pos:])
inter_index_array(c.indices, other.indices, pair_indices[pos:])
pos += lenc * len(other)
"""
Explanation: A brute-force n×n comparison of our 10,000 coordinates would require 100 million pairwise comparisons; using a CellGrid reduces this to about 2.6 million.
We can now allocate the results arrays for our search. We will calculate both the distances and the identity of the pairs.
End of explanation
"""
print distances[100], pair_indices[100]
print np.linalg.norm(coords[3152] - coords[3093])
"""
Explanation: Taking for example the 100th result, this tells us that coordinates 3,152 and 3,093 are separated by 0.374 units of distance.
End of explanation
"""
new_distances, new_pairs = distances[distances < 1.2], pair_indices[distances < 1.2]
len(new_distances)
print (new_distances < 1.2).all()
"""
Explanation: This method will have found all pairs within 1.2 of each other, but also many pairs that are slightly above this.
The extra pairs can easily be filtered out using a mask.
End of explanation
"""
brute_force = np.zeros(10000 * 10000, dtype=np.float32)
inter_distance_array_withpbc(coords, coords, box, brute_force)
(brute_force < 1.2).sum()
b2 = brute_force[brute_force < 1.2]
"""
Explanation: We can check that we found all pairs by performing the brute force method to finding pairs.
End of explanation
"""
(len(b2) - 10000) / 2
"""
Explanation: To calculate the number of unique pairs using this approach, we look at all pairs with distance less than 1.2, then subtract 10,000 (the number of coordinates, which are
End of explanation
"""
|
awadalaa/DataSciencePractice | practice/05.LinearRegression.ipynb | mit | # conventional way to import pandas
import pandas as pd
# read CSV file directly from a URL and save the results
data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
# display the first 5 rows
data.head()
"""
Explanation: Data science pipeline: pandas, seaborn, scikit-learn
Agenda
How do I use the pandas library to read data into Python?
How do I use the seaborn library to visualize data?
What is linear regression, and how does it work?
How do I train and interpret a linear regression model in scikit-learn?
What are some evaluation metrics for regression problems?
How do I choose which features to include in my model?
Types of supervised learning
Classification: Predict a categorical response
Regression: Predict a continuous response
Reading data using pandas
Pandas: popular Python library for data exploration, manipulation, and analysis
* Anaconda users: pandas is already installed
* Other users: installation instructions
End of explanation
"""
# display the last 5 rows
data.tail()
# check the shape of the DataFrame (rows, columns)
data.shape
"""
Explanation: Primary object types:
* DataFrame: rows and columns (like a spreadsheet)
* Series: a single column
End of explanation
"""
# conventional way to import seaborn
import seaborn as sns
# allow plots to appear within the notebook
%matplotlib inline
# visualize the relationship between the features and the response using scatterplots
sns.pairplot(data, x_vars=['TV','Radio','Newspaper'], y_vars='Sales', size=7, aspect=0.7, kind='reg')
"""
Explanation: What are the features?
* TV: advertising dollars spent on TV for a single product in a given market (in thousands of dollars)
* Radio: advertising dollars spent on Radio
* Newspaper: advertising dollars spent on Newspaper
What is the response?
* Sales: sales of a single product in a given market (in thousands of items)
What else do we know?
* Because the response variable is continuous, this is a regression problem.
* There are 200 observations (represented by the rows), and each observation is a single market.
Visualizing data using seaborn
Seaborn: Python library for statistical data visualization built on top of Matplotlib
* Anaconda users: run conda install seaborn from the command line
* Other users: installation instructions
End of explanation
"""
# create a Python list of feature names
feature_cols = ['TV', 'Radio', 'Newspaper']
# use the list to select a subset of the original DataFrame
X = data[feature_cols]
# equivalent command to do this in one line
X = data[['TV', 'Radio', 'Newspaper']]
# print the first 5 rows
X.head()
# check the type and shape of X
print type(X)
print X.shape
# select a Series from the DataFrame
y = data['Sales']
# equivalent command that works if there are no spaces in the column name
y = data.Sales
# print the first 5 values
y.head()
# check the type and shape of y
print type(y)
print y.shape
"""
Explanation: Linear regression
Pros: fast, no tuning required, highly interpretable, well-understood
Cons: unlikely to produce the best predictive accuracy (presumes a linear relationship between the features and response)
Form of linear regression
$$y = \beta_0 + \beta_1x_1 + \beta_2x_2 + \ldots + \beta_nx_n$$
* $y$ is the response
* $\beta_0$ is the intercept
* $\beta_1$ is the coefficient for $x_1$ (the first feature)
* $\beta_n$ is the coefficient for $x_n$ (the nth feature)
In this case:
$$y = \beta_0 + \beta_1 \times TV + \beta_2 \times Radio + \beta_3 \times Newspaper$$
The β values are called the model coefficients. These values are "learned" during the model fitting step using the "least squares" criterion. Then, the fitted model can be used to make predictions!
Preparing X and y using pandas
scikit-learn expects X (feature matrix) and y (response vector) to be NumPy arrays.
However, pandas is built on top of NumPy.
Thus, X can be a pandas DataFrame and y can be a pandas Series!
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# default split is 75% for training and 25% for testing
print X_train.shape
print y_train.shape
print X_test.shape
print y_test.shape
"""
Explanation: Splitting X and y into training and testing sets
End of explanation
"""
# import model
from sklearn.linear_model import LinearRegression
# instantiate
linreg = LinearRegression()
# fit the model to the training data (learn the coefficients)
linreg.fit(X_train, y_train)
"""
Explanation: Linear regression in scikit-learn
End of explanation
"""
# print the intercept and coefficients
print linreg.intercept_
print linreg.coef_
# pair the feature names with the coefficients
zip(feature_cols, linreg.coef_)
"""
Explanation: Interpreting model coefficients
End of explanation
"""
# make predictions on the testing set
y_pred = linreg.predict(X_test)
"""
Explanation: <center>y=2.88+0.0466×TV+0.179×Radio+0.00345×Newspaper</center>
How do we interpret the TV coefficient (0.0466)?
For a given amount of Radio and Newspaper ad spending, a "unit" increase in TV ad spending is associated with a 0.0466 "unit" increase in Sales.
Or more clearly: For a given amount of Radio and Newspaper ad spending, an additional \$1,000 spent on TV ads is associated with an increase in sales of 46.6 items.
Important notes:
* This is a statement of association, not causation.
* If an increase in TV ad spending was associated with a decrease in sales, $\beta_1$ would be negative.
Making predictions
End of explanation
"""
# define true and predicted response values
true = [100, 50, 30, 20]
pred = [90, 50, 50, 30]
"""
Explanation: We need an evaluation metric in order to compare our predictions with the actual values!
Model evaluation metrics for regression
Evaluation metrics for classification problems, such as accuracy, are not useful for regression problems. Instead, we need evaluation metrics designed for comparing continuous values.
Let's create some example numeric predictions, and calculate three common evaluation metrics for regression problems:
End of explanation
"""
# calculate MAE by hand
print (10 + 0 + 20 + 10)/4.
# calculate MAE using scikit-learn
from sklearn import metrics
print metrics.mean_absolute_error(true, pred)
"""
Explanation: Mean Absolute Error (MAE) is the mean of the absolute value of the errors:
$$\frac{1}{n}\sum_{i=1}^n|y_i-\hat{y}_i|$$
End of explanation
"""
# calculate MSE by hand
print (10**2 + 0**2 + 20**2 + 10**2)/4.
# calculate MSE using scikit-learn
print metrics.mean_squared_error(true, pred)
"""
Explanation: Mean Squared Error (MSE) is the mean of the squared errors:
$$\frac{1}{n}\sum_{i=1}^n(y_i-\hat{y}_i)^2$$
End of explanation
"""
# calculate RMSE by hand
import numpy as np
print np.sqrt((10**2 + 0**2 + 20**2 + 10**2)/4.)
# calculate RMSE using scikit-learn
print np.sqrt(metrics.mean_squared_error(true, pred))
"""
Explanation: Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors:
$$\sqrt{\frac{1}{n}\sum_{i=1}^n(y_i-\hat{y}_i)^2}$$
End of explanation
"""
print np.sqrt(metrics.mean_squared_error(y_test, y_pred))
"""
Explanation: Comparing these metrics:
MAE is the easiest to understand, because it's the average error.
MSE is more popular than MAE, because MSE "punishes" larger errors.
RMSE is even more popular than MSE, because RMSE is interpretable in the "y" units.
Computing the RMSE for our Sales predictions
End of explanation
"""
# create a Python list of feature names
feature_cols = ['TV', 'Radio']
# use the list to select a subset of the original DataFrame
X = data[feature_cols]
# select a Series from the DataFrame
y = data.Sales
# split into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# fit the model to the training data (learn the coefficients)
linreg.fit(X_train, y_train)
# make predictions on the testing set
y_pred = linreg.predict(X_test)
# compute the RMSE of our predictions
print np.sqrt(metrics.mean_squared_error(y_test, y_pred))
"""
Explanation: Feature selection
Does Newspaper "belong" in our model? In other words, does it improve the quality of our predictions?
Let's remove it from the model and check the RMSE!
End of explanation
"""
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: The RMSE decreased when we removed Newspaper from the model. (Error is something we want to minimize, so a lower number for RMSE is better.) Thus, it is unlikely that this feature is useful for predicting Sales, and should be removed from the model.
End of explanation
"""
|
dsacademybr/PythonFundamentos | Cap11/DSA-Python-Cap11-Machine-Learning.ipynb | gpl-3.0 | # Versão da Linguagem Python
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
"""
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Chapter 11</font>
Download: http://github.com/dsacademybr
End of explanation
"""
from IPython.display import Image
Image('Workflow.png')
"""
Explanation: Predicting the Occurrence of Diabetes
End of explanation
"""
# Importing the modules
import pandas as pd
import matplotlib as mat
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
pd.__version__
mat.__version__
# Loading the dataset
df = pd.read_csv("pima-data.csv")
# Checking the shape of the data
df.shape
# Checking the first rows of the dataset
df.head(5)
# Checking the last rows of the dataset
df.tail(5)
# Checking whether there are any null values
df.isnull().values.any()
# Identifying the correlation between the variables
# Correlation does not imply causation
def plot_corr(df, size=10):
    corr = df.corr()    
    fig, ax = plt.subplots(figsize = (size, size))
    ax.matshow(corr)  
    plt.xticks(range(len(corr.columns)), corr.columns) 
    plt.yticks(range(len(corr.columns)), corr.columns) 
# Creating the plot
plot_corr(df)
# Visualizing the correlation as a table
# Correlation coefficient:
# +1  = strong positive correlation
# 0   = no correlation
# -1  = strong negative correlation
df.corr()
# Defining the classes
diabetes_map = {True : 1, False : 0}
# Applying the mapping to the dataset
df['diabetes'] = df['diabetes'].map(diabetes_map)
# Checking the first rows of the dataset
df.head(5)
# Checking how the data is distributed
num_true = len(df.loc[df['diabetes'] == True])
num_false = len(df.loc[df['diabetes'] == False])
print("Number of True cases : {0} ({1:2.2f}%)".format(num_true, (num_true/ (num_true + num_false)) * 100))
print("Number of False cases: {0} ({1:2.2f}%)".format(num_false, (num_false/ (num_true + num_false)) * 100))
"""
Explanation: Dataset from the UCI Machine Learning Repository / Kaggle
https://www.kaggle.com/uciml/pima-indians-diabetes-database/data
End of explanation
"""
from IPython.display import Image
Image('Treinamento.png')
import sklearn as sk
sk.__version__
from sklearn.model_selection import train_test_split
# Selecting the predictor variables (feature selection)
atributos = ['num_preg', 'glucose_conc', 'diastolic_bp', 'thickness', 'insulin', 'bmi', 'diab_pred', 'age']
# Variable to be predicted
atrib_prev = ['diabetes']
# Creating the objects
X = df[atributos].values
Y = df[atrib_prev].values
X
Y
# Defining the split ratio
split_test_size = 0.30
# Creating the training and test data
X_treino, X_teste, Y_treino, Y_teste = train_test_split(X, Y, test_size = split_test_size, random_state = 42)
# Printing the results
print("{0:0.2f}% in the training data".format((len(X_treino)/len(df.index)) * 100))
print("{0:0.2f}% in the test data".format((len(X_teste)/len(df.index)) * 100))
X_treino
"""
Explanation: Splitting
70% for training data and 30% for test data
End of explanation
"""
print("Original True : {0} ({1:0.2f}%)".format(len(df.loc[df['diabetes'] == 1]),
(len(df.loc[df['diabetes'] ==1])/len(df.index) * 100)))
print("Original False : {0} ({1:0.2f}%)".format(len(df.loc[df['diabetes'] == 0]),
(len(df.loc[df['diabetes'] == 0])/len(df.index) * 100)))
print("")
print("Training True : {0} ({1:0.2f}%)".format(len(Y_treino[Y_treino[:] == 1]),
(len(Y_treino[Y_treino[:] == 1])/len(Y_treino) * 100)))
print("Training False : {0} ({1:0.2f}%)".format(len(Y_treino[Y_treino[:] == 0]),
(len(Y_treino[Y_treino[:] == 0])/len(Y_treino) * 100)))
print("")
print("Test True : {0} ({1:0.2f}%)".format(len(Y_teste[Y_teste[:] == 1]),
(len(Y_teste[Y_teste[:] == 1])/len(Y_teste) * 100)))
print("Test False : {0} ({1:0.2f}%)".format(len(Y_teste[Y_teste[:] == 0]),
(len(Y_teste[Y_teste[:] == 0])/len(Y_teste) * 100)))
"""
Explanation: Checking the Split
End of explanation
"""
# Checking whether there are any null values
df.isnull().values.any()
df.head(5)
print("# Rows in the dataframe {0}".format(len(df)))
print("# Rows missing glucose_conc: {0}".format(len(df.loc[df['glucose_conc'] == 0])))
print("# Rows missing diastolic_bp: {0}".format(len(df.loc[df['diastolic_bp'] == 0])))
print("# Rows missing thickness: {0}".format(len(df.loc[df['thickness'] == 0])))
print("# Rows missing insulin: {0}".format(len(df.loc[df['insulin'] == 0])))
print("# Rows missing bmi: {0}".format(len(df.loc[df['bmi'] == 0])))
print("# Rows missing age: {0}".format(len(df.loc[df['age'] == 0])))
"""
Explanation: Hidden Missing Values
End of explanation
"""
from sklearn.impute import SimpleImputer
# Creating the object
preenche_0 = SimpleImputer(missing_values = 0, strategy = "mean")
# Replacing values equal to zero with the mean of the data
X_treino = preenche_0.fit_transform(X_treino)
X_teste = preenche_0.fit_transform(X_teste)
X_treino
"""
Explanation: Handling Missing Data - Impute
Replacing values equal to zero with the mean of the data
End of explanation
"""
# Using a Naive Bayes classifier
from sklearn.naive_bayes import GaussianNB
# Creating the predictive model
modelo_v1 = GaussianNB()
# Training the model
modelo_v1.fit(X_treino, Y_treino.ravel())
"""
Explanation: 50 to 80% of a data scientist's working time is spent on data preparation.
Building and training the model
End of explanation
"""
from sklearn import metrics
nb_predict_train = modelo_v1.predict(X_treino)
print("Exatidão (Accuracy): {0:.4f}".format(metrics.accuracy_score(Y_treino, nb_predict_train)))
print()
"""
Explanation: Checking the model's accuracy on the training data
End of explanation
"""
nb_predict_test = modelo_v1.predict(X_teste)
print("Exatidão (Accuracy): {0:.4f}".format(metrics.accuracy_score(Y_teste, nb_predict_test)))
print()
"""
Explanation: Checking the model's accuracy on the test data
End of explanation
"""
from IPython.display import Image
Image('ConfusionMatrix.jpg')
# Creating a confusion matrix
print("Confusion Matrix")
print("{0}".format(metrics.confusion_matrix(Y_teste, nb_predict_test, labels = [1, 0])))
print("")
print("Classification Report")
print(metrics.classification_report(Y_teste, nb_predict_test, labels = [1, 0]))
"""
Explanation: Metrics
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
modelo_v2 = RandomForestClassifier(random_state = 42)
modelo_v2.fit(X_treino, Y_treino.ravel())
# Checking on the training data
rf_predict_train = modelo_v2.predict(X_treino)
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(Y_treino, rf_predict_train)))
# Checking on the test data
rf_predict_test = modelo_v2.predict(X_teste)
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(Y_teste, rf_predict_test)))
print()
print("Confusion Matrix")
print("{0}".format(metrics.confusion_matrix(Y_teste, rf_predict_test, labels = [1, 0])))
print("")
print("Classification Report")
print(metrics.classification_report(Y_teste, rf_predict_test, labels = [1, 0]))
"""
Explanation: Optimizing the Model with RandomForest
End of explanation
"""
from sklearn.linear_model import LogisticRegression
# Third version of the model, using logistic regression
modelo_v3 = LogisticRegression(C = 0.7, random_state = 42, max_iter = 1000)
modelo_v3.fit(X_treino, Y_treino.ravel())
lr_predict_test = modelo_v3.predict(X_teste)
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(Y_teste, lr_predict_test)))
print()
print("Classification Report")
print(metrics.classification_report(Y_teste, lr_predict_test, labels = [1, 0]))
### Summary
## Accuracy on the test data
# Model using the Naive Bayes algorithm          = 0.7359
# Model using the Random Forest algorithm        = 0.7400
# Model using the Logistic Regression algorithm  = 0.7446
"""
Explanation: Logistic Regression
End of explanation
"""
import pickle
# Saving the model for later use
filename = 'modelo_treinado_v3.sav'
pickle.dump(modelo_v3, open(filename, 'wb'))
X_teste
# Loading the model and making predictions with new datasets
# (X_teste and Y_teste should be new datasets prepared with the appropriate cleaning and transformation procedure)
loaded_model = pickle.load(open(filename, 'rb'))
resultado1 = loaded_model.predict(X_teste[15].reshape(1, -1))
resultado2 = loaded_model.predict(X_teste[18].reshape(1, -1))
print(resultado1)
print(resultado2)
"""
Explanation: Making Predictions With the Trained Model
End of explanation
"""
|
elektrobohemian/courses | InformationRetrieval.ipynb | mit | # This cell has to be run to prepare the Jupyter notebook
# The %... is an Jupyter thing, and is not part of the Python language.
# In this case we're just telling the plotting library to draw things on
# the notebook, instead of on a separate window.
%matplotlib inline
# See all the "as ..." contructs? They're just aliasing the package names.
# That way we can call methods like plt.plot() instead of matplotlib.pyplot.plot().
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
from time import time
from math import log
# import the k-means algorithm
from sklearn.cluster import KMeans
import seaborn as sns
sns.set_style("whitegrid")
sns.set_context("poster")
"""
Explanation: The Vector Space Model as the Foundation of Automated Methods in Information Retrieval and Text Mining
End of explanation
"""
# create sample data
d = {'doc01': (1,2), 'doc02': (10,10),'doc03':(60,40),'doc04':(100,80),'doc05':(99,81),'doc06':(1,1),'doc07':(45,55),'doc08':(9,10),'doc09':(11,11),'doc10':(1,11)}
# create a data frame from our sample data
sampleDF=pd.DataFrame(data=d)
# transpose the data frame for later use
sampleDF=sampleDF.transpose()
sampleDF.columns = ['term 2 (z.B. Retrieval)', 'term 1 (z.B. Information)']
sampleDF.head(10)
# plot the sample data
sampleDF.plot(x=0,y=1,kind='scatter',alpha=0.75,s=70) # we have to define explicitly which data is used for the x and y axes
"""
Explanation: Agenda
Differences from the Boolean retrieval model
A user-centered derivation of the vector space model
Effects of natural language on the model
Advantages and disadvantages of the vector space model
Practical applications in text and data mining
Boolean Retrieval
Relevance is determined by the presence of the query term in the document
Score = 1 for relevant documents
Score = 0 for non-relevant documents
No ranking is possible; a document is either relevant or not
Extended Boolean retrieval introduces ranking
Also used in relational databases; in libraries: OPAC
Advantages
Simple queries are easy to understand
Disadvantages
Difficult to specify precise queries
Logical connectives (AND, OR) do not always correspond to natural language
Result sets quickly become too small or too large, depending on the connectives
Results can only be sorted or grouped
But is the user always that precise?
Users are unclear...
about their information need
how the information need can be formulated
what the collection contains
what "good" results are, i.e.,
whether relevant results were found
whether all important results were found
Documents that are "similar" to a query are probably relevant to that query
[Kuhlthau 1991, Ingwersen 1996, Ellis/Haugan 1997, Reiterer et al. 2000] and many more...
How can similarity be expressed?
From Document to Vector
Documents and the query are represented by t-dimensional vectors
t is the number of terms in the index vocabulary
The i-th vector component is the term weight $w_{dk}$ (a real number)
A high term weight ⇒ the corresponding term represents the document/query well
[Salton et al. 1975]
End of explanation
"""
vq=[0,1,1,0]
vd1=[50,5,0,0]
vd2=[0,2,2,0]
print "similarity between Vq and Vd1: %i"%np.inner(vq,vd1) # inner product is another name for the scalar/dot product
print "similarity between Vq and Vd2: %i"%np.inner(vq,vd2)
"""
Explanation: The cluster hypothesis
“This hypothesis may be simply stated as follows: closely associated documents tend to be relevant to the same requests.”
[van Rijsbergen 1979, Ch. 3]
The scalar product as a similarity measure
$$
similarity(V_Q,V_D)=\langle V_Q,V_D\rangle=\sum_{k=1}^t w_{qk}w_{dk}=|V_Q||V_D|\cos(\alpha)
$$
Term frequency within the document
The higher the term frequency $tf_{dk}$ of a term in a document, the better the term describes the document
Caution: absolute term frequencies tend to be higher in long documents
Consequence: long documents are favoured during retrieval
End of explanation
"""
# with stopword in first dimension
vq=[100,0,0,1,2]
b=[99,1,2,0,0]
c=[80,0,0,2,2]
print "Before stopword elimination"
print "\tsimilarity between Vq and b: %i"%np.inner(vq,b)
print "\tsimilarity between Vq and c: %i"%np.inner(a,c)
# after stopword elimination
vq2=[0,0,1,2]
b2=[1,2,0,0]
c2=[0,0,2,2]
print "\nAfter stopword elimination"
print "\tsimilarity between Vq and b: %i"%np.inner(vq2,b2)
print "\tsimilarity between Vq and c: %i"%np.inner(a2,c2)
"""
Explanation: Term frequency within the document
Independence from document length is achieved by normalising the document vectors:
$$
w_{dk}=\frac{tf_{dk}}{\sqrt{\sum_{i=1}^t tf_{di}^2}}
$$
Zipf's Law
If the words of a text are ranked by frequency, their probability of occurrence p(n) is inversely proportional to their rank n in that ordering ⇒ a hyperbola
$$
p(n)\simeq \frac{1}{n}
$$
Zipf's law as a motivation for stopwords
Stopwords: words that are removed from the index vocabulary because they...
are frequent (or too rare)
carry "no meaning"
How do stopwords etc. affect the document vectors?
Stopword elimination reduces the possibilities for phrase search
End of explanation
"""
N=1000.0
ni=[]
quotient=[]
quotientLog=[]
for i in range(1,500):
    ni.append(i)
for i in ni:
    quotient.append(N/i)
    quotientLog.append(np.log(N/i))
plt.plot(ni,quotient,label="N/ni")
plt.plot(ni,quotientLog,label="log(N/ni)")
plt.axis([0, 500, 0, 100])
plt.ylabel('Result (limited to 100)')
plt.xlabel("ni")
plt.title("Sample results of idf, N=1000")
plt.legend()
"""
Explanation: Discriminative power of a term
If a term occurs in fewer documents, it is more specific
More specific terms have higher discriminative power and should be weighted higher
Approach: discriminative power as the quotient $N/n_i$
$$
w_{dk}=\frac{tf_{dk}\frac{N}{n_k}}{\sqrt{\sum_{i=1}^t \left(tf_{di}\frac{N}{n_i}\right)^2}}
$$
tf stands for term frequency
idf stands for inverse document frequency
The tf·idf formula is a heuristic and is widely used in many variants
It is also used in Lucene, and therefore in Solr and Elasticsearch
End of explanation
"""
# define the number of clusters to be found
true_k=3
# initialize the k-means algorithm
km = KMeans(n_clusters=true_k, init='k-means++', max_iter=100, n_init=1)
# apply the algorithm on the data
km.fit(sampleDF)
# add the detected clusters as a new column to the original data frame
sampleDF['cluster']=km.labels_
sampleDF=sampleDF.sort_values('cluster')
sampleDF.head(10)
clusterCenters=pd.DataFrame(data=km.cluster_centers_)
clusterCenters.head()
# plot the sample data and save the plot in the variable "ax"
ax=sampleDF.plot(x=0,y=1,kind='scatter',alpha=0.75,s=70)
# plot the centroids in red
plt.scatter(x=clusterCenters[0],y=clusterCenters[1],color='red')
# next, define the circles' centers surrounding the clusters for a better visualization result
circlePos1=(clusterCenters[0][0],clusterCenters[1][0])
circlePos2=(clusterCenters[0][1],clusterCenters[1][1])
circlePos3=(clusterCenters[0][2],clusterCenters[1][2])
# create the unfilled circles with a radius of 20 (this value is arbitrary)
circ1=plt.Circle(circlePos1,20,color='r',fill=False)
circ2=plt.Circle(circlePos2,20,color='r',fill=False)
circ3=plt.Circle(circlePos3,20,color='r',fill=False)
# add the circles to your plot
ax.add_patch(circ1)
ax.add_patch(circ2)
ax.add_patch(circ3)
"""
Explanation: Evaluation of the vector space model
Disadvantages:
Assumes that terms are independent
Word order plays no role (bag of words)
The model lacks a theoretical foundation
Hard to adapt to structured documents
Advantages
Simplicity
Very good empirical retrieval quality
Efficient implementation via inverted lists
Relevance feedback improves result quality
Directly usable for text mining
The principles also carry over to multimedia data
Outlook on applications in text mining
Clustering
"""
|
metpy/MetPy | v0.9/_downloads/3aec65fc693ccd0216a40e663bc10ddb/Hodograph_Inset.ipynb | bsd-3-clause | import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
import numpy as np
import pandas as pd
import metpy.calc as mpcalc
from metpy.cbook import get_test_data
from metpy.plots import add_metpy_logo, Hodograph, SkewT
from metpy.units import units
"""
Explanation: Hodograph Inset
Layout a Skew-T plot with a hodograph inset into the plot.
End of explanation
"""
col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']
df = pd.read_fwf(get_test_data('may4_sounding.txt', as_file_obj=False),
skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)
df['u_wind'], df['v_wind'] = mpcalc.wind_components(df['speed'],
np.deg2rad(df['direction']))
# Drop any rows with all NaN values for T, Td, winds
df = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed',
'u_wind', 'v_wind'), how='all').reset_index(drop=True)
"""
Explanation: Upper air data can be obtained using the siphon package, but for this example we will use
some of MetPy's sample data.
End of explanation
"""
p = df['pressure'].values * units.hPa
T = df['temperature'].values * units.degC
Td = df['dewpoint'].values * units.degC
wind_speed = df['speed'].values * units.knots
wind_dir = df['direction'].values * units.degrees
u, v = mpcalc.wind_components(wind_speed, wind_dir)
# Create a new figure. The dimensions here give a good aspect ratio
fig = plt.figure(figsize=(9, 9))
add_metpy_logo(fig, 115, 100)
# Grid for plots
skew = SkewT(fig, rotation=45)
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
skew.plot_barbs(p, u, v)
skew.ax.set_ylim(1000, 100)
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
# Good bounds for aspect ratio
skew.ax.set_xlim(-50, 60)
# Create a hodograph
ax_hod = inset_axes(skew.ax, '40%', '40%', loc=1)
h = Hodograph(ax_hod, component_range=80.)
h.add_grid(increment=20)
h.plot_colormapped(u, v, np.hypot(u, v))
# Show the plot
plt.show()
"""
Explanation: We will pull the data out of the example dataset into individual variables and
assign units.
End of explanation
"""
|
Chipe1/aima-python | notebooks/chapter22/Parsing.ipynb | mit | psource(Chart)
"""
Explanation: Parsing
Overview
Syntactic analysis (or parsing) of a sentence is the process of uncovering the phrase structure of the sentence according to the rules of grammar.
There are two main approaches to parsing. Top-down, start with the starting symbol and build a parse tree with the given words as its leaves, and bottom-up, where we start from the given words and build a tree that has the starting symbol as its root. Both approaches involve "guessing" ahead, so it may take longer to parse a sentence (the wrong guess mean a lot of backtracking). Thankfully, a lot of effort is spent in analyzing already analyzed substrings, so we can follow a dynamic programming approach to store and reuse these parses instead of recomputing them.
In dynamic programming, we use a data structure known as a chart, thus the algorithms parsing a chart is called chart parsing. We will cover several different chart parsing algorithms.
Chart Parsing
Overview
The chart parsing algorithm is a general form of the following algorithms. Given a non-probabilistic grammar and a sentence, this algorithm builds a parse tree in a top-down manner, with the words of the sentence as the leaves. It works with a dynamic programming approach, building a chart to store parses for substrings so that it doesn't have to analyze them again (just like the CYK algorithm). Each non-terminal, starting from S, gets replaced by its right-hand side rules in the chart until we end up with the correct parses.
Implementation
A parse is in the form [start, end, non-terminal, sub-tree, expected-transformation], where sub-tree is a tree with the corresponding non-terminal as its root and expected-transformation is a right-hand side rule of the non-terminal.
The chart parsing is implemented in a class, Chart. It is initialized with grammar and can return the list of all the parses of a sentence with the parses function.
The chart is a list of lists. The lists correspond to the lengths of substrings (including the empty string), from start to finish. When we say 'a point in the chart', we refer to a list of a certain length.
A quick rundown of the class functions:
parses: Returns a list of parses for a given sentence. If the sentence can't be parsed, it will return an empty list. Initializes the process by calling parse from the starting symbol.
parse: Parses the list of words and builds the chart.
add_edge: Adds another edge to the chart at a given point. Also, examines whether the edge extends or predicts another edge. If the edge itself is not expecting a transformation, it will extend other edges and it will predict edges otherwise.
scanner: Given a word and a point in the chart, it extends edges that were expecting a transformation that can result in the given word. For example, if the word 'the' is an 'Article' and we are examining two edges at a chart's point, with one expecting an 'Article' and the other a 'Verb', the first one will be extended while the second one will not.
predictor: If an edge can't extend other edges (because it is expecting a transformation itself), we will add to the chart rules/transformations that can help extend the edge. The new edges come from the right-hand side of the expected transformation's rules. For example, if an edge is expecting the transformation 'Adjective Noun', we will add to the chart an edge for each right-hand side rule of the non-terminal 'Adjective'.
extender: Extends edges given an edge (called E). If E's non-terminal is the same as the expected transformation of another edge (let's call it A), add to the chart a new edge with the non-terminal of A and the transformations of A minus the non-terminal that matched with E's non-terminal. For example, if an edge E has 'Article' as its non-terminal and is expecting no transformation, we need to see what edges it can extend. Let's examine the edge N. This expects a transformation of 'Noun Verb'. 'Noun' does not match with 'Article', so we move on. Another edge, A, expects a transformation of 'Article Noun' and has a non-terminal of 'NP'. We have a match! A new edge will be added with 'NP' as its non-terminal (the non-terminal of A) and 'Noun' as the expected transformation (the rest of the expected transformation of A).
You can view the source code by running the cell below:
End of explanation
"""
chart = Chart(E0)
"""
Explanation: Example
We will use the grammar E0 to parse the sentence "the stench is in 2 2".
First, we need to build a Chart object:
End of explanation
"""
print(chart.parses('the stench is in 2 2'))
"""
Explanation: And then we simply call the parses function:
End of explanation
"""
chart_trace = Chart(E0, trace=True)
chart_trace.parses('the stench is in 2 2')
"""
Explanation: You can see which edges get added by setting the optional initialization argument trace to true.
End of explanation
"""
print(chart.parses('the stench 2 2'))
"""
Explanation: Let's try and parse a sentence that is not recognized by the grammar:
End of explanation
"""
import os, sys
sys.path = [os.path.abspath("../../")] + sys.path
from nlp4e import *
from notebook4e import psource
psource(CYK_parse)
"""
Explanation: An empty list was returned.
CYK Parse
The CYK Parsing Algorithm (named after its inventors, Cocke, Younger, and Kasami) utilizes dynamic programming to parse sentences of grammar in Chomsky Normal Form.
The CYK algorithm returns an M x N x N array (named P), where N is the number of words in the sentence and M the number of non-terminal symbols in the grammar. Each element in this array shows the probability of a substring being transformed from a particular non-terminal. To find the most probable parse of the sentence, a search in the resulting array is required. Search heuristic algorithms work well in this space, and we can derive the heuristics from the properties of the grammar.
In short, the algorithm works like this: an outer loop determines the length of the substring. Then the algorithm loops through the words in the sentence. For each word, it again loops through all the words to its right up to the first-loop length. The substring it works on in this iteration consists of the words starting at the second-loop word and spanning the first-loop length. Finally, it loops through all the rules in the grammar and updates the substring's probability for each right-hand-side non-terminal.
Implementation
The implementation takes as input a list of words and a probabilistic grammar (from the ProbGrammar class detailed above) in CNF and returns the table/dictionary P. An item's key in P is a tuple in the form (Non-terminal, the start of a substring, length of substring), and the value is a Tree object. The Tree data structure has two attributes: root and leaves. root stores the value of current tree node and leaves is a list of children nodes which may be terminal states(words in the sentence) or a sub tree.
For example, for the sentence "the monkey is dancing" and the substring "the monkey" an item can be ('NP', 0, 2): <Tree object>, which means the first two words (the substring from index 0 and length 2) can be parse to a NP and the detailed operations are recorded by a Tree object.
Before we continue, you can take a look at the source code by running the cell below:
End of explanation
"""
E_Prob_Chomsky = ProbGrammar("E_Prob_Chomsky", # A Probabilistic Grammar in CNF
ProbRules(
S = "NP VP [1]",
NP = "Article Noun [0.6] | Adjective Noun [0.4]",
VP = "Verb NP [0.5] | Verb Adjective [0.5]",
),
ProbLexicon(
Article = "the [0.5] | a [0.25] | an [0.25]",
Noun = "robot [0.4] | sheep [0.4] | fence [0.2]",
Adjective = "good [0.5] | new [0.2] | sad [0.3]",
Verb = "is [0.5] | say [0.3] | are [0.2]"
))
"""
Explanation: When updating the probability of a substring, we pick the max of its current one and the probability of the substring broken into two parts: one from the second-loop word with third-loop length, and the other from the first part's end to the remainder of the first-loop length.
Example
Let's build a probabilistic grammar in CNF:
End of explanation
"""
words = ['the', 'robot', 'is', 'good']
grammar = E_Prob_Chomsky
P = CYK_parse(words, grammar)
print(P)
"""
Explanation: Now let's see the probabilities table for the sentence "the robot is good":
End of explanation
"""
parses = {k: p.leaves for k, p in P.items()}
print(parses)
"""
Explanation: A defaultdict object is returned (defaultdict is basically a dictionary but with a default value/type). Keys are tuples in the form mentioned above, and the values are the corresponding parse trees, which demonstrate how the sentence is parsed. Let's check the details of each parse:
End of explanation
"""
for subtree in P['VP', 2, 3].leaves:
    print(subtree.leaves)
"""
Explanation: Please note that each item in the returned dict represents a parsing strategy. For instance, ('Article', 0, 0): ['the'] means parsing the word 'the' at position 0 as an article. The key ('VP', 2, 3) is mapped to another Tree, which means this is a nested parsing step. If we print this item in detail:
End of explanation
"""
psource(astar_search_parsing)
"""
Explanation: So we can interpret this step as parsing the words at indices 2 and 3 ('is' and 'good') together as a verb phrase.
A-star Parsing
The CYK algorithm uses $O(n^2m)$ space for the P and T tables, where n is the number of words in the sentence and m is the number of nonterminal symbols in the grammar, and takes $O(n^3m)$ time. This is the best algorithm if we want to find the best parse and works for all possible context-free grammars. But actually, we only want to parse natural languages, not all possible grammars, which allows us to apply more efficient algorithms.
By applying A-star search, we treat parsing as state-space search and can get $O(n)$ running time. In this formulation, each state is a list of items (words or categories), the start state is the list of words, and the goal state is the single item S.
In our code, astar_search_parsing demonstrates this approach. By specifying different words and grammars, we can apply this search strategy to different text-parsing problems. The algorithm indicates whether the input words form a sentence under the given grammar.
For detailed implementation, please execute the following block:
End of explanation
"""
grammar = E0
words = ['the', 'wumpus', 'is', 'dead']
astar_search_parsing(words, grammar)
"""
Explanation: Example
Now let's try the "the wumpus is dead" example. First we need to define the grammar and the words in the sentence.
End of explanation
"""
words_swaped = ["the", "is", "wumpus", "dead"]
astar_search_parsing(words_swaped, grammar)
"""
Explanation: The algorithm returns 'S', which means it accepts the input as a sentence. If we change the order of the words to make the sentence unreadable:
End of explanation
"""
psource(beam_search_parsing)
"""
Explanation: Then the algorithm asserts that our words cannot form a sentence.
Beam Search Parsing
In the beam-search algorithm, we still treat text parsing as a state-space search problem. With beam search, we consider only the b most probable alternative parses at each step. This means we are not guaranteed to find the parse with the highest probability, but (with a careful implementation) the parser can operate in $O(n)$ time and still find the best parse most of the time. A beam-search parser with b = 1 is called a deterministic parser.
Implementation
In beam search, we maintain a frontier, a priority queue that keeps track of the current frontier of the search. In each step, we expand all the states in the frontier and keep the best b successors as the frontier for the next step.
For detailed implementation, please view the following code:
End of explanation
"""
beam_search_parsing(words, grammar)
beam_search_parsing(words_swaped, grammar)
"""
Explanation: Example
Let's try both the positive and negative wumpus examples on this algorithm:
End of explanation
"""
|
dkirkby/quantum-demo | jupyter/PerturbTheory.ipynb | mit | %pylab inline
"""
Explanation: Time-Independent Perturbation Theory
End of explanation
"""
def plot_1d_unperturbed(a=1, nmax=3, ngrid=100):
    x = np.linspace(0, a, ngrid)
    fig, ax = plt.subplots(1, 2, figsize=(12, 5))
    for n in range(1, nmax + 1):
        color = 'krgb'[n - 1]
        psi = np.sqrt(2 / a) * np.sin(n * np.pi * x / a)
        ax[0].plot(x, psi ** 2, c=color, label=f'n={n}')
        ax[1].plot([0, a], [n ** 2, n ** 2], c=color, label=f'n={n}')
    ax[0].set_yticks([])
    ax[0].legend()
    ax[0].set_xlabel('Position x')
    ax[0].set_ylabel('Stationary State $|\psi_n(x)|^2$')
    ax[1].set_ylim(0, None)
    ax[1].set_xlabel('Position x')
    ax[1].set_ylabel('Energy $E_n / E_1$')

plot_1d_unperturbed()
"""
Explanation: Infinite 1D Square Well
Consider the potential:
$$
V(x) = \begin{cases}
0 & 0 \le x \le a \
\infty & \text{otherwise}
\end{cases}
$$
with stationary states ($n = 1, 2, 3, \ldots$):
$$
\psi_n(x) = \sqrt{\frac{2}{a}}\, \sin\left( \frac{n\pi}{a} x\right)
$$
and corresponding energies:
$$
E_n = n^2 E_1 \; .
$$
Plot the lowest-energy stationary states:
End of explanation
"""
def get_matrix(u, v, nmax=10, a=1):
    n = np.arange(1, nmax + 1)
    npi = n * np.pi / a
    Hp = np.diag((2 * (v - u) * npi + np.sin(2 * u * npi) - np.sin(2 * v * npi)) / (2 * a * npi))
    for k in n:
        kpi = k * np.pi / a
        num = (
            n * (np.cos(npi * v) * np.sin(kpi * v) - np.cos(npi * u) * np.sin(kpi * u)) +
            k * (np.cos(kpi * u) * np.sin(npi * u) - np.cos(kpi * v) * np.sin(npi * v)))
        Hp[k - 1, k != n] = num[k != n] / (np.pi * (k ** 2 - n ** 2)[k != n])
    return n, Hp
def plot_matrix(u, v, nmax=20, a=1):
    n, Hp = get_matrix(u, v, nmax, a)
    fig, ax = plt.subplots(1, 2, figsize=(13.5, 5))
    vlim = np.max(np.abs(Hp))
    I = ax[0].imshow(Hp, interpolation='none', vmax=+vlim, vmin=-vlim, cmap='bwr')
    ax[0].set_xticks([])
    ax[0].set_yticks([])
    plt.colorbar(I, ax=ax[0]).set_label('Matrix element $H\prime_{kn}$')
    ck = np.zeros_like(Hp)
    for k in n:
        ck[k - 1, k != n] = Hp[k - 1, k != n] / (k ** 2 - n ** 2)[k != n]
    vlim = max(1e-8, np.max(np.abs(ck)))
    I = ax[1].imshow(ck, interpolation='none', vmax=+vlim, vmin=-vlim, cmap='bwr')
    ax[1].set_xticks([])
    ax[1].set_yticks([])
    plt.colorbar(I, ax=ax[1]).set_label('$\psi_n^1$ coefficient $c_k^{(n)}$')

plot_matrix(0, 1)
plot_matrix(0, 0.5)
plot_matrix(0.5, 1)
plot_matrix(0.3, 0.5)
def solve_perturb(u, v, n, dV=1, kmax=100, a=1, nx=100):
    # Calculate the 0th order energy.
    En0 = n ** 2
    # Calculate 1st order correction to the energy
    npi = n * np.pi / a
    En1 = (2 * (v - u) * npi + np.sin(2 * u * npi) - np.sin(2 * v * npi)) * dV / (2 * a * npi)
    # Calculate 2nd order correction to the energy
    k = np.arange(1, kmax + 1)
    kpi = k * np.pi / a
    dotp = (n * (np.cos(npi * v) * np.sin(kpi * v) - np.cos(npi * u) * np.sin(kpi * u)) +
            k * (np.cos(kpi * u) * np.sin(npi * u) - np.cos(kpi * v) * np.sin(npi * v)))
    dotp[k != n] /= (np.pi * (k ** 2 - n ** 2)[k != n])
    Ek0 = k ** 2
    En2 = np.sum(dotp[k != n] ** 2 / (En0 - Ek0)[k != n]) * dV ** 2
    # Calculate coefficients of the 1st order correction to the stationary state.
    ck = dV * dotp
    ck[k != n] /= (En0 - Ek0)[k != n]
    # Build 0th and 1st order solutions.
    x = np.linspace(0, a, nx).reshape(-1, 1)
    psin0 = np.sqrt(2 / a) * np.sin(npi * x).reshape(-1)
    psin1 = np.sum(ck[k != n] * np.sqrt(2 / a) * np.sin(kpi[k != n] * x), axis=1)
    return En0, En1, En2, x.reshape(-1), psin0, psin1
def plot_perturb(u, v, n, dVmax=10):
    En0, En1, En2, x, psin0, psin1 = solve_perturb(u, v, n)
    ngrid = 6
    dVgrid = np.linspace(0, dVmax, ngrid)
    fig, ax = plt.subplots(1, 2, figsize=(13.5, 5))
    sel = (x >= u) & (x < v)
    colors = np.zeros((len(dVgrid), 4))
    colors[:, 3] = np.linspace(1, 0.5, ngrid)
    colors[:, 0] = np.linspace(0, 1, ngrid) ** 2
    for dV, color in zip(dVgrid, colors):
        psisq = (psin0 + dV * psin1) ** 2
        psisq /= np.trapz(psisq, x)
        if dV == 0:
            norm = dVmax / np.max(psisq)
        psisq *= norm
        ax[0].plot(x, psisq, c=color, lw=2, label=f'$\delta V = {dV}$')
        ax[0].plot(x[sel], dV + 0 * x[sel], c=color, lw=1)
    ax[0].set_xlabel('Position $x$')
    ax[0].set_ylabel(f'Perturbed $V(x)$, $|\psi_{n}^1(x)|^2$')
    ax[1].plot(dVgrid, En0 + 0 * dVgrid, 'k-', label='0th order')
    ax[1].plot(dVgrid, En0 + dVgrid * En1, 'k--', label='1st order')
    ax[1].plot(dVgrid, En0 + dVgrid * En1 + dVgrid ** 2 * En2, 'k:', label='2nd order')
    ax[1].scatter(dVgrid, En0 + dVgrid * En1 + dVgrid ** 2 * En2, lw=0, c=colors, s=50)
    ax[1].scatter(dVgrid, dVgrid, c='b', marker='*', s=50, label='$\delta V$')
    ax[1].legend()
    ax[1].set_xlabel('Perturbation Strength $\delta V$')
    ax[1].set_ylabel(f'Perturbed Energy $E_{n}^m$')

plot_perturb(0, 1, 1, 5)
plot_perturb(0, 0.5, 1, 5)
plot_perturb(0.5, 1, 1, 5)
plot_perturb(0, 0.5, 2, 15)
plot_perturb(0, 0.5, 3, 50)
plot_perturb(0.3, 0.5, 1, 5)
plot_perturb(0.3, 0.5, 2, 15)
plot_perturb(0.3, 0.5, 3, 50)
"""
Explanation: Calculate corrections when the potential is perturbed with the addition of:
$$
H' =
\begin{cases}
\delta V & u \le x \le v \
0 & \text{otherwise}
\end{cases}
$$
The first-order correction to the energy is:
$$
E_n^1 = \langle \psi_n^0\mid H'\mid \psi_n^0\rangle
= \delta V \int_u^v \left| \psi_n(x)\right|^2 dx
$$
with
$$
\int_u^v \left| \psi_n(x)\right|^2 dx = \frac{1}{2 a n\pi} \left[
2(v - u) n \pi + a \sin(2 u n\pi/a) - a \sin(2 v n\pi/a)\right] \; .
$$
The corresponding first-order correction to the stationary state is:
$$
\psi_n^1 = \sum_{k\ne n} c_k^{(n)} \psi_k^0
$$
with
$$
c_k^{(n)} = \frac{\langle \psi_k^0\mid H'\mid \psi_n^0\rangle}{E_n^0 - E_k^0} = \frac{\delta V}{E_n^0 - E_k^0}
\int_u^v \psi_k(x)^\ast \psi_n(x) dx
$$
and
$$
\int_u^v \psi_k(x)^\ast \psi_n(x) dx = \frac{1}{(k^2 - n^2)\pi}\bigl[
n \left(\cos n\pi v/a \sin k\pi v/a - \cos n\pi u/a \sin k\pi u/a\right) +
k \left(\cos k\pi u/a \sin n\pi u/a - \cos k\pi v/a \sin n\pi v/a\right)\bigr] \; .
$$
The second-order correction to the energy is:
$$
E_n^2 = \sum_{k\ne n} \frac{\left|\langle \psi_k^0\mid H'\mid \psi_n^0\rangle\right|^2}{E_n^0 - E_k^0}
= \delta V^2 \sum_{k\ne n}\frac{\left[\int_u^v \psi_k(x)^\ast \psi_n(x) dx\right]^2}{E_n^0 - E_k^0} \; .
$$
Note that when $u=0$ and $v=a$, these simplify to:
$$
E_n^1 = \delta V \quad , \quad
c_k^{(n)} = 0 \quad , \quad
E_n^2 = 0 \; .
$$
End of explanation
"""
|
stijnvanhoey/course_gis_scripting | _solved/02-scientific-python-introduction.ipynb | bsd-3-clause | import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn-white')
"""
Explanation: <p><font size="6"><b>Scientific Python essentials</b></font></p>
Introduction to GIS scripting
May, 2017
© 2017, Stijn Van Hoey (stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
Introduction
There is a large variety of packages available in Python to support research. Importing a package is like getting a piece of lab equipment out of a storage locker and setting it up on the bench for use in a project. Once a library is set up (imported), it can be used or called to perform many tasks.
In this notebook, we will focus on two fundamental packages within most scientific applications:
Numpy
Pandas
Furthermore, if plotting is required, this will be done with the matplotlib package (we only use plot and imshow in this tutorial):
End of explanation
"""
import numpy as np
# np. # explore the namespace
"""
Explanation: Numpy
Introduction
NumPy is the fundamental package for scientific computing with Python.
Information for the freaks:
a powerful N-dimensional array/vector/matrix object
sophisticated (broadcasting) functions
function implementation in C/Fortran assuring good performance if vectorized
tools for integrating C/C++ and Fortran code
useful linear algebra, Fourier transform, and random number capabilities
In short: Numpy is the Python package to do fast calculations!
It is a community agreement to import the numpy package with the prefix np to identify the usage of numpy functions. Use the CTRL + SHIFT option to check the available functions of numpy:
End of explanation
"""
np.lookfor("quantile")
"""
Explanation: Numpy provides many mathematical functions, which operate element-wise on a so-called numpy.ndarray data type (in short: array).
<div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li> There is a lot of functionality in Numpy. Knowing **how to find a specific function** is more important than knowing all functions...
</ul>
</div>
You were looking for some function to derive quantiles of an array...
End of explanation
"""
#?np.percentile
# help(np.percentile)
# use SHIFT + TAB
"""
Explanation: Different methods do read the manual:
End of explanation
"""
throws = 1000 # number of rolls of the dice
stone1 = np.random.randint(1, 7, throws) # outcomes of the rolls of die 1 (integers 1-6)
stone2 = np.random.randint(1, 7, throws) # outcomes of the rolls of die 2 (integers 1-6)
total = stone1 + stone2 # sum of each pair of outcomes
histogram = plt.hist(total, bins=11) # plot as histogram (11 possible totals: 2-12)
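The simulated histogram can be checked against the exact distribution, obtained by counting the 36 equally likely ordered outcomes of the two dice (a quick sketch):

```python
from collections import Counter

# count all 36 equally likely ordered outcomes of two dice
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
probabilities = {total: c / 36 for total, c in sorted(counts.items())}
# 7 is the most likely total, with probability 6/36
```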
"""
Explanation: Showcases
You like to play board games, but you want to know your chances of rolling a certain combination (sum) with 2 dice:
End of explanation
"""
# random numbers (X, Y in 2 columns)
Z = np.random.random((10,2))
X, Y = Z[:,0], Z[:,1]
# Distance
R = np.sqrt(X**2 + Y**2)
# Angle
T = np.arctan2(Y, X) # Array of angles in radians
Tdegree = T*180/(np.pi) # If you like degrees more
# NEXT PART (purely for illustration)
# plot the cartesian coordinates
plt.figure(figsize=(14, 6))
ax1 = plt.subplot(121)
ax1.plot(Z[:,0], Z[:,1], 'o')
ax1.set_title("Cartesian")
# plot the polar coordinates
ax2 = plt.subplot(122, polar=True)
ax2.plot(T, R, 'o')
ax2.set_title("Polar")
"""
Explanation: Consider a random 10x2 matrix representing Cartesian coordinates (between 0 and 1); how do we convert them to polar coordinates?
End of explanation
"""
nete_bodem = np.load("../data/nete_bodem.npy")
plt.imshow(nete_bodem)
plt.colorbar(shrink=0.6)
nete_bodem_rescaled = (nete_bodem - nete_bodem.min())/(nete_bodem.max() - nete_bodem.min()) # rescale
nete_bodem_rescaled[nete_bodem_rescaled == 0.0] = np.nan # assign Nan values to zero values
plt.imshow(nete_bodem_rescaled)
plt.colorbar(shrink=0.6)
"""
Explanation: Rescale the values of a given array to values in the range [0-1] and mark zero values are Nan:
End of explanation
"""
np.array([1, 1.5, 2, 2.5]) #np.array(anylist)
"""
Explanation: (Remark: There is no GIS-component in the previous manipulation, these are pure element-wise operations on an array!)
Creating numpy array
End of explanation
"""
np.arange(5, 12, 2)
"""
Explanation: <div class="alert alert-warning">
<b>R comparison:</b><br>
<p>One could compare the numpy array to the R vector. It contains a single data type (character, float, integer) and operations are element-wise.</p>
</div>
Provide a range of values, with a begin, end and stepsize:
End of explanation
"""
np.linspace(2, 13, 3)
"""
Explanation: Provide a range of values, with a begin, end and number of values in between:
End of explanation
"""
np.zeros((5, 2)), np.ones(5)
"""
Explanation: Create arrays filled with zeros or ones:
End of explanation
"""
np.zeros((5, 2)).shape, np.zeros((5, 2)).size
"""
Explanation: Request the shape or the size of the arrays:
End of explanation
"""
np.random.rand(5,5) # check with np.random. + TAB for sampling from other distributions!
"""
Explanation: And creating random numbers:
End of explanation
"""
nete_bodem = np.load("../data/nete_bodem.npy")
plt.imshow(nete_bodem)
"""
Explanation: Reading in from a binary file:
End of explanation
"""
nete_bodem_subset = np.loadtxt("../data/nete_bodem_subset.out")
plt.imshow(nete_bodem_subset)
"""
Explanation: Reading in from a text-file:
End of explanation
"""
my_array = np.random.randint(2, 10, 10)
my_array
my_array[:5], my_array[4:], my_array[-2:]
my_array[0:7:2]
sequence = np.arange(0, 11, 1)
sequence, sequence[::2], sequence[1::3],
"""
Explanation: Slicing (accessing values in arrays)
This is equivalent to the slicing of a list:
End of explanation
"""
my_array[:2] = 10
my_array
my_array = my_array.reshape(5, 2)
my_array
"""
Explanation: Assign new values to items
End of explanation
"""
my_array[0, :]
"""
Explanation: With multiple dimensions, we get the option to slice along each of these dimensions:
End of explanation
"""
my_array = np.random.randint(2, 10, 10)
my_array
print('Mean value is', np.mean(my_array))
print('Median value is', np.median(my_array))
print('Std is', np.std(my_array))
print('Variance is', np.var(my_array))
print('Min is', my_array.min())
print('Element of minimum value is', my_array.argmin())
print('Max is', my_array.max())
print('Sum is', np.sum(my_array))
print('Prod', np.prod(my_array))
print('Unique values in this array are:', np.unique(my_array))
print('85% Percentile value is: ', np.percentile(my_array, 85))
my_other_array = np.random.randint(2, 10, 10).reshape(2, 5)
my_other_array
"""
Explanation: Aggregation calculations
End of explanation
"""
my_other_array.max(), my_other_array.max(axis=1), my_other_array.max(axis=0)
"""
Explanation: use the argument axis to define the ax to calculate a specific statistic:
End of explanation
"""
my_array = np.random.randint(2, 10, 10)
my_array
print('Cumsum is', np.cumsum(my_array))
print('CumProd is', np.cumprod(my_array))
print('CumProd of 5 first elements is', np.cumprod(my_array)[4])
np.exp(my_array), np.sin(my_array)
my_array%3 # == 0
"""
Explanation: Element-wise operations
End of explanation
"""
np.cumsum(my_array) == my_array.cumsum()
my_array.dtype
"""
Explanation: Using the function available in the numpy library, or using the method of the object?
End of explanation
"""
my_array.cumsum()
my_array.max(axis=0)
my_array * my_array # element-wise
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Check the documentation of both `np.cumsum()` and `my_array.cumsum()`. What is the difference?</li>
<li>Why do we use brackets () to run `cumsum` and we do not use brackets when asking for the `dtype`?</li>
</ul>
</div>
<div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li> `np.cumsum` operates a <b>method/function</b> from the numpy library with input an array, e.g. `my_array`
<li> `my_array.cumsum()` is a <b>method/function</b> available to the object `my_array`
<li> `dtype` is an attribute/characteristic of the object `my_array`
</ul>
</div>
<div class="alert alert-danger">
<ul>
<li>It is all about calling a **method/function()** on an **object** to perform an action. The available methods are provided by the packages (or any function you write and import).
<li>Objects also have **attributes**, defining the characteristics of the object (these are not actions)
</ul>
</div>
End of explanation
"""
a_list = range(1000)
%timeit [i**2 for i in a_list]
an_array = np.arange(1000)
%timeit an_array**2
"""
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li> The operations do work on all elements of the array at the same time, you don't need a <strike>`for` loop</strike>
</ul>
</div>
What is the added value of the numpy implementation compared to 'basic' python?
End of explanation
"""
row_array = np.random.randint(1, 20, 10)
row_array
"""
Explanation: Boolean indexing and filtering (!)
This is a fancy term for making selections based on a condition!
Let's start with an array that contains random values:
End of explanation
"""
row_array > 5
boolean_mask = row_array > 5
boolean_mask
"""
Explanation: Conditions can be checked (element-wise):
End of explanation
"""
row_array[boolean_mask]
"""
Explanation: You can use this as a filter to select elements of an array:
End of explanation
"""
row_array[boolean_mask] = 20
row_array
"""
Explanation: or, also to change the values in the array corresponding to these conditions:
End of explanation
"""
row_array[row_array == 20] = -20
row_array
"""
Explanation: in short - making the values equal to 20 now -20:
End of explanation
"""
AR = np.random.randint(0, 20, 15)
AR
"""
Explanation: <div class="alert alert-warning">
<b>R comparison:</b><br>
<p>This is similar to conditional filtering in R on vectors...</p>
</div>
<div class="alert alert-danger">
Understanding conditional selections and assignments is CRUCIAL!
</div>
This requires some practice...
End of explanation
"""
sum(AR > 10)
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Count the number of values in AR that are larger than 10 (note: you can count with True = 1 and False = 0)</li>
</ul>
</div>
End of explanation
"""
AR[AR%2 == 0] = 0
AR
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Change all even numbers of `AR` into zero-values.</li>
</ul>
</div>
End of explanation
"""
AR[1::2] = 30
AR
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Change all even positions of the array `AR` into the value 30</li>
</ul>
</div>
End of explanation
"""
AR2 = np.random.random(10)
AR2
np.sqrt(AR2[AR2 > np.percentile(AR2, 75)])
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Select all values above the 75th `percentile` of the following array AR2 and take the square root of these values</li>
</ul>
</div>
End of explanation
"""
AR3 = np.array([-99., 2., 3., 6., 8, -99., 7., 5., 6., -99.])
AR3[AR3 == -99.] = np.nan
AR3
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Convert all values -99. of the array AR3 into NaN values (note that NaN values can be stored in float arrays as `np.nan`)</li>
</ul>
</div>
End of explanation
"""
nete_bodem_subset = np.loadtxt("../data/nete_bodem_subset.out")
np.unique(nete_bodem_subset)
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Get an overview of the unique values present in the array `nete_bodem_subset`</li>
</ul>
</div>
End of explanation
"""
nete_bodem_subset = np.loadtxt("../data/nete_bodem_subset.out")
nete_bodem_subset[nete_bodem_subset <= 100000.] = 0
nete_bodem_subset[nete_bodem_subset > 100000.] = 1
plt.imshow(nete_bodem_subset)
plt.colorbar(shrink=0.8)
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Reclassify the values of the array `nete_bodem_subset` (binary filter):</li>
<ul>
<li>values lower than or equal to 100000 should be 0</li>
<li>values higher than 100000 should be 1</li>
</ul>
</ul>
</div>
End of explanation
"""
# community agreement: import as pd
import pandas as pd
"""
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li> No need to retain everything, but have the reflex to search in the documentation (online docs, SHIFT-TAB, help(), lookfor())!!
<li> Conditional selections (boolean indexing) is crucial!
</ul>
</div>
This is just touching the surface of Numpy in order to proceed to the next phase (Pandas and GeoPandas)...
More extended material on Numpy is available online:
http://www.scipy-lectures.org/intro/numpy/index.html (great resource to start with scientific Python!)
https://github.com/stijnvanhoey/course_python_introduction/blob/master/scientific/numpy.ipynb (more extended version of the material covered in this tutorial)
Pandas: data analysis in Python
Introduction
For data-intensive work in Python, the Pandas library has become essential. Pandas originally meant Panel Data, though many users probably don't know that.
What is pandas?
Pandas can be thought of as NumPy arrays with labels for rows and columns, and better support for heterogeneous data types, but it's also much, much more than that.
Pandas can also be thought of as R's data.frame in Python.
Powerful for working with missing data, working with time series data, for reading and writing your data, for reshaping, grouping, merging your data,...
Pandas documentation is available on: http://pandas.pydata.org/pandas-docs/stable/
End of explanation
"""
surveys_df = pd.read_csv("../data/surveys.csv")
surveys_df.head() # Try also tail()
surveys_df.shape
surveys_df.columns
surveys_df.info()
surveys_df.dtypes
surveys_df.describe()
"""
Explanation: Data exploration
Reading in data to DataFrame
End of explanation
"""
surveys_df["weight"].hist(bins=20)
"""
Explanation: <div class="alert alert-warning">
<b>R comparison:</b><br>
<p>See the similarities and differences with the R `data.frame` - e.g. you would use `summary(df)` instead of `df.describe()` :-)</p>
</div>
End of explanation
"""
a_series = pd.Series([0.1, 0.2, 0.3, 0.4])
a_series
a_series.index, a_series.values
"""
Explanation: Series and DataFrames
A Pandas Series is a basic holder for one-dimensional labeled data. It can be created much as a NumPy array is created:
End of explanation
"""
a_series.name = "example_series"
a_series
a_series[2]
"""
Explanation: Series do have an index and values (a numpy array!) and you can give the series a name (amongst other things)
End of explanation
"""
a_series2 = pd.Series(np.arange(4), index=['a', 'b', 'c', 'd'])
a_series2['c']
"""
Explanation: Unlike the NumPy array, though, this index can be something other than integers:
End of explanation
"""
surveys_df.head()
surveys_df["species_id"].head()
"""
Explanation: A DataFrame is a tabular data structure (multi-dimensional object to hold labeled data) comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. You can think of it as multiple Series objects which share the same index.
<img src="../img/schema-dataframe.svg" width=50%><br>
Note that in the IPython notebook, the dataframe will display in a rich HTML view:
End of explanation
"""
type(surveys_df), type(surveys_df["species_id"])
"""
Explanation: If you select a single column of a DataFrame, you end up with... a Series:
End of explanation
"""
print('Mean weight is', surveys_df["weight"].mean())
print('Median weight is', surveys_df["weight"].median())
print('Std of weight is', surveys_df["weight"].std())
print('Variance of weight is', surveys_df["weight"].var())
print('Min is', surveys_df["weight"].min())
print('Element of minimum value is', surveys_df["weight"].argmin())
print('Max is', surveys_df["weight"].max())
print('Sum is', surveys_df["weight"].sum())
print('85% Percentile value is: ', surveys_df["weight"].quantile(0.85))
"""
Explanation: Aggregation and element-wise calculations
Completely similar to Numpy, aggregation statistics are available:
End of explanation
"""
surveys_df['weight_normalised'] = surveys_df["weight"]/surveys_df["weight"].mean()
"""
Explanation: Calculations are element-wise, e.g. adding the normalized weight (relative to its mean) as an additional column:
End of explanation
"""
np.sqrt(surveys_df["hindfoot_length"]).head()
"""
Explanation: Pandas and Numpy collaborate well (Numpy methods can be applied on the DataFrame values, as these are actually numpy arrays):
End of explanation
"""
surveys_df.groupby('sex')[['hindfoot_length', 'weight']].mean() # Try yourself with min, max,...
"""
Explanation: Groupby provides the functionality to do an aggregation or calculation for each group:
End of explanation
"""
# example dataframe from scratch
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries = countries.set_index('country')
countries
"""
Explanation: <div class="alert alert-warning">
<b>R comparison:</b><br>
<p>Similar to groupby in R, i.e. working with factors</p>
</div>
Slicing
<div class="alert alert-info">
<b>ATTENTION!:</b><br><br>
One of pandas' basic features is the labeling of rows and columns, but this makes indexing also a bit more complex compared to numpy. <br><br>We now have to distinguish between:
<ul>
<li> selection by **label**: loc
<li> selection by **position**: iloc
</ul>
</div>
End of explanation
"""
countries['area'] # single []
countries[['area', 'population']] # double [[]]
countries['France':'Netherlands']
"""
Explanation: The shortcut []
End of explanation
"""
countries.loc['Germany', 'area']
countries.loc['France':'Germany', ['area', 'population']]
"""
Explanation: Systematic indexing with loc and iloc
When using [] like above, you can only select from one axis at once (rows or columns, not both). For more advanced indexing, you have some extra attributes:
loc: selection by label
iloc: selection by position
End of explanation
"""
countries.iloc[0:2,1:3]
"""
Explanation: Selecting by position with iloc works similar as indexing numpy arrays:
End of explanation
"""
countries['area'] > 100000
"""
Explanation: Boolean indexing
In short, similar to Numpy:
End of explanation
"""
countries[countries['area'] > 100000]
countries['size'] = np.nan # create an empty new column
countries
countries.loc[countries['area'] > 100000, "size"] = 'LARGE'
countries.loc[countries['area'] <= 100000, "size"] = 'SMALL'
countries
"""
Explanation: Selecting by conditions:
End of explanation
"""
species_df = pd.read_csv("../data/species.csv", delimiter=";")
"""
Explanation: Combining DataFrames (!)
An important way to combine DataFrames is to use columns in each dataset that contain common values (a common unique id) as is done in databases. Combining DataFrames using a common field is called joining. Joining DataFrames in this way is often useful when one DataFrame is a “lookup table” containing additional data that we want to include in the other.
As an example, consider the availability of the species information in a separate lookup-table:
End of explanation
"""
species_df.head()
surveys_df.head()
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Check the other `read_` functions that are available in the Pandas package yourself.
</ul>
</div>
End of explanation
"""
merged_left = pd.merge(surveys_df, species_df, how="left", on="species_id")
merged_left.head()
"""
Explanation: We see that both tables have a common identifier column (species_id), which we can use to join the two tables together with the merge command:
End of explanation
"""
flowdata = pd.read_csv("../data/vmm_flowdata.csv", index_col=0,
parse_dates=True)
flowdata.head()
"""
Explanation: Optional section: Pandas is great with time series
End of explanation
"""
flowdata.index.year, flowdata.index.dayofweek, flowdata.index.dayofyear #,...
"""
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li> `pd.read_csv` provides a lot of built-in functionality to support this kind of transaction when reading in a file! Check the **help** of the read_csv function...
</ul>
</div>
The index provides many attributes to work with:
End of explanation
"""
flowdata["2012-01-01 09:00":"2012-01-04 19:00"].plot()
"""
Explanation: Subselecting periods can be done by the string representation of dates:
End of explanation
"""
flowdata["2009"].plot()
"""
Explanation: or shorter when possible:
End of explanation
"""
flowdata.loc[(flowdata.index.days_in_month == 30) & (flowdata.index.year == 2009), "L06_347"].plot()
"""
Explanation: Combinations with other selection criteria is possible, e.g. to get all months with 30 days in the year 2009:
End of explanation
"""
flowdata[(flowdata.index.hour > 8) & (flowdata.index.hour < 20)].head()
# OR USE flowdata.between_time('08:00', '20:00')
"""
Explanation: Select all 'daytime' data (between 8h and 20h) for all days, station "L06_347":
End of explanation
"""
flowdata.resample('A').mean().plot()
"""
Explanation: A very powerful method is resample: converting the frequency of the time series (e.g. from hourly to daily data).
End of explanation
"""
daily = flowdata['LS06_348'].resample('D').mean() # calculate the daily average value
daily.resample('M').agg(['min', 'max']).plot() # calculate the monthly minimum and maximum values
"""
Explanation: A practical example is: Plot the monthly minimum and maximum of the daily average values of the LS06_348 column
End of explanation
"""
flowdata['2013'].mean().plot(kind='barh')
"""
Explanation: Other plots are supported as well, e.g. a bar plot of the mean of the stations in year 2013
End of explanation
"""
|
google/iree | samples/colab/tensorflow_hub_import.ipynb | apache-2.0 | #@title Licensed under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
"""
Explanation: Copyright 2021 The IREE Authors
End of explanation
"""
%%capture
!python -m pip install iree-compiler iree-runtime iree-tools-tf -f https://github.com/google/iree/releases
import os
import tensorflow as tf
import tensorflow_hub as hub
import tempfile
from IPython.display import clear_output
from iree.compiler import tf as tfc
# Print version information for future notebook users to reference.
print("TensorFlow version: ", tf.__version__)
ARTIFACTS_DIR = os.path.join(tempfile.gettempdir(), "iree", "colab_artifacts")
os.makedirs(ARTIFACTS_DIR, exist_ok=True)
print(f"Using artifacts directory '{ARTIFACTS_DIR}'")
"""
Explanation: IREE TensorFlow Hub Import
This notebook demonstrates how to download, import, and compile models from TensorFlow Hub. It covers:
Downloading a model from TensorFlow Hub
Ensuring the model has serving signatures needed for import
Importing and compiling the model with IREE
At the end of the notebook, the compilation artifacts are compressed into a .zip file for you to download and use in an application.
See also https://google.github.io/iree/ml-frameworks/tensorflow/.
Setup
End of explanation
"""
#@title Download the pretrained model
# Use the `hub` library to download the pretrained model to the local disk
# https://www.tensorflow.org/hub/api_docs/python/hub
HUB_PATH = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4"
model_path = hub.resolve(HUB_PATH)
print(f"Downloaded model from tfhub to path: '{model_path}'")
"""
Explanation: Import pretrained mobilenet_v2 model
IREE supports importing TensorFlow 2 models exported in the SavedModel format. This model we'll be importing is published in that format already, while other models may need to be converted first.
MobileNet V2 is a family of neural network architectures for efficient on-device image classification and related tasks. This TensorFlow Hub module contains a trained instance of one particular network architecture packaged to perform image classification.
End of explanation
"""
#@title Check for serving signatures
# Load the SavedModel from the local disk and check if it has serving signatures
# https://www.tensorflow.org/guide/saved_model#loading_and_using_a_custom_model
loaded_model = tf.saved_model.load(model_path)
serving_signatures = list(loaded_model.signatures.keys())
print(f"Loaded SavedModel from '{model_path}'")
print(f"Serving signatures: {serving_signatures}")
# Also check with the saved_model_cli:
print("\n---\n")
print("Checking for signature_defs using saved_model_cli:\n")
!saved_model_cli show --dir {model_path} --tag_set serve --signature_def serving_default
"""
Explanation: Check for serving signatures and re-export as needed
IREE's compiler tools, like TensorFlow's saved_model_cli and other tools, require "serving signatures" to be defined in SavedModels.
More references:
https://www.tensorflow.org/tfx/serving/signature_defs
https://blog.tensorflow.org/2021/03/a-tour-of-savedmodel-signatures.html
End of explanation
"""
#@title Look up input signatures to use when exporting
# To save serving signatures we need to specify a `ConcreteFunction` with a
# TensorSpec signature. We can determine what this signature should be by
# looking at any documentation for the model or running the saved_model_cli.
!saved_model_cli show --dir {model_path} --all \
2> /dev/null | grep "inputs: TensorSpec" | tail -n 1
#@title Re-export the model using the known signature
# Get a concrete function using the signature we found above.
#
# The first element of the shape is a dynamic batch size. We'll be running
# inference on a single image at a time, so set it to `1`. The rest of the
# shape is the fixed image dimensions [width=224, height=224, channels=3].
call = loaded_model.__call__.get_concrete_function(tf.TensorSpec([1, 224, 224, 3], tf.float32))
# Save the model, setting the concrete function as a serving signature.
# https://www.tensorflow.org/guide/saved_model#saving_a_custom_model
resaved_model_path = '/tmp/resaved_model'
tf.saved_model.save(loaded_model, resaved_model_path, signatures=call)
clear_output() # Skip over TensorFlow's output.
print(f"Saved model with serving signatures to '{resaved_model_path}'")
# Load the model back into memory and check that it has serving signatures now
reloaded_model = tf.saved_model.load(resaved_model_path)
reloaded_serving_signatures = list(reloaded_model.signatures.keys())
print(f"\nReloaded SavedModel from '{resaved_model_path}'")
print(f"Serving signatures: {reloaded_serving_signatures}")
# Also check with the saved_model_cli:
print("\n---\n")
print("Checking for signature_defs using saved_model_cli:\n")
!saved_model_cli show --dir {resaved_model_path} --tag_set serve --signature_def serving_default
"""
Explanation: Since the model we downloaded did not include any serving signatures, we'll re-export it with serving signatures defined.
https://www.tensorflow.org/guide/saved_model#specifying_signatures_during_export
End of explanation
"""
#@title Import from SavedModel
# The main output file from compilation is a .vmfb "VM FlatBuffer". This file
# can be used to run the compiled model with IREE's runtime.
output_file = os.path.join(ARTIFACTS_DIR, "mobilenet_v2.vmfb")
# As compilation runs, dump some intermediate .mlir files for future inspection.
tf_input = os.path.join(ARTIFACTS_DIR, "mobilenet_v2_tf_input.mlir")
iree_input = os.path.join(ARTIFACTS_DIR, "mobilenet_v2_iree_input.mlir")
# Since our SavedModel uses signature defs, we use `saved_model_tags` with
# `import_type="SIGNATURE_DEF"`. If the SavedModel used an object graph, we
# would use `exported_names` with `import_type="OBJECT_GRAPH"` instead.
# We'll set `target_backends=["vmvx"]` to use IREE's reference CPU backend.
# We could instead use different backends here, or set `import_only=True` then
# download the imported .mlir file for compilation using native tools directly.
tfc.compile_saved_model(
resaved_model_path,
output_file=output_file,
save_temp_tf_input=tf_input,
save_temp_iree_input=iree_input,
import_type="SIGNATURE_DEF",
saved_model_tags=set(["serve"]),
target_backends=["vmvx"])
clear_output() # Skip over TensorFlow's output.
print(f"Saved compiled output to '{output_file}'")
print(f"Saved tf_input to '{tf_input}'")
print(f"Saved iree_input to '{iree_input}'")
#@title Download compilation artifacts
ARTIFACTS_ZIP = "/tmp/mobilenet_colab_artifacts.zip"
print(f"Zipping '{ARTIFACTS_DIR}' to '{ARTIFACTS_ZIP}' for download...")
!cd {ARTIFACTS_DIR} && zip -r {ARTIFACTS_ZIP} .
# Note: you can also download files using the file explorer on the left
try:
from google.colab import files
print("Downloading the artifacts zip file...")
files.download(ARTIFACTS_ZIP)
except ImportError:
print("Missing google_colab Python package, can't download files")
"""
Explanation: Import and compile the SavedModel with IREE
End of explanation
"""
|
luofan18/deep-learning | intro-to-rnns/Anna_KaRNNa.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
"""
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
encoded[:100]
"""
Explanation: And we can see the characters encoded as integers.
End of explanation
"""
len(vocab)
"""
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
"""
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch = n_seqs * n_steps
n_batches = len(arr)//characters_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches * characters_per_batch]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
"""
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
End of explanation
"""
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
"""
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
"""
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
"""
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
End of explanation
"""
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
def build_cell(lstm_size, keep_prob):
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
"""
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```
Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
"""
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
    lstm_output: Output tensor from the LSTM layer
    in_size: Size of the input tensor, e.g. the size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
"""
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
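As a quick sanity check of the $N \times M \times L \rightarrow (N \cdot M) \times L$ reshape described above, here is the same operation on a small NumPy array (the toy sizes are made up for illustration):

```python
import numpy as np

N, M, L = 2, 3, 4  # batch size, steps, LSTM units (toy values)
lstm_output = np.arange(N * M * L).reshape(N, M, L)

# One row per (sequence, step) pair, mirroring
# tf.reshape(seq_output, [-1, in_size]) in build_output
flat = lstm_output.reshape(-1, L)
print(flat.shape)  # (6, 4)
```

Each row of `flat` is the LSTM output for one step of one sequence, in the order the steps occur.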
End of explanation
"""
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
"""
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
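The one-hot encoding step can be reproduced outside TensorFlow as well; indexing into an identity matrix with np.eye gives the same result as tf.one_hot for integer-encoded targets (the target values below are made up):

```python
import numpy as np

num_classes = 4
targets = np.array([[2, 0], [1, 3]])  # a batch of encoded characters

# Equivalent of tf.one_hot(targets, num_classes): each integer becomes
# a row with a single 1 at that index
y_one_hot = np.eye(num_classes)[targets]
print(y_one_hot.shape)  # (2, 2, 4)
```

Reshaping this to two dimensions then gives the $(MN) \times C$ tensor described above.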
End of explanation
"""
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
"""
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
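The effect of tf.clip_by_global_norm can be sketched in plain NumPy: all gradients are rescaled jointly so their combined L2 norm never exceeds the threshold (this is an illustration of the idea, not the TensorFlow implementation):

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # Joint rescaling: gradient directions are preserved, only the
    # overall magnitude is capped at clip_norm
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, clip_norm / global_norm)
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]  # global norm = 13
clipped, norm = clip_by_global_norm(grads, 5.0)
print(norm)  # 13.0
```

Because the scaling is shared, the relative sizes of the gradients are unchanged; only their total norm shrinks to the clip value.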
End of explanation
"""
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
        if sampling:
            batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
"""
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
End of explanation
"""
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
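To connect this advice to the model above, here is a rough back-of-the-envelope parameter count for stacked LSTM layers (a sketch: 4 gate matrices per layer, each acting on the concatenated input and hidden state, plus biases, and ignoring the softmax layer). The 83-character vocabulary used as an example is an assumption, not a value from this notebook:

```python
def approx_lstm_params(input_size, lstm_size, num_layers):
    total = 0
    in_dim = input_size
    for _ in range(num_layers):
        # 4 gates, each with an (in_dim + lstm_size) x lstm_size weight
        # matrix and a bias vector of length lstm_size
        total += 4 * ((in_dim + lstm_size) * lstm_size + lstm_size)
        in_dim = lstm_size
    return total

# e.g. an (assumed) 83-character vocabulary with the settings above
print(approx_lstm_params(83, 512, 2))  # 3319808
```

A few million parameters is the right order of magnitude to compare against your dataset size, as Karpathy suggests.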
End of explanation
"""
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
"""
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
End of explanation
"""
tf.train.get_checkpoint_state('checkpoints')
"""
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can then use that new character to predict the one after it, and keep going to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
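To see what pick_top_n does to a distribution, here is the same zero-and-renormalize step on a made-up prediction vector:

```python
import numpy as np

preds = np.array([[0.5, 0.3, 0.1, 0.06, 0.04]])
top_n = 2

p = np.squeeze(preds).copy()
p[np.argsort(p)[:-top_n]] = 0  # zero out everything but the top 2
p = p / np.sum(p)              # renormalize to a valid distribution
print(p)  # [0.625 0.375 0.    0.    0.   ]
```

Sampling from this truncated distribution keeps some randomness while ruling out the unlikely characters.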
End of explanation
"""
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
"""
|
gully/PyKE | docs/source/tutorials/ipython_notebooks/motion-correction/Replicate_Vanderburg_2014_K2SFF.ipynb | mit | from pyke import KeplerTargetPixelFile
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Replicate Vanderburg & Johnson 2014 K2SFF Method
In this notebook we will replicate the K2SFF method from Vanderburg and Johnson 2014. The paper introduces a method for "Self Flat Fielding" that tracks how the lightcurve changes with the motion of the spacecraft.
End of explanation
"""
#! wget https://www.cfa.harvard.edu/~avanderb/k2/ep60021426alldiagnostics.csv
import pandas as pd
df = pd.read_csv('ep60021426alldiagnostics.csv',index_col=False)
df.head()
"""
Explanation: Get data
End of explanation
"""
col = df[' X-centroid'].values
col = col - np.mean(col)
row = df[' Y-centroid'].values
row = row - np.mean(row)
def _get_eigen_vectors(centroid_col, centroid_row):
centroids = np.array([centroid_col, centroid_row])
eig_val, eig_vec = np.linalg.eigh(np.cov(centroids))
return eig_val, eig_vec
def _rotate(eig_vec, centroid_col, centroid_row):
centroids = np.array([centroid_col, centroid_row])
return np.dot(eig_vec, centroids)
eig_val, eig_vec = _get_eigen_vectors(col, row)
v1, v2 = eig_vec
"""
Explanation: Let's use the provided $x-y$ centroids, but we could compute these on our own too.
End of explanation
"""
plt.figure(figsize=(5, 6))
plt.plot(col*4.0, row*4.0, 'ko', ms=4)
plt.plot(col*4.0, row*4.0, 'ro', ms=1)
plt.xticks([-2, -1,0, 1, 2])
plt.yticks([-2, -1,0, 1, 2])
plt.xlabel('X position [arcseconds]')
plt.ylabel('Y position [arcseconds]')
plt.xlim(-2, 2)
plt.ylim(-2, 2)
plt.plot([0, v1[0]], [0, v1[1]], color='blue', lw=3)
plt.plot([0, v2[0]], [0, v2[1]], color='blue', lw=3);
"""
Explanation: The major axis is the last one.
End of explanation
"""
rot_colp, rot_rowp = _rotate(eig_vec, col, row)
"""
Explanation: Following the form of Figure 2 of Vanderburg & Johnson 2014.
End of explanation
"""
plt.figure(figsize=(5, 6))
plt.plot(rot_rowp*4.0, rot_colp*4.0, 'ko', ms=4)
plt.plot(rot_rowp*4.0, rot_colp*4.0, 'ro', ms=1)
plt.xticks([-2, -1,0, 1, 2])
plt.yticks([-2, -1,0, 1, 2])
plt.xlabel("X' position [arcseconds]")
plt.ylabel("Y' position [arcseconds]")
plt.xlim(-2, 2)
plt.ylim(-2, 2)
plt.plot([0, 1], [0, 0], color='blue')
plt.plot([0, 0], [0, 1], color='blue');
"""
Explanation: You can rotate into the new reference frame.
End of explanation
"""
z = np.polyfit(rot_rowp, rot_colp, 5)
p5 = np.poly1d(z)
p5_deriv = p5.deriv()
x0_prime = np.min(rot_rowp)
xmax_prime = np.max(rot_rowp)
x_dense = np.linspace(x0_prime, xmax_prime, 2000)
plt.plot(rot_rowp, rot_colp, '.')
plt.plot(x_dense, p5(x_dense));
@np.vectorize
def arclength(x):
    '''Given a transformed coordinate x1_prime, return the arclength from x0_prime.'''
    gi = x_dense < x
s_integrand = np.sqrt(1 + p5_deriv(x_dense[gi]) ** 2)
s = np.trapz(s_integrand, x=x_dense[gi])
return s
plt.plot(df[' arclength'], arclength(rot_rowp)*4.0, '.')
plt.plot([0, 4], [0, 4], 'k--');
"""
Explanation: We need to calculate the arclength using:
$$s= \int_{x'_0}^{x'_1}\sqrt{1+\left( \frac{dy'_p}{dx'}\right)^2} dx'$$
where $x^\prime_0$ is the transformed $x$ coordinate of the point with the smallest $x^\prime$ position, and $y^\prime_p$ is the best--fit polynomial function.
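Before applying this to the fitted polynomial, the integral can be sanity-checked on a case with a known answer: for the straight line $y'_p = x'$ the integrand is the constant $\sqrt{2}$, so the arclength from 0 to 1 should be $\sqrt{2} \approx 1.4142$. A minimal check using the same trapezoidal approach as below:

```python
import numpy as np

x_dense = np.linspace(0, 1, 2000)
dy_dx = np.ones_like(x_dense)                     # derivative of y' = x'
s = np.trapz(np.sqrt(1 + dy_dx ** 2), x=x_dense)  # arclength integral
print(s)  # ~1.4142
```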
End of explanation
"""
from scipy.interpolate import BSpline
from scipy import interpolate
tt, ff = df['BJD - 2454833'].values, df[' Raw Flux'].values
tt = tt - tt[0]
knots = np.arange(0, tt[-1], 1.5)
t,c,k = interpolate.splrep(tt, ff, s=0, task=-1, t=knots[1:])
bspl = BSpline(t,c,k)
plt.plot(tt, ff, '.')
plt.plot(tt, bspl(tt))
"""
Explanation: It works!
Now we apply a high-pass filter. We follow the original paper by using BSplines with 1.5 day breakpoints.
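The same splrep/BSpline pattern can be exercised on synthetic data to check that a smooth long-term trend is recovered almost exactly; the sinusoidal "trend" below is invented purely for the test:

```python
import numpy as np
from scipy import interpolate
from scipy.interpolate import BSpline

tt = np.linspace(0, 30, 300)                     # days
ff = 1.0 + 0.01 * np.sin(2 * np.pi * tt / 10.0)  # synthetic slow trend

knots = np.arange(0, tt[-1], 1.5)                # 1.5-day breakpoints
t, c, k = interpolate.splrep(tt, ff, s=0, task=-1, t=knots[1:])
bspl = BSpline(t, c, k)

resid = ff - bspl(tt)
print(np.max(np.abs(resid)))  # tiny: the spline tracks the trend
```

Any variability on timescales much shorter than the 1.5-day knot spacing survives in the residuals, which is exactly what we want for the flat-fielding step.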
End of explanation
"""
norm_ff = ff/bspl(tt)
"""
Explanation: Spline fit looks good, so normalize the flux by the long-term trend.
Plot the normalized flux versus arclength to see the position-dependent flux.
End of explanation
"""
bi = df[' Thrusters On'].values == 1.0
gi = df[' Thrusters On'].values == 0.0
al, gff = arclength(rot_rowp[gi])*4.0, norm_ff[gi]
sorted_inds = np.argsort(al)
"""
Explanation: Mask the data by keeping only the good samples.
End of explanation
"""
knots = np.array([np.min(al)]+
[np.median(splt) for splt in np.array_split(al[sorted_inds], 15)]+
[np.max(al)])
bin_means = np.array([gff[sorted_inds][0]]+
[np.mean(splt) for splt in np.array_split(gff[sorted_inds], 15)]+
[gff[sorted_inds][-1]])
zz = np.polyfit(al, gff,6)
sff = np.poly1d(zz)
al_dense = np.linspace(0, 4, 1000)
interp_func = interpolate.interp1d(knots, bin_means)
plt.figure(figsize=(5, 6))
plt.plot(arclength(rot_rowp)*4.0, norm_ff, 'ko', ms=4)
plt.plot(arclength(rot_rowp)*4.0, norm_ff, 'o', color='#3498db', ms=3)
plt.plot(arclength(rot_rowp[bi])*4.0, norm_ff[bi], 'o', color='r', ms=3)
#plt.plot(al_dense, sff(al_dense), '-', color='#e67e22')
#plt.plot(knots, bin_means, '-', color='#e67e22')
plt.plot(np.sort(al), interp_func(np.sort(al)), '-', color='#e67e22')
#plt.xticks([0, 1,2, 3, 4])
plt.xlabel('Arclength [arcseconds]')
plt.ylabel('Relative Brightness')
plt.title('EPIC 60021426, Kp =10.3')
#plt.xlim(0,4)
plt.ylim(0.997, 1.002);
"""
Explanation: We will follow the paper by interpolating 15 bins of means. This is a piecewise linear fit.
End of explanation
"""
corr_flux = gff / interp_func(al)
plt.figure(figsize=(10,6))
dy = 0.004
plt.plot(df['BJD - 2454833'], df[' Raw Flux']+dy, 'ko', ms=4)
plt.plot(df['BJD - 2454833'], df[' Raw Flux']+dy, 'o', color='#3498db', ms=3)
plt.plot(df['BJD - 2454833'][bi], df[' Raw Flux'][bi]+dy, 'o', color='r', ms=3)
plt.plot(df['BJD - 2454833'][gi], corr_flux*bspl(tt[gi]), 'o', color='k', ms = 4)
plt.plot(df['BJD - 2454833'][gi], corr_flux*bspl(tt[gi]), 'o', color='#e67e22', ms = 3)
#plt.plot(df['BJD - 2454833'][gi], df[' Corrected Flux'][gi], 'o', color='#00ff00', ms = 4)
plt.xlabel('BJD - 2454833')
plt.ylabel('Relative Brightness')
plt.xlim(1862, 1870)
plt.ylim(0.994, 1.008);
"""
Explanation: Following Figure 4 of Vanderburg & Johnson 2014.
Apply the Self Flat Field (SFF) correction:
End of explanation
"""
from pyke import LightCurve
#lc = LightCurve(time=df['BJD - 2454833'][gi], flux=corr_flux*bspl(tt[gi]))
lc = LightCurve(time=df['BJD - 2454833'][gi], flux=df[' Corrected Flux'][gi])
lc.cdpp(savgol_window=201)
"""
Explanation: Following Figure 5 of Vanderburg & Johnson 2014.
Let's compute the CDPP:
End of explanation
"""
from pyke.lightcurve import SFFCorrector
sff = SFFCorrector()
lc_corrected = sff.correct(df['BJD - 2454833'][gi].values,
df[' Raw Flux'][gi].values,
col[gi], row[gi], niters=1, windows=1, polyorder=5)
plt.figure(figsize=(10,6))
dy = 0.004
plt.plot(df['BJD - 2454833'], df[' Raw Flux']+dy, 'ko', ms=4)
plt.plot(df['BJD - 2454833'], df[' Raw Flux']+dy, 'o', color='#3498db', ms=3)
plt.plot(df['BJD - 2454833'][bi], df[' Raw Flux'][bi]+dy, 'o', color='r', ms=3)
plt.plot(df['BJD - 2454833'][gi], lc_corrected.flux*bspl(tt[gi]), 'o', color='k', ms = 4)
plt.plot(df['BJD - 2454833'][gi], lc_corrected.flux*bspl(tt[gi]), 'o', color='pink', ms = 3)
plt.xlabel('BJD - 2454833')
plt.ylabel('Relative Brightness')
plt.xlim(1862, 1870)
plt.ylim(0.994, 1.008);
sff._plot_normflux_arclength()
sff._plot_rotated_centroids()
"""
Explanation: The end.
Using PyKE:
End of explanation
"""
|
adelle207/pyladies.cz | original/v1/s011-dicts/simple-api.ipynb | mit | import requests
# Download the page
stranka = requests.get('https://python.cz/')
# Verify that everything succeeded
stranka.raise_for_status()
# Print the page contents
print(stranka.text)
"""
Explanation: Working with an API
Installing the requests library
Activate your virtual environment and run the following command in it
(venv) > python -m pip install requests
Application interfaces made for the browser
The output of such applications is a mix of code from several technologies (HTML, CSS, JS) that the browser can turn into a graphical form and display to the user in a friendly way. For processing in a program, however, this output is unsuitable, because it contains a lot of information that has no value for us, such as font colors, indentation, and so on.
End of explanation
"""
data = requests.get('http://pyladies-json.herokuapp.com/prezidenti/všichni')
data.raise_for_status()
print(data.text)
"""
Explanation: An API for testing
The testing API was created by Glutexo and contains information that is interesting for us, not only about Czech presidents. The Ruby source code, together with a complete description of the API, is available on GitHub, and the API itself lives at http://pyladies-json.herokuapp.com/.
End of explanation
"""
import json
# Convert from the textual JSON format to a Python object
json_data = json.loads(data.text)
# Print the data
print(json_data)
"""
Explanation: Conversion to a Python object
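The same conversion works on any JSON string, so you can experiment without calling the API. The record below is invented for illustration (note that the real API uses Czech keys such as 'jméno'):

```python
import json

text = '[{"jméno": "Tomáš Garrigue Masaryk", "od": "1918-11-14"}]'
records = json.loads(text)

print(records[0]['jméno'])  # Tomáš Garrigue Masaryk
print(type(records))        # <class 'list'>
```

json.loads turns JSON arrays into Python lists and JSON objects into dictionaries, which is why the code below can iterate and index into the result.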
End of explanation
"""
data = requests.get('http://pyladies-json.herokuapp.com/prezidenti/1945-05-01')
data.raise_for_status()
prezidenti = json.loads(data.text)
for prezident in prezidenti:
    print('President {} lived from {} to {}'.format(prezident['jméno'], prezident['život']['od'], prezident['život']['do']))
    print('Terms of office of president: {}'.format(prezident['jméno']))
    for obdobi in prezident['úřad']:
        print('From {} to {}'.format(obdobi['od'], obdobi['do']))
"""
Explanation: Použití výstupu pro zpracování dat v programu
End of explanation
"""
|
aymeric-spiga/mcd-python | tutorial/mcd-python_tutorial.ipynb | gpl-2.0 | from mcd import mcd
# This line configures matplotlib to show figures embedded in the notebook.
%matplotlib inline
"""
Explanation: python package for the Mars Climate Database: how to use?
NB: to learn how to wrap the Fortran routine call_mcd (compiled with f2py) directly within Python, look at the folder test_mcd
General steps to follow to use the mcd class
Step 1 import mcd class from mcd package
End of explanation
"""
req = mcd()
"""
Explanation: Step 2 create your request
End of explanation
"""
req.lat = -4.6 # latitude
req.lon = 137.4 # longitude
req.loct = 15. # local time
req.xz = 1. # vertical coordinate
req.xdate = 150.6 # areocentric longitude
"""
Explanation: Step 3 set the coordinates for your request (for instance, let us choose Curiosity landing site)
End of explanation
"""
req.update()
"""
Explanation: Step 4 retrieve fields from the Mars Climate Database (all fields are stored in the req object)
End of explanation
"""
req.printcoord()
"""
Explanation: Step 5 print requested results
Requested coordinates (for a reminder)
End of explanation
"""
req.printmeanvar()
"""
Explanation: Main atmospheric variables
End of explanation
"""
req.printmcd()
"""
Explanation: Shortcut: req.printmcd() is equivalent to the three previous commands in a row (update+printcoord+printmeanvar)
End of explanation
"""
req.printextvar(22)
"""
Explanation: The extvar number can also be used to inquire a specific variable (see Fortran sources).
End of explanation
"""
req.printextvar("tsurf")
"""
Explanation: Another way to inquire for a specific variable is through a string.
End of explanation
"""
req.printallextvar()
"""
Explanation: Print all field
End of explanation
"""
req.diurnal()
req.plot1d("t")
"""
Explanation: 1D slices
Request 1D plot of diurnal cycle for one variable ...
End of explanation
"""
req.plot1d(["t","p","u","v"])
"""
Explanation: ... and for several variables
End of explanation
"""
req.xzs = -3500.
req.xze = 15000.
req.zkey = 2
req.lat = 25.
req.lon = 195.
req.loct = 4.2
req.xdate = 140.
req.profile(nd=50)
req.plot1d("t")
"""
Explanation: Request seasonal cycle (this takes a longer time)
~~~python
req.seasonal()
req.plot1d(["tsurf","u","v"])
~~~
1D slicing also works for vertical profiles. Start and end of profile can be set easily
--- as well as the kind of vertical coordinate through the zkey variable (as in MCD Fortran routines)
End of explanation
"""
tpot = req.temptab*((610./req.prestab)**(1.0/3.9))
print tpot
"""
Explanation: This is a good place to remind you that any field retrieved from the Mars Climate Database is stored inside the req object.
This allows you to work out further calculations, e.g. to combine several variables to obtain new diagnostics.
Here is for instance a calculation for potential temperature
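The potential-temperature expression above uses only array arithmetic, so it can be checked on plain NumPy arrays; at the reference pressure of 610 Pa the potential temperature must equal the temperature itself. The sample pressure/temperature values are made up:

```python
import numpy as np

prestab = np.array([610., 300., 100.])  # Pa (illustrative values)
temptab = np.array([210., 200., 180.])  # K (illustrative values)
tpot = temptab * (610. / prestab) ** (1.0 / 3.9)

print(tpot[0])  # 210.0 at the reference level
```

Above the reference level (lower pressure) the potential temperature exceeds the actual temperature, as expected.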
End of explanation
"""
req = mcd()
req.diurnal()
req.getascii("t",filename="diurnal.txt")
%cat diurnal.txt ; rm -rf diurnal.txt
"""
Explanation: 1D slices: ASCII outputs
It is easy to get ASCII files containing 1D slices. Say, for instance, diurnal cycle of temperature.
End of explanation
"""
test = mcd()
test.loct = 15.
test.xz = 10000.
test.map2d("t")
"""
Explanation: 2D mapping
Simple 2D longitude-latitude map with default cylindrical view. Map projections can be used, provided basemap is installed -- for instance, for Robinson projection add proj="robin" to the map2d call below.
End of explanation
"""
test.htmlmap2d("t")
"""
Explanation: You can also use the method htmlmap2d that will create a PNG file with your figure in it. This is the function actually used in the online MCD interface.
End of explanation
"""
test.map2d("t",incwind=True)
"""
Explanation: Adding wind vectors can be done with the incwind argument.
End of explanation
"""
test.map2d(["t","u"])
"""
Explanation: NB: map2d works with several variables
End of explanation
"""
figname = test.getnameset()+'.png'
print figname
"""
Explanation: NB: to save figures
~~~python
import matplotlib.pyplot as mpl
mpl.savefig("temp.png",dpi=85,bbox_inches='tight',pad_inches=0.25)
~~~
To obtain a name corresponding to the request
End of explanation
"""
test = mcd()
test.xz = 50000.
test.lat = 20.
test.locts = 0.
test.locte = 24.
test.lons = -180.
test.lone = +180.
test.htmlplot2d("tsurf",figname="hov.png")
"""
Explanation: Advanced diagnostics
Hovmöller plot
End of explanation
"""
test = mcd()
test.zonmean = True
test.lats = -90.
test.late = 90.
test.htmlplot2d("u",figname="zonm.png")
"""
Explanation: Zonal average : lat/altitude figure
End of explanation
"""
test = mcd()
test.zonmean = True
test.lats = -90.
test.late = 90.
test.xdates = 0.
test.xdatee = 360.
test.htmlplot2d("h2ovap",figname="zonmm.png")
"""
Explanation: Zonal average : ls/lat figure
End of explanation
"""
test.zonmean = False
test.lon = 0.
test.htmlplot2d("h2ovap",figname="hovls.png")
"""
Explanation: See the difference with the following case
End of explanation
"""
|
SciTools/cube_browser | doc/browsing_cubes/four_axes.ipynb | bsd-3-clause | import iris
import iris.plot as iplt
import matplotlib.pyplot as plt
from cube_browser import Contour, Browser, Contourf, Pcolormesh
"""
Explanation: Four Axes
This notebook demonstrates the use of Cube Browser to produce multiple plots using only the code (i.e. no selection widgets).
End of explanation
"""
air_potential_temperature = iris.load_cube(iris.sample_data_path('colpex.pp'), 'air_potential_temperature')
print air_potential_temperature
"""
Explanation: Load and prepare your cubes
End of explanation
"""
projection = iplt.default_projection(air_potential_temperature)
ax1 = plt.subplot(221, projection=projection)
ax2 = plt.subplot(222, projection=projection)
ax3 = plt.subplot(223, projection=projection)
ax4 = plt.subplot(224, projection=projection)
cf1 = Pcolormesh(air_potential_temperature[0, 0], ax1)
cf2 = Contour(air_potential_temperature[:, 0], ax2)
cf3 = Contour(air_potential_temperature[0], ax3)
cf4 = Pcolormesh(air_potential_temperature, ax4)
Browser([cf1, cf2, cf3, cf4]).display()
"""
Explanation: Set up your map projection and your axes, and then make your plots using your preferred plot-types and layout.
Finally, display your plots with their associated sliders.
End of explanation
"""
|
kingmolnar/MSA8150 | Lecture Notes/03-InformationBased/03-InformationBased.ipynb | cc0-1.0 | ### define function information gain IG(d_feature, D_set)
def IG(d_feature, D_set):
    # compute the information gain from splitting D_set on d_feature
return 0.0
def splitDataSet(d_feature, D_set):
Left_set = []
Right_set = []
return (Left_set, Right_set)
def myID3(list_of_features, D_set):
    if len(list_of_features) == 0:
        return  # we're done: no features left to split on
    elif len(D_set) <= 1:
        return  # we're done: the partition is (at most) a single instance
    else:
        # which feature has the greatest information gain on D_set?
        fmax = max(list_of_features, key=lambda f: IG(f, D_set))
        (L_set, R_set) = splitDataSet(fmax, D_set)
        # list.remove() mutates in place and returns None, so build a new list
        remfeatures = [f for f in list_of_features if f != fmax]
        myID3(remfeatures, L_set)
        myID3(remfeatures, R_set)
"""
Explanation: Information-based Learning
Let's review the basic idea of decision trees.
Shannon's model of entropy is a weighted sum of the logs of the probabilities of each of the possible outcomes when we make a random selection from a set.
$$
H(t) = - \sum_{i=1}^l \left( P(t=i)\times log_s(P(t=i)) \right)
$$
The information gain of a descriptive feature can be understood as a measure of the reduction in the overall entropy of a prediction task by testing on that feature.
Computing information gain involves the following 3 equations:
$$
H\left(t, \mathcal{D}\right) = - \sum_{l \in levels(t)} \left( P(t=l) \times log_2(P(t=l)) \right)
$$
$$
rem\left(d,\mathcal{D}\right) =
\sum_{l \in levels\left(d\right)}
\underbrace{\frac{|\mathcal{D}_{d=l}|}{|\mathcal{D}|}}_{\text{weighting}} \times
\underbrace{H\left(t, \mathcal{D}_{d=l}\right)}_{\substack{\text{entropy of}\\ \text{partition }\mathcal{D}_{d=l}}}
$$
$$
IG\left(d,\mathcal{D}\right) = H\left(t,\mathcal{D}\right) - rem\left(d, \mathcal{D}\right)
$$
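These three equations translate almost line-for-line into Python. The sketch below uses hypothetical helper names (`entropy`, `info_gain`) and plain lists, just to sanity-check the math:

```python
import math
from collections import Counter

def entropy(labels):
    # H(t, D): Shannon entropy of the class labels, in bits
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature_values, labels):
    # IG(d, D) = H(t, D) - rem(d, D) for one descriptive feature
    n = len(labels)
    rem = 0.0
    for level in set(feature_values):
        part = [t for f, t in zip(feature_values, labels) if f == level]
        rem += (len(part) / n) * entropy(part)
    return entropy(labels) - rem
```

A feature that splits the data into pure partitions receives the full entropy of the target as its gain; an uninformative feature receives zero.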
Implementing our own ID3
Let's implement a simplistic ID3 algorithm. We assume that each feature can only have two values. Hence we have two branches per node: the left branch for the lower value or false, the right branch for the higher value or true.
There is no consideration of efficiency or even avoiding redundant computations. The algorithm also won't return the actual decision tree. We're just going to use print statements to indicate what the algorithm does.
However, this could be built out to something more useful...
End of explanation
"""
from sklearn import tree
X = [[0, 0], [1, 1]]
Y = [0, 1]
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, Y)
"""
Explanation: Alternative Metrics
Entropy-based information gain prefers features with many values.
One way of addressing this issue is to use the information gain ratio, which is computed by dividing the information gain of a feature by the amount of information used to determine the value of the feature:
$$
GR\left(d,\mathcal{D}\right) = \frac{IG\left(d,\mathcal{D}\right)}{- \displaystyle \sum_{l \in levels\left(d\right)} \left(P(d=l) \times log_2(P(d=l)) \right)}
$$
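The denominator is simply the entropy of the feature's own value distribution, so the ratio is easy to compute once the gain is known. A sketch with hypothetical helper names:

```python
import math
from collections import Counter

def entropy(values):
    # entropy of any discrete value distribution, in bits
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def gain_ratio(feature_values, ig):
    # GR(d, D): divide a precomputed information gain by the feature's entropy
    denom = entropy(feature_values)
    return ig / denom if denom > 0 else 0.0
```

A binary feature with a gain of 1 bit keeps a ratio of 1, while a four-valued feature with the same gain is penalized down to 0.5.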
Another commonly used measure of impurity is the Gini index:
$$
Gini\left(t,\mathcal{D}\right) = 1 - \sum_{l \in levels(t)} P(t=l)^2
$$
The Gini index can be thought of as calculating how often you would misclassify an instance in the dataset if you classified it based on the distribution of classifications in the dataset.
Information gain can be calculated using the Gini index by replacing the entropy measure with the Gini index.
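A sketch of both ideas in Python (hypothetical helper names; `gini_gain` just swaps the Gini index in where entropy would otherwise be used):

```python
from collections import Counter

def gini(labels):
    # Gini(t, D) = 1 - sum of squared class probabilities
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_gain(feature_values, labels):
    # information gain with the Gini index as the impurity measure
    n = len(labels)
    rem = 0.0
    for level in set(feature_values):
        part = [t for f, t in zip(feature_values, labels) if f == level]
        rem += (len(part) / n) * gini(part)
    return gini(labels) - rem
```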
Predicting Continuous Targets
Regression trees are constructed so as to reduce the variance in the set of training examples at each of the leaf nodes in the tree.
We can do this by adapting the ID3 algorithm to use a measure of variance rather than a measure of classification impurity (entropy) when selecting the best attribute.
The impurity (variance) at a node can be calculated using the following equation:
$$
var\left(t, \mathcal{D}\right) = \frac{\sum_{i=1}^n \left( t_i - \bar{t}\right)^2}{n-1}
$$
We select the feature to split on at a node by selecting the feature that minimizes the weighted variance across the resulting partitions:
$$
\mathbf{d}[best] = \textrm{argmin}_{d \in \mathbf{d}} \sum_{l \in levels(d)} \frac{|\mathcal{D}_{d=l}|}{|\mathcal{D}|} \times var(t, \mathcal{D}_{d=l})
$$
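The same selection rule in sketch form (hypothetical helper names; `features` maps each candidate feature name to its list of values, aligned with the target list):

```python
def variance(targets):
    # sample variance var(t, D) with the n-1 denominator
    n = len(targets)
    if n < 2:
        return 0.0
    mean = sum(targets) / n
    return sum((t - mean) ** 2 for t in targets) / (n - 1)

def best_split_feature(features, targets):
    # pick the feature minimizing the weighted variance of its partitions
    n = len(targets)
    def weighted_var(values):
        total = 0.0
        for level in set(values):
            part = [t for v, t in zip(values, targets) if v == level]
            total += (len(part) / n) * variance(part)
        return total
    return min(features, key=lambda name: weighted_var(features[name]))
```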
<img src="./fig415.png" />
Decision Trees in SciKit-learn
The scikit-learn package provides an implementation of decision tree classifiers. The following example demonstrates the use of the library on the very simplistic Iris data set.
This is an excerpt from the online documentation:
As with other classifiers, DecisionTreeClassifier takes as input two arrays: an array X, sparse or dense, of size [n_samples, n_features] holding the training samples, and an array Y of integer values, size [n_samples], holding the class labels for the training samples.
Note:
In this environment we represent each sample as a row, and features are columns. This is not always the case!
End of explanation
"""
clf.predict([[1., 0.], [0, 0], [3, 0]])
"""
Explanation: After being fitted, the model can then be used to predict the class of samples:
End of explanation
"""
from sklearn.datasets import load_iris
from sklearn import tree
iris = load_iris()
clf = tree.DecisionTreeClassifier()
clf = clf.fit(iris.data, iris.target)
iris.data
iris.target
"""
Explanation: DecisionTreeClassifier is capable of both binary (where the labels are [-1, 1]) classification and multiclass (where the labels are [0, ..., K-1]) classification.
Using the Iris dataset, we can construct a tree as follows:
End of explanation
"""
from sklearn.externals.six import StringIO
import pydot
from IPython.display import Image
dot_data = StringIO()
tree.export_graphviz(clf, out_file=dot_data,
feature_names=iris.feature_names)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
graph.write_pdf("iris.pdf")
Image(graph.create_png())
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
# Parameters
n_classes = 3
plot_colors = "bry"
plot_step = 0.02
# Load data
iris = load_iris()
for pairidx, pair in enumerate([[0, 1], [0, 2], [0, 3],
[1, 2], [1, 3], [2, 3]]):
# We only take the two corresponding features
X = iris.data[:, pair]
y = iris.target
# Shuffle
idx = np.arange(X.shape[0])
np.random.seed(13)
np.random.shuffle(idx)
X = X[idx]
y = y[idx]
# Standardize
mean = X.mean(axis=0)
std = X.std(axis=0)
X = (X - mean) / std
# Train
clf = DecisionTreeClassifier().fit(X, y)
# Plot the decision boundary
plt.subplot(2, 3, pairidx + 1)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired)
plt.xlabel(iris.feature_names[pair[0]])
plt.ylabel(iris.feature_names[pair[1]])
plt.axis("tight")
# Plot the training points
for i, color in zip(range(n_classes), plot_colors):
idx = np.where(y == i)
plt.scatter(X[idx, 0], X[idx, 1], c=color, label=iris.target_names[i],
cmap=plt.cm.Paired)
plt.axis("tight")
plt.suptitle("Decision surface of a decision tree using paired features")
plt.legend()
plt.show()
"""
Explanation: Once trained, we can export the tree in Graphviz format using the export_graphviz exporter. Below is an example export of a tree trained on the entire iris dataset.
In order to compute inline images the package pydot is required.
End of explanation
"""
# Import the necessary modules and libraries
import numpy as np
from sklearn.tree import DecisionTreeRegressor
import matplotlib.pyplot as plt
%matplotlib inline
# Create a random dataset
rng = np.random.RandomState(1)
X = np.sort(5 * rng.rand(80, 1), axis=0)
y = np.sin(X).ravel()
y[::5] += 3 * (0.5 - rng.rand(16))
# Fit regression model
regr_1 = DecisionTreeRegressor(max_depth=2)
regr_2 = DecisionTreeRegressor(max_depth=5)
regr_1.fit(X, y)
regr_2.fit(X, y)
# Predict
X_test = np.arange(0.0, 5.0, 0.01)[:, np.newaxis]
y_1 = regr_1.predict(X_test)
y_2 = regr_2.predict(X_test)
# Plot the results
plt.figure()
plt.scatter(X, y, c="k", label="data")
plt.plot(X_test, y_1, c="g", label="max_depth=2", linewidth=2)
plt.plot(X_test, y_2, c="r", label="max_depth=5", linewidth=2)
plt.xlabel("data")
plt.ylabel("target")
plt.title("Decision Tree Regression")
plt.legend()
plt.show()
## Another Canned Example
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor
# Create a random dataset
rng = np.random.RandomState(1)
X = np.sort(200 * rng.rand(100, 1) - 100, axis=0)
y = np.array([np.pi * np.sin(X).ravel(), np.pi * np.cos(X).ravel()]).T
y[::5, :] += (0.5 - rng.rand(20, 2))
# Fit regression model
regr_1 = DecisionTreeRegressor(max_depth=2)
regr_2 = DecisionTreeRegressor(max_depth=5)
regr_3 = DecisionTreeRegressor(max_depth=8)
regr_1.fit(X, y)
regr_2.fit(X, y)
regr_3.fit(X, y)
# Predict
X_test = np.arange(-100.0, 100.0, 0.01)[:, np.newaxis]
y_1 = regr_1.predict(X_test)
y_2 = regr_2.predict(X_test)
y_3 = regr_3.predict(X_test)
# Plot the results
plt.figure()
plt.scatter(y[:, 0], y[:, 1], c="k", label="data")
plt.scatter(y_1[:, 0], y_1[:, 1], c="g", label="max_depth=2")
plt.scatter(y_2[:, 0], y_2[:, 1], c="r", label="max_depth=5")
plt.scatter(y_3[:, 0], y_3[:, 1], c="b", label="max_depth=8")
plt.xlim([-6, 6])
plt.ylim([-6, 6])
plt.xlabel("data")
plt.ylabel("target")
plt.title("Multi-output Decision Tree Regression")
plt.legend()
plt.show()
"""
Explanation: Regression with Decision Trees in SciKit-learn
A 1D regression with a decision tree.
A decision tree is used to fit a sine curve with additional noisy observations. As a result, it learns local linear regressions approximating the sine curve.
We can see that if the maximum depth of the tree (controlled by the max_depth parameter) is set too high, the decision tree learns overly fine details of the training data and fits the noise, i.e. it overfits.
End of explanation
"""
dataset = []
for l in open("car.data").readlines():
dataset.append(l[0:-1].split(','))
dataset[0:9]
"""
Explanation: Data Files
Here are a couple of data files from the UCI Machine Learning Repository:
Mushroom is a set with over 8,000 samples of categorical attributes of mushrooms. The classifier may determine whether a mushroom is edible or not.<br />
https://archive.ics.uci.edu/ml/datasets/Mushroom<br />
https://archive.ics.uci.edu/ml/machine-learning-databases/mushroom/
Car Evaluation is a set with about 1,700 samples evaluating cars as "unacceptable", "acceptable", "good", and "very good".<br />
https://archive.ics.uci.edu/ml/datasets/Car+Evaluation<br />
https://archive.ics.uci.edu/ml/machine-learning-databases/car/
End of explanation
"""
Nfeature = len(dataset[0])
Nsample = len(dataset)
print("%s x %s" % (Nsample, Nfeature))
## http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html
## use LabelEncoder instead of the home grown...
levdict = [{} for c in range(0, Nfeature)]
for c in range(0, Nfeature):
vec = []
for r in range(0, Nsample):
vec.append(dataset[r][c])
levels = list(set(vec))
for k in range(0, len(levels)):
levdict[c][levels[k]] = k
levdict
numdata = [ [ levdict[c][dataset[r][c]] for c in range(Nfeature)] for r in range(Nsample) ]
numdata[0:9]
X = [] # attributes
Y = [] # target
for sample in numdata:
    X.append(sample[0:6])
    Y.append(sample[6])
[ X[0:9], Y[0:9] ]
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, Y)
clf
car_feature_names = ["buying", "maint", "doors", "person", "lug_boot", "safety"]
dot_data = StringIO()
tree.export_graphviz(clf, out_file=dot_data,
feature_names=car_feature_names)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
graph.write_pdf("cars.pdf")
###Image(graph.create_png())
"""
Explanation: Unfortunately, the scikit-learn algorithm needs numerical data. We need to convert the categorical labels into numbers.
First, we determine the levels for each feature. Then we create a dictionary to convert the labels into numbers.
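The comment in the code above points to scikit-learn's LabelEncoder as the cleaner alternative; the same idea for a single column in plain Python looks like this (hypothetical helper name):

```python
def encode_column(values):
    # map each distinct label to an integer code, like sklearn's LabelEncoder
    levels = sorted(set(values))
    mapping = {label: code for code, label in enumerate(levels)}
    return [mapping[v] for v in values], mapping
```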
End of explanation
"""
Ncategory = len(levdict[6])
CM = [ [0 for c in range(Ncategory)] for r in range(Ncategory) ]
CM
Yhat = clf.predict(X)
CM = [ [0 for c in range(Ncategory)] for r in range(Ncategory) ]
for k in range(Nsample):
CM[Yhat[k]][Y[k]] +=1
CM
for c in CM:
print(c)
"""
Explanation: Verify what we have
Let's find out how well the classifier predicts its own training data. We use the method predict.
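The diagonal of the confusion matrix holds the correct predictions, so the overall training accuracy is just the trace divided by the sample count. A small helper (hypothetical name) for any square matrix of counts:

```python
def accuracy_from_cm(cm):
    # rows are predicted classes, columns are actual classes
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    return correct / float(total)
```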
End of explanation
"""
|
tmolteno/TART | doc/calibration/positions/site_survey.ipynb | lgpl-3.0 | import numpy as np
from scipy.optimize import minimize
x0 = [0,0]
x1 = [0, 2209]
"""
Explanation: Antenna Position Measurement
Author: Tim Molteno. tim@elec.ac.nz.
The antennas are laid out on tiles, and these tiles are placed on site. Once this is done, a survey is needed to refine the positions of each antenna in the array.
Three reference posts are placed. The first is at the centre of the array, the second approximately 2.5 meters due north. The first and second post defines the $y$ axis. The third post is placed approximately 2.5 meters east of the centre, and reasonably close to at right angles to the $y$ axis.
The first reference point, x0, has coordinates (0,0). The second reference point, x1, has coordinates (0, y) and the third is not known, but must be established by measurement.
All measurements are made from the height of the antennas on the reference points.
End of explanation
"""
d_0_2 = 2047
d_1_2 = 3020
"""
Explanation: Locating the third reference point
The distances from reference point 2 to the other two reference points (0,1) are measured (in mm)
End of explanation
"""
def dist(a,b):
return np.sqrt((a[0]-b[0])**2 + (a[1]-b[1])**2)
def f(x):
return (dist(x0, x) - d_0_2)**2 + (dist(x1, x) - d_1_2)**2
initial_guess = [2047, 0]
res = minimize(f, initial_guess)
x2 = res.x
reference_points = [x1, x0, x2]
reference_points
"""
Explanation: Now a least squares estimator is used to work out the x-y coordinates of the third reference point (x2)
End of explanation
"""
n_ant = 24
m = np.zeros((24,3))
"""
Explanation: Finding the antennas
This is done by measuring the distance from each antenna to the three reference points x0, x1 and x2.
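With exact measurements this is classic two-dimensional trilateration, which even has a closed form: subtracting the circle equations pairwise leaves a small linear system. The sketch below (a hypothetical helper, not part of the survey code) illustrates the idea; the notebook instead uses least squares because real tape measurements carry errors:

```python
import numpy as np

def trilaterate(p0, p1, p2, d0, d1, d2):
    # solve |x - pi| = di for x by subtracting the circle equations pairwise
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    A = 2.0 * np.array([p1 - p0, p2 - p0])
    b = np.array([
        d0**2 - d1**2 + p1.dot(p1) - p0.dot(p0),
        d0**2 - d2**2 + p2.dot(p2) - p0.dot(p0),
    ])
    return np.linalg.solve(A, b)
```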
End of explanation
"""
m[0,:] = [1563, 855, 2618]
m[1,:] = [1407, 825, 2355]
m[2,:] = [1750, 765, 2644]
m[3,:] = [839, 1373, 2416]
m[4,:] = [1151, 1422, 2986]
m[5,:] = [842, 1410, 2662]
m[6,:] = [2527, 1119, 929]
m[7,:] = [2274, 1200, 915]
m[8,:] = [2715, 1261, 824]
m[9,:] = [1684, 1064, 1457]
m[10,:] = [2238, 546, 1501]
m[11,:] = [1834, 805, 1493]
m[12,:] = [3320, 1111, 2370]
m[13,:] = [3385, 1192, 2131]
m[14,:] = [3446, 1247, 2555]
m[15,:] = [3063, 1048, 1531]
m[16,:] = [2760, 550, 2096]
m[17,:] = [2873, 784, 1689]
m[18,:] = [2342, 934, 2979]
m[19,:] = [2638, 1142, 3179]
m[20,:] = [2186, 993, 3020]
m[21,:] = [3130, 1260, 3140]
m[22,:] = [2545, 565, 2544]
m[23,:] = [2942, 1000, 2891]
"""
Explanation: The following are the measured distances, in millimeters, from each antenna to the reference points. Note that their order must be the same as the order of the variable called 'reference_points'. In this case, they are x1, x0, x2.
End of explanation
"""
import requests
import json
pos_url = "https://tart.elec.ac.nz/signal/api/v1/imaging/antenna_positions"
def get_data(path):
server = "https://tart.elec.ac.nz/signal"
r = requests.get('{}/{}'.format(server, path))
return json.loads(r.text)
def get_pos():
return np.array(get_data('api/v1/imaging/antenna_positions'))
current_pos = get_pos()
current_pos
initial_guess = np.zeros(2*n_ant)
for i in range(n_ant):
initial_guess[2*i:2*i+2] = current_pos[i][0:2]*1000
#print(current_pos[i][0:2]*1000)
initial_guess
pos_i = current_pos*1000
import matplotlib.pyplot as plt
plt.scatter(pos_i[:,0], pos_i[:,1])
plt.xlim(-2000,2000)
plt.ylim(-2000,2000)
plt.show()
"""
Explanation: Plot the Initial Guess Points
Initial Guesses are from JSON queried from the telescope API. These are converted to millimeters.
End of explanation
"""
def f(x):
ret = 0
for i in range(n_ant):
for j in range(3):
p = [x[2*i],x[2*i+1]]
ret += (dist(reference_points[j], p) - m[i,j])**2
return ret
print(f(initial_guess))
res = minimize(f, initial_guess)
res
"""
Explanation: Criteria for Optimality
The function below is minimized when the positions (in variable x) are consistent with the measured distances m[i,j]. The initial value of this function is more than 3 million.
Note that the x input is a 1D vector with 48 entries, laid out as [p0.x, p0.y, p1.x, p1.y, ...]
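For reference, the same objective can be written without the double loop using NumPy broadcasting. A sketch with hypothetical argument names (refs is the 3x2 array of reference-point coordinates, dists the n x 3 array of measured distances):

```python
import numpy as np

def sum_sq_residuals(flat_xy, refs, dists):
    # total squared mismatch between modelled and measured distances
    pts = np.asarray(flat_xy, dtype=float).reshape(-1, 2)
    refs = np.asarray(refs, dtype=float)
    model = np.linalg.norm(pts[:, None, :] - refs[None, :, :], axis=2)
    return float(np.sum((model - np.asarray(dists, dtype=float)) ** 2))
```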
End of explanation
"""
pos = res.x.reshape((24,2))
pos
plt.scatter(pos[:,0], pos[:,1], color='red')
plt.scatter(pos_i[:,0], pos_i[:,1], color='blue')
plt.xlim(-2000,2000)
plt.ylim(-2000,2000)
plt.grid(True)
plt.show()
"""
Explanation: The optimized positions are now known. The final value of the function is 32, far closer to zero than the initial 3 million!
We can recover the x,y coordinates by reshaping the array
End of explanation
"""
result = np.zeros((n_ant, 3))
result[:,:-1] = np.round(pos/1000.0, 3)
result
json_result = {}
json_result["antenna_positions"] = result.tolist()
print(json.dumps(json_result, indent=4, separators=(',', ': ')))
"""
Explanation: The API expects 3D coordinates (with a z value which is zero in this case). Therefore we add a column of zeros.
End of explanation
"""
|
JarronL/pynrc | docs/tutorials/HR8799_DMS_Level1b.ipynb | mit | # Import the usual libraries
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
#import matplotlib.patches as mpatches
# Enable inline plotting
%matplotlib inline
# Progress bar
from tqdm.auto import trange, tqdm
import pynrc
from pynrc.simul.ngNRC import create_level1b_FITS
# Disable informational messages and only include warnings and higher
pynrc.setup_logging(level='WARN')
from astropy import units as u
from astropy.coordinates import SkyCoord, Distance
from astropy.time import Time
from astropy.table import Table
import os
import pynrc
from pynrc import nrc_utils
from pynrc.simul.ngNRC import make_gaia_source_table, make_simbad_source_table
from pynrc.simul.ngNRC import create_level1b_FITS
pynrc.setup_logging('WARN', verbose=False)
"""
Explanation: DMS Level 1b Example
HR 8799 GTO 1194
This program will search for previously unknown planets using NIRCam in the F356W and F444W filters using the MASK430R for both filters. In addition, we will probe the physical characteristics of the known planets, HR8799bcde, using multi-filter photometry with the LW Bar mask. The medium-band filters will be observed with a fiducial override to place the primary source on the narrow end of the occulting mask. The NIRCam observations will use two roll angles ($\pm5$ deg) and a reference star to assist with suppression of residuals in the coronagraphic image.
This notebook uses output from the APT file of PID 1194 to simulate the observations and save them to DMS-like FITS files (Level 1b data).
End of explanation
"""
import os
# Read in APT
pid = 1194
pid_str = f'pid{pid:05d}'
save_dir = f'/Users/jarron/NIRCam/Data/NRC_Sims/Sim_{pid_str}/'
# save_dir = f'/data/NIRData/NRC_Sims/Sim_{pid_str}/'
# APT input files
apt_file_dir = '../../notebooks/APT_output/'
fprefix = f'pid{pid}'
json_file = f'{apt_file_dir}{fprefix}.timing.json'
sm_acct_file = f'{apt_file_dir}{fprefix}.smart_accounting'
pointing_file = f'{apt_file_dir}{fprefix}.pointing'
xml_file = f'{apt_file_dir}{fprefix}.xml'
# Make sure files exist
for f in [json_file, sm_acct_file, pointing_file, xml_file]:
print(f, os.path.isfile(f))
"""
Explanation: APT Inputs
From the final APT file, we need to export a number of files, including the .timing.json, .smart_accounting, .pointing, and .xml files. These are then parsed by pynrc to configure a series of observation visits and associated NIRCam objects.
End of explanation
"""
# Define 2MASS Ks bandpass and source information
bp_k = pynrc.bp_2mass('k')
# Science source, dist, age, sptype, Teff, [Fe/H], log_g, mag, band
args_sci = ('HR 8799', 39.0, 30, 'F0V', 7430, -0.47, 4.35, 5.24, bp_k)
# References source, sptype, Teff, [Fe/H], log_g, mag, band
args_ref = ('HD 220657', 'F8III', 5888, -0.01, 3.22, 3.04, bp_k)
# Directory housing VOTables
# http://vizier.u-strasbg.fr/vizier/sed/
votdir = '../../notebooks/votables/'
# Fit spectrum to SED photometry
name_sci, dist_sci, age, spt_sci, Teff_sci, feh_sci, logg_sci, mag_sci, bp_sci = args_sci
vot = votdir + name_sci.replace(' ' ,'') + '.vot'
args = (name_sci, spt_sci, mag_sci, bp_sci, vot)
kwargs = {'Teff':Teff_sci, 'metallicity':feh_sci, 'log_g':logg_sci}
src = pynrc.source_spectrum(*args, **kwargs)
src.fit_SED(use_err=False, robust=False, wlim=[1,5])
# Final source spectrum (pysynphot)
sp_sci = src.sp_model
# Do the same for the reference source
name_ref, spt_ref, Teff_ref, feh_ref, logg_ref, mag_ref, bp_ref = args_ref
vot = votdir + name_ref.replace(' ' ,'') + '.vot'
args = (name_ref, spt_ref, mag_ref, bp_ref, vot)
kwargs = {'Teff':Teff_ref, 'metallicity':feh_ref, 'log_g':logg_ref}
ref = pynrc.source_spectrum(*args, **kwargs)
ref.fit_SED(use_err=False, robust=False, wlim=[0.5,10])
# Final reference spectrum (pysynphot)
sp_ref = ref.sp_model
# Plot spectra
fig, axes = plt.subplots(1,2, figsize=(13,4))
src.plot_SED(xr=[0.3,10], ax=axes[0])
ref.plot_SED(xr=[0.3,10], ax=axes[1])
axes[0].set_title('Science Spectra -- {} ({})'.format(src.name, spt_sci))
axes[1].set_title('Reference Spectra -- {} ({})'.format(ref.name, spt_ref))
fig.tight_layout()
"""
Explanation: Source Definitions
We will utilize the source_spectrum class to generate a model fit to the known spectrophotometry. The user can find the relevant photometric data at http://vizier.u-strasbg.fr/vizier/sed/ and click download data as a VOTable.
The output spectra will then be placed into a target dictionary to be ingested into the DMS simulator portion of pyNRC.
End of explanation
"""
# Initialize target dictionary
targ_dict = {}
# Bandpass corresponding to companion renormalization flux
bp_pl_norm = pynrc.read_filter('F430M')
# Assumed companion magnitudes in filter bandpass
comp_mags = np.array([16.0, 15.0, 14.6, 14.7])
# Dictionary keywords should match APT target names
targ_dict['HR8799'] = {
'type' : 'FixedTargetType',
'TargetName' : 'HR8799', 'TargetArchiveName' : 'HR8799',
'EquatorialCoordinates' : "23 07 28.7155 +21 08 3.30",
'RAProperMotion' : 108.551*u.mas/u.yr,
'DecProperMotion' : -49.639*u.mas/u.yr,
'parallax' : 24.76*u.mas, 'age_Myr' : 30,
'params_star' : {'sp' : sp_sci},
'params_companions' : {
'b' : {'xy':(-1.625, 0.564), 'runits':'arcsec', 'mass':10,
'renorm_args':(comp_mags[0], 'vegamag', bp_pl_norm)
},
'c' : {'xy':( 0.319, 0.886), 'runits':'arcsec', 'mass':10,
'renorm_args':(comp_mags[1], 'vegamag', bp_pl_norm)
},
'd' : {'xy':( 0.588, -0.384), 'runits':'arcsec', 'mass':10,
'renorm_args':(comp_mags[2], 'vegamag', bp_pl_norm)
},
'e' : {'xy':( 0.249, 0.294), 'runits':'arcsec', 'mass':10,
'renorm_args':(comp_mags[3], 'vegamag', bp_pl_norm)
},
},
'params_disk_model' : None,
'src_tbl' : None,
}
# Reference source
targ_dict['HD220657'] = {
'type' : 'FixedTargetType',
'TargetName' : 'HD220657', 'TargetArchiveName' : 'ups Peg',
'EquatorialCoordinates' : "23 25 22.7835 +23 24 14.76",
'RAProperMotion' : 192.19*u.mas/u.yr,
'DecProperMotion' : 36.12*u.mas/u.yr,
'parallax' : 19.14*u.mas, 'age_Myr' : None,
'params_star' : {'sp' : sp_ref},
'params_companions' : None,
'params_disk_model' : None,
'src_tbl' : None,
}
# Populate coordinates and calculate distance from parallax info
for k in targ_dict.keys():
d = targ_dict[k]
dist = Distance(parallax=d['parallax']) if d['parallax'] is not None else None
c = SkyCoord(d['EquatorialCoordinates'], frame='icrs', unit=(u.hourangle, u.deg),
pm_ra_cosdec=d['RAProperMotion'], pm_dec=d['DecProperMotion'],
distance=dist, obstime='J2000')
d['sky_coords'] = c
d['ra_J2000'], d['dec_J2000'] = (c.ra.deg, c.dec.deg)
d['dist_pc'] = c.distance.value if dist is not None else None
# Auto-generate source tables
src_tbl = make_gaia_source_table(c)
d['src_tbl'] = src_tbl if len(src_tbl)>0 else None
"""
Explanation: Target Information
For each target specified in the APT file, we want to populate a dictionary with coordinate information and astrophysical properties. The dictionary keys should match the APT "Name in the Proposal" so that each observation can be matched to its corresponding target information.
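One piece of coordinate bookkeeping worth spelling out: the distance used below follows directly from the parallax, d[pc] = 1000 / parallax[mas], which is what astropy's Distance(parallax=...) computes. A trivial sketch (hypothetical helper name):

```python
def parallax_to_distance_pc(parallax_mas):
    # distance in parsecs from a parallax in milliarcseconds
    return 1000.0 / parallax_mas
```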
For each designated target, there are four types of objects that can be added: stellar source, point source companions, disk object, and/or a table of point sources specifying magnitudes in NIRCam filters.
Stellar source:
python
params_star = {
'sptype' : 'A0V', 'Teff': 10325, 'metallicity' : 0, 'log_g' : 4.09,
'v_mag' : 5.690, 'j_mag': 5.768, 'h_mag': 5.753, 'k_mag': 5.751,
},
to generate a spectrum using pynrc.stellar_spectrum or add the parameter directly
python
params_star = {'sp' : sp_star}
Companions (where bp_renorm is a pysynphot bandpass):
python
params_hci_companions = {
'b' : {'xy':(-1.625, 0.564), 'runits':'arcsec', 'renorm_args':(16.0, 'vegamag', bp_renorm)},
'c' : {'xy':( 0.319, 0.886), 'runits':'arcsec', 'renorm_args':(15.0, 'vegamag', bp_renorm)},
'd' : {'xy':( 0.588, -0.384), 'runits':'arcsec', 'renorm_args':(14.6, 'vegamag', bp_renorm)},
'e' : {'xy':( 0.249, 0.294), 'runits':'arcsec', 'renorm_args':(14.7, 'vegamag', bp_renorm)},
}
See obs_hci.add_planet() function for more information and advanced functionality.
Companion disk model:
python
params_disk_model = {
'file': 'HD10647.fits', # path to file
'wavelength': 3.0, # model wavelength (um)
'pixscale': 0.01575, # input image arcsec/pixel
'units': 'Jy/pixel', # model flux units (e.g., mJy/arcsec, Jy/pixel, etc)
'dist': 17.34, # assumed distance to model (pc)
'cen_star': True, # does the model include stellar flux?
}
Table of stellar sources:
``` python
from astropy.io import ascii
cat_file = 'lmc_catalog.cat'
names = [
'index', 'ra', 'dec', 'F070W', 'F090W', 'F115W', 'F140M', 'F150W', 'F150W2',
'F162M', 'F164N', 'F182M', 'F187N', 'F200W', 'F210M', 'F212N', 'F250M', 'F277W',
'F300M','F322W2', 'F323N', 'F335M', 'F356W', 'F360M', 'F405N', 'F410M', 'F430M',
'F444W', 'F460M', 'F466N', 'F470N', 'F480M'
]
src_tbl = ascii.read(cat_file, names=names)
```
Use `make_gaia_source_table` and `make_simbad_source_table` to query Gaia DR2 and Simbad to auto-generate tables.
End of explanation
"""
sim_config = {
# APT input files
'json_file' : json_file,
'sm_acct_file' : sm_acct_file,
'pointing_file' : pointing_file,
'xml_file' : xml_file,
# Output directory
'save_dir' : save_dir,
# Initialize random seeds if repeatability is required
# Create separate random number generators for dithers and noise
'rand_seed_init' : 1234,
# Date and time of observations
'obs_date' : '2022-11-04',
'obs_time' : '12:00:00',
# Position angle of observatory
# User should check acceptable range in APT's Roll Analysis
'pa_v3' : 90.0,
# Source information
'params_targets' : targ_dict,
# PSF size information for WebbPSF_ext
'params_webbpsf' : {'fov_pix': None, 'oversample': 2},
# Position-dependent PSFs for convolution
'params_psfconv' : {'npsf_per_full_fov': 9, 'osamp': 1, 'sptype': 'G0V'},
# Wavefront error drift settings
'params_wfedrift' : {'case': 'BOL', 'slew_init': 10, 'plot': False, 'figname': None},
# For coronagraphic masks, sample large grid of points?
'large_grid' : True,
# Slew and dither pointing uncertainties
'large_slew' : 100.0, # Slew to target (mas)
'ta_sam' : 5.0, # SAM movements from TA position (mas)
'std_sam' : 5.0, # Standard dither values (mas)
'sgd_sam' : 2.5, # Small grid dithers (mas)
# Type of image files to save; can be supplied directly
'save_slope' : False, # Save ideal noiseless slope images to FITS
'save_dms' : False, # Save DMS-like ramps to FITS
'dry_run' : False, # Perform a dry-run, not generating any data, just printing visit info
# Noise components to include in full DMS output
'params_noise' : {
'include_poisson' : True, # Photon Noise
'include_dark' : True, # Dark current
'include_bias' : True, # Bias image offset
'include_ktc' : True, # kTC Noise
'include_rn' : True, # Read Noise
'include_cpink' : True, # Correlated 1/f noise between channel
'include_upink' : True, # Channel-dependent 1/f noise
'include_acn' : True, # Alternating column noise
'apply_ipc' : True, # Interpixel capacitance
'apply_ppc' : True, # Post-pixel coupling
'amp_crosstalk' : True, # Amplifier crosstalk
'include_refoffsets': True, # Reference offsets
'include_refinst' : True, # Reference pixel instabilities
'include_colnoise' : True, # Transient detector column noise
'add_crs' : True, # Include cosmic ray
'cr_model' : 'SUNMAX', # Cosmic ray model ('SUNMAX', 'SUNMIN', or 'FLARES')
'cr_scale' : 1, # Cosmic ray probabilities scaling
'apply_nonlinearity': True, # Apply non-linearity
'random_nonlin' : True, # Add randomness to non-linearity
'apply_flats' : True, # pixel-to-pixel QE variations and field-dep illum
},
}
"""
Explanation: Create Observation Parameters
These will used for input into the data ramp simulator and directly correspond to parameters for DMS FITS creation to ingest into the JWST pipeline
End of explanation
"""
# Perform dry-run test first
# prints (detname, aperture, filter, target, visit, grp/seq/act id, expid, offset, dWFE, dtime start)
create_level1b_FITS(sim_config, dry_run=True, save_slope=False, save_dms=False,
visit_id='001:001')
# Create ideal slope images, saved to Level-1b formation
# Will print messages:
# Saving: slope_jw01194001001_03104_00001_nrca5_uncal.fits
create_level1b_FITS(sim_config, dry_run=False, save_slope=True, save_dms=False,
visit_id='001:001')
# Generate simulated Level-1b FITS files
# Will print messages:
# Saving: pynrc_jw01194001001_03106_00001_nrca5_uncal.fits
create_level1b_FITS(sim_config, dry_run=False, save_slope=False, save_dms=True,
visit_id='001:001', apname='NRCA5_MASK430R')
"""
Explanation: Perform simulations of observations
The next few cells demonstrate how to simulate observations for a single visit in the program (001:001). In order to generate a subset of observations, the create_level1b_FITS function has a few keyword settings, including visit_id, apname, filter, and detname.
In addition, there are some diagnostic-level options to verify that simulations are being generated as expected before going through the long process of creating everything.
dry_run: Won't generate any image data, but instead runs through each observation, printing detector info, SIAF aperture name, filter, visit IDs, exposure numbers, and dither information. If set to None, then grabs keyword from sim_config argument, otherwise defaults to False if not specified. If paired with save_dms, then will generate an empty set of DMS FITS files with headers populated, but data set to all zeros.
save_slope: Saves noiseless slope images to a separate DMS-like FITS file named slope_{DMSfilename}. If set to None, then grabs the keyword from sim_config, otherwise defaults to False if not found. Has no effect if dry_run=True.
save_dms: Option to disable simulation of ramp data and creation of DMS FITS. If dry_run=True, then setting save_dms=True will save DMS FITS files populated with all zeros. If set to None, then grabs keyword from sim_config; if no keyword is found, then defaults to True if dry_run=False, otherwise False.
End of explanation
"""
|
timnon/pyschedule | example-notebooks/readme-notebook.ipynb | apache-2.0 | pip install pyschedule
"""
Explanation: pyschedule - resource-constrained scheduling in python
pyschedule is the easiest way to match tasks with resources. Do you need to plan a conference or schedule your employees and there are a lot of requirements to satisfy, like availability of rooms or maximal allowed working times? Then pyschedule might be for you. Install it with pip:
End of explanation
"""
# Load pyschedule and create a scenario with ten steps planning horizon
from pyschedule import Scenario, solvers, plotters
S = Scenario('hello_pyschedule',horizon=10)
# Create two resources
Alice, Bob = S.Resource('Alice'), S.Resource('Bob')
# Create three tasks with lengths 1,2 and 3
cook, wash, clean = S.Task('cook',1), S.Task('wash',2), S.Task('clean',3)
# Assign tasks to resources, either Alice or Bob,
# the += operator connects tasks and resources
cook += Alice|Bob
wash += Alice|Bob
clean += Alice|Bob
# Solve and print solution
S.use_makespan_objective()
solvers.mip.solve(S,msg=1)
# Print the solution
print(S.solution())
"""
Explanation: Here is a hello world example, you can also find this document as a <a href="https://github.com/timnon/pyschedule-notebooks/blob/master/README.ipynb">notebook</a>. There are more example notebooks <a href="https://github.com/timnon/pyschedule-notebooks/">here</a> and simpler examples in the <a href="https://github.com/timnon/pyschedule/tree/master/examples">examples folder</a>. For a technical overview go to <a href="https://github.com/timnon/pyschedule/blob/master/docs/pyschedule-overview.md">here</a>.
End of explanation
"""
%matplotlib inline
plotters.matplotlib.plot(S,fig_size=(10,5))
"""
Explanation: In this example we use a makespan objective which means that we want to minimize the completion time of the last task. Hence, Bob should do the cooking from 0 to 1 and then do the washing from 1 to 3, whereas Alice will only do the cleaning from 0 to 3. This will ensure that both are done after three hours. This table representation is a little hard to read, we can visualize the plan using matplotlib:
End of explanation
"""
solvers.mip.solve(S,kind='SCIP')
"""
Explanation: pyschedule supports different solvers, classical <a href="https://en.wikipedia.org/wiki/Integer_programming">MIP</a>- as well as <a href="https://en.wikipedia.org/wiki/Constraint_programming">CP</a>-based ones. All solvers and their capabilities are listed in the <a href="https://github.com/timnon/pyschedule/blob/master/docs/pyschedule-overview.md">overview notebook</a>. The default solver used above uses a standard MIP-model in combination with <a href="https://projects.coin-or.org/Cbc">CBC</a>, which is part of package <a href="https://pypi.python.org/pypi/PuLP">pulp</a>. If you have <a href="http://scip.zib.de/">SCIP</a> installed (command "scip" must be running), you can easily switch to SCIP using:
End of explanation
"""
|
shikhar413/openmc | examples/jupyter/post-processing.ipynb | mit | %matplotlib inline
from IPython.display import Image
import numpy as np
import matplotlib.pyplot as plt
import openmc
"""
Explanation: Post Processing
This notebook demonstrates some basic post-processing tasks that can be performed with the Python API, such as plotting a 2D mesh tally and plotting neutron source sites from an eigenvalue calculation. The problem we will use is a simple reflected pin-cell.
End of explanation
"""
# 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
water.add_nuclide('B10', 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
"""
Explanation: Generate Input Files
First we need to define materials that will be used in the problem. We'll create three materials for the fuel, water, and cladding of the fuel pin.
End of explanation
"""
# Instantiate a Materials collection
materials = openmc.Materials([fuel, water, zircaloy])
# Export to "materials.xml"
materials.export_to_xml()
"""
Explanation: With our three materials, we can now create a materials file object that can be exported to an actual XML file.
End of explanation
"""
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-0.63, boundary_type='reflective')
max_z = openmc.ZPlane(z0=+0.63, boundary_type='reflective')
"""
Explanation: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.
End of explanation
"""
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
pin_cell_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
pin_cell_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
pin_cell_universe.add_cell(moderator_cell)
"""
Explanation: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
"""
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = pin_cell_universe
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
"""
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
End of explanation
"""
# Create Geometry and set root Universe
geometry = openmc.Geometry(root_universe)
# Export to "geometry.xml"
geometry.export_to_xml()
"""
Explanation: We now must create a geometry that is assigned a root universe, put the geometry into a geometry file, and export it to XML.
End of explanation
"""
# OpenMC simulation parameters
settings = openmc.Settings()
settings.batches = 100
settings.inactive = 10
settings.particles = 5000
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings.source = openmc.Source(space=uniform_dist)
# Export to "settings.xml"
settings.export_to_xml()
"""
Explanation: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 90 active batches each with 5000 particles.
End of explanation
"""
plot = openmc.Plot.from_geometry(geometry)
plot.pixels = (250, 250)
plot.to_ipython_image()
"""
Explanation: Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.
End of explanation
"""
# Instantiate an empty Tallies object
tallies = openmc.Tallies()
# Create mesh which will be used for tally
mesh = openmc.RegularMesh()
mesh.dimension = [100, 100]
mesh.lower_left = [-0.63, -0.63]
mesh.upper_right = [0.63, 0.63]
# Create mesh filter for tally
mesh_filter = openmc.MeshFilter(mesh)
# Create mesh tally to score flux and fission rate
tally = openmc.Tally(name='flux')
tally.filters = [mesh_filter]
tally.scores = ['flux', 'fission']
tallies.append(tally)
# Export to "tallies.xml"
tallies.export_to_xml()
"""
Explanation: As we can see from the plot, we have a nice pin cell with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a 2D mesh tally.
End of explanation
"""
# Run OpenMC!
openmc.run()
"""
Explanation: Now we have a complete set of inputs, so we can go ahead and run our simulation.
End of explanation
"""
# Load the statepoint file
sp = openmc.StatePoint('statepoint.100.h5')
"""
Explanation: Tally Data Processing
Our simulation ran successfully and created a statepoint file with all the tally data in it. We begin our analysis here loading the statepoint file and 'reading' the results. By default, data from the statepoint file is only read into memory when it is requested. This helps keep the memory use to a minimum even when a statepoint file may be huge.
End of explanation
"""
tally = sp.get_tally(scores=['flux'])
print(tally)
"""
Explanation: Next we need to get the tally, which can be done with the StatePoint.get_tally(...) method.
End of explanation
"""
tally.sum
"""
Explanation: The statepoint file actually stores the sum and sum-of-squares for each tally bin from which the mean and variance can be calculated as described here. The sum and sum-of-squares can be accessed using the sum and sum_sq properties:
End of explanation
"""
print(tally.mean.shape)
(tally.mean, tally.std_dev)
"""
Explanation: However, the mean and standard deviation of the mean are usually what you are more interested in. The Tally class also has properties mean and std_dev which automatically calculate these statistics on-the-fly.
End of explanation
"""
flux = tally.get_slice(scores=['flux'])
fission = tally.get_slice(scores=['fission'])
print(flux)
"""
Explanation: The tally data has three dimensions: one for filter combinations, one for nuclides, and one for scores. We see that there are 10000 filter combinations (corresponding to the 100 x 100 mesh bins), a single nuclide (since none was specified), and two scores. If we only want to look at a single score, we can use the get_slice(...) method as follows.
End of explanation
"""
flux.std_dev.shape = (100, 100)
flux.mean.shape = (100, 100)
fission.std_dev.shape = (100, 100)
fission.mean.shape = (100, 100)
fig = plt.subplot(121)
fig.imshow(flux.mean)
fig2 = plt.subplot(122)
fig2.imshow(fission.mean)
"""
Explanation: To get the bins into a form that we can plot, we can simply change the shape of the array since it is a numpy array.
End of explanation
"""
# Determine relative error
relative_error = np.zeros_like(flux.std_dev)
nonzero = flux.mean > 0
relative_error[nonzero] = flux.std_dev[nonzero] / flux.mean[nonzero]
# distribution of relative errors
ret = plt.hist(relative_error[nonzero], bins=50)
"""
Explanation: Now let's say we want to look at the distribution of relative errors of our tally bins for flux. First we create a new variable called relative_error and set it to the ratio of the standard deviation and the mean, being careful not to divide by zero in case some bins were never scored to.
End of explanation
"""
sp.source
"""
Explanation: Source Sites
Source sites can be accessed from the source property. As shown below, the source sites are represented as a numpy array with a structured datatype.
End of explanation
"""
sp.source['E']
"""
Explanation: If we want, say, only the energies from the source sites, we can simply index the source array with the name of the field:
End of explanation
"""
# Create log-spaced energy bins from 1 keV to 10 MeV
energy_bins = np.logspace(3,7)
# Calculate pdf for source energies
probability, bin_edges = np.histogram(sp.source['E'], energy_bins, density=True)
# Make sure integrating the PDF gives us unity
print(sum(probability*np.diff(energy_bins)))
# Plot source energy PDF
plt.semilogx(energy_bins[:-1], probability*np.diff(energy_bins), drawstyle='steps')
plt.xlabel('Energy (eV)')
plt.ylabel('Probability/eV')
"""
Explanation: Now, we can look at things like the energy distribution of source sites. Note that we don't directly use the matplotlib.pyplot.hist method since our binning is logarithmic.
End of explanation
"""
plt.quiver(sp.source['r']['x'], sp.source['r']['y'],
sp.source['u']['x'], sp.source['u']['y'],
np.log(sp.source['E']), cmap='jet', scale=20.0)
plt.colorbar()
plt.xlim((-0.5,0.5))
plt.ylim((-0.5,0.5))
"""
Explanation: Let's also look at the spatial distribution of the sites. To make the plot a little more interesting, we can also include the direction of the particle emitted from the source and color each source by the logarithm of its energy.
End of explanation
"""
|
eds-uga/csci1360-fa16 | assignments/A5/A5_Q2.ipynb | mit | assert Movie
m = Movie("Chris Evans", "Robert Downey, Jr.", "Scarlett Johansson", title = "Avengers: Infinity War", year = 2018, stars = 4.9, genre = "Action/Adventure")
assert m.title == "Avengers: Infinity War"
assert m.year == 2018
assert set(m.starring) == set(["Chris Evans", "Robert Downey, Jr.", "Scarlett Johansson"])
assert m.genre == "Action/Adventure"
assert int(m.stars) == 4
m = Movie("Person 1", "Person 2", "Unnamed Avenger", "Who now?", title = "Avengers: Untitled", year = 2019, stars = 4.95, genre = "Action/Adventure")
assert m.getTitle() == "Avengers: Untitled"
assert set(m.getStarring()) == set(["Person 1", "Person 2", "Unnamed Avenger", "Who now?"])
assert int(m.getStars()) == int(4.95)
m.addActor("Probably Robert Downey")
assert set(m.getStarring()) == set(["Person 1", "Person 2", "Unnamed Avenger", "Who now?", "Probably Robert Downey"])
m.remActor("person 2")
assert set(m.getStarring()) == set(["Person 1", "Unnamed Avenger", "Who now?", "Probably Robert Downey"])
"""
Explanation: Q2
In this question, we'll work with objects to develop a movie tracker.
A
Write a Movie class. It should have the following attributes:
title: string title of the movie
starring: list of actors and actresses (strings) in the movie
year: integer year of release
stars: number of stars, 0-5
genre: string indicating the genre of the movie
It should have the following methods:
A constructor with arguments for all the class attributes
getTitle(): returns the title of the movie
getStarring(): returns the list of starring actors and actresses
addActor(actor): adds an actor's name to the list of those starring
remActor(actor): removes the specified actor from the list of those starring (case-insensitive)
getStars(): returns the number of stars for the movie
No import statements allowed.
End of explanation
"""
assert MovieTracker
m1 = Movie("Chris Evans", "Robert Downey, Jr.", "Scarlett Johansson", title = "Avengers: Infinity War", year = 2018, stars = 4.9, genre = "Action/Adventure")
mt = MovieTracker()
mt.addMovie(m1)
assert len(mt.movies) == 1
assert mt.avgRating() == 4.9
assert len(mt.ratingAtLeast(5.0)) == 0
m1 = Movie("Chris Evans", "Robert Downey, Jr.", "Scarlett Johansson", title = "Avengers: Infinity War", year = 2018, stars = 4.9, genre = "Action/Adventure")
m2 = Movie("Person 1", "Person 2", "Unnamed Avenger", "Who now?", title = "Avengers: Untitled", year = 2019, stars = 4.95, genre = "Action/Adventure")
mt = MovieTracker()
mt.addMovie(m1)
mt.addMovie(m2)
assert len(mt.movies) == 2
assert mt.avgRating() == 4.925
assert len(mt.ratingAtLeast(4)) == 2
assert len(mt.getGenre("action/adventure")) == 2
mt.remMovie("avengers: untitled")
assert len(mt.movies) == 1
"""
Explanation: B
Write a MovieTracker class. It will be used to track multiple Movie instances simultaneously. It should have the following attributes:
movies: a list of Movie instances
It should have the following methods:
A constructor that takes no arguments and initializes the class attribute to be an empty list
addMovie(movie): adds a new Movie object to its internal list of movies
remMovie(movie): removes the specified Movie object; equality is determined by movie title (case-insensitive)
avgRating(): returns the average rating (stars) of all the movies in the tracker
getGenre(genre): returns all the movies in the tracker that belong to the specified genre (case-insensitive), otherwise returns an empty list
ratingAtLeast(rating): returns all the movies in the tracker with at least the specified rating, otherwise returns an empty list
End of explanation
"""
|
Brunel-Visualization/Brunel | python/src/examples/.ipynb_checkpoints/Brunel Cars-checkpoint.ipynb | apache-2.0 | import pandas as pd
import ibmcognitive
cars = pd.read_csv("data/Cars.csv")
cars.head(6)
"""
Explanation: Demo of Brunel on Cars Data
The Data
We read the data into a pandas data frame. In this case we are grabbing some data that represents cars.
We read it in and call the brunel use method to ensure the names are usable
End of explanation
"""
brunel x(mpg) y(horsepower) color(origin) :: width=800, height=200, output=d3
brunel x(horsepower) y(weight) color(origin) tooltip(name) filter(year) :: width=800, height=200, output=d3
brunel bar x(mpg) y(#count) filter(mpg):: data=cars, width=900, height=400, output=d3
brunel chord y(origin, year) size(#count) color(origin) :: width=500, height=400, output=d3
brunel treemap y(origin, year, cylinders) color(mpg) mean(mpg) size(#count) label(cylinders) :: width=900, height=600
"""
Explanation: Basics
We import the Brunel module and create a couple of simple scatterplots.
We use the brunel magic to do so
The basic format of each call to Brunel is simple; whether it is a single line or a set of lines (a cell magic),
they are concatenated together, and the result interprested as one command.
This command must start with an ACTION, but may have a set of options at the end specified as ACTION :: OPTIONS.
ACTION is the Brunel action string; OPTIONS are key=value pairs:
* data defines the pandas dataframe to use. If not specified, the pandas data that best fits the action command will be used
* width and height may be supplied to set the resulting size
For details on the Brunel Action languages, see the Online Docs on Bluemix
End of explanation
"""
def identify(x, search):
for y in search:
if y.lower() in x.lower(): return y
return None
cars['Type'] = cars.name.map(lambda x: identify(x, ["Ford", "Buick"]))
%%brunel x(engine) y(mpg) color(Type) style('size:50%; fill:#eee') +
text x(engine) y(mpg) color(Type) label(Type) style('text {font-size:14; font-weight:bold; fill:darker}')
:: width=800, height=800, output=d3
brunel x(mpg) y(horsepower) color(origin) tooltip(mpg) filter(mpg) :: width=800, height=200
from random import randint;
randint(2,9)
brunel x(mpg) y(horsepower) color(origin) tooltip(mpg) filter(acceleration) :: width=800, height=200
"""
Explanation: Using the Dataframe
Since Brunel uses the data frame, we can modify or add to that object to show data in different ways. In the following example we apply a function that takes a name and sees if it matches one of a set of sub-strings. We map this function to the car names to create a new column consisting of the names that match either "Ford" or "Buick", and use that in our Brunel action.
Because the Brunel action is long (we are adding some CSS styling), we split it into two parts for convenience.
End of explanation
"""
|
chetan51/nupic.research | projects/dynamic_sparse/notebooks/kWinners-backup.ipynb | gpl-3.0 | dataset = Dataset(config=dict(dataset_name='MNIST', data_dir='~/nta/results'))
# build up a small neural network
inputs = []
def init_weights():
W1 = torch.randn((4,10), requires_grad=True)
b1 = torch.zeros(10, requires_grad=True)
W2 = torch.randn((10,3), requires_grad=True)
b2 = torch.zeros(3, requires_grad=True)
return [W1, b1, W2, b2]
# torch cross_entropy is log softmax activation + negative log likelihood
loss_func = F.cross_entropy
# simple feedforward model
def model(input):
W1, b1, W2, b2 = parameters
x = input @ W1 + b1
x = F.relu(x)
x = x @ W2 + b2
return x
# calculate accuracy
def accuracy(out, y):
preds = torch.argmax(out, dim=1)
return (preds == y).float().mean().item()
from sklearn.model_selection import StratifiedKFold
cv = StratifiedKFold(n_splits=3)
# train
lr = 0.01
epochs = 1000
for train, test in cv.split(x, y):
x_train, y_train = x[train], y[train]
x_test, y_test = x[test], y[test]
parameters = init_weights()
print("Accuracy before training: {:.4f}".format(accuracy(model(x), y)))
for epoch in range(epochs):
loss = loss_func(model(x_train), y_train)
if epoch % (epochs/5) == 0:
print("Loss: {:.8f}".format(loss.item()))
# backpropagate
loss.backward()
with torch.no_grad():
for param in parameters:
# update weights
param -= lr * param.grad
# zero gradients
param.grad.zero_()
print("Training Accuracy after training: {:.4f}".format(accuracy(model(x_train), y_train)))
print("Test Accuracy after training: {:.4f}".format(accuracy(model(x_test), y_test)))
print("---------------------------")
"""
Explanation: from sklearn import datasets
iris = datasets.load_iris()
x = torch.tensor(iris.data, dtype=torch.float)
y = torch.tensor(iris.target, dtype=torch.long)
x.shape, y.shape
End of explanation
"""
import torch
from torch import nn
from torchvision import models
class KWinners(nn.Module):
def __init__(self, k=10):
super(KWinners, self).__init__()
self.duty_cycle = None
self.k = 10
self.beta = 100
self.T = 1000
self.current_time = 0
def forward(self, x):
# initialize duty cycle
if self.duty_cycle is None:
self.duty_cycle = torch.zeros_like(x)
# keep track of number of past iterations
if self.current_time < self.T:
self.current_time += 1
# calculating threshold and updating duty cycle
# should not be in the graph
tx = x.clone().detach()
# no need to calculate gradients
with torch.set_grad_enabled(False):
# get threshold
# nonzero_mask = torch.nonzero(tx) # will need for sparse weights
threshold = self._get_threshold(tx)
# apply boosting, then threshold to get the winner mask
boosting = self._calculate_boosting()
tx *= boosting
mask = tx > threshold
# update duty cycle with the current winners
self._update_duty_cycle(mask)
return x * mask
def _get_threshold(self, x):
"""Calculate dynamic theshold"""
abs_x = torch.abs(x).view(-1)
pos = abs_x.size()[0] - self.k
threshold, _ = torch.kthvalue(abs_x, pos)
return threshold
def _update_duty_cycle(self, mask):
"""Update duty cycle"""
time = min(self.T, self.current_time)
self.duty_cycle *= (time-1)/time
self.duty_cycle += mask.float() / time
def _calculate_boosting(self):
"""Calculate boosting according to formula on spatial pooling paper"""
mean_duty_cycle = torch.mean(self.duty_cycle)
diff_duty_cycle = self.duty_cycle - mean_duty_cycle
boosting = (self.beta * diff_duty_cycle).exp()
return boosting
"""
Explanation: Seems to be overfitting the model nicely. Actions:
- Test accuracy - DONE
- Repeat the experiment with a held out test set, still holds? - DONE
- Replace RELU with k-Winners - is k-Winners working? - TODO
- Extend to larger dataset, MNIST
- Replace RELU with a class
- Extend to larger model, CNNs
- Run similar tests for both RELU and k-Winners - results hold?
End of explanation
"""
|
atavory/ibex | examples/boston_plotting_cv_preds.ipynb | bsd-3-clause | import pandas as pd
import numpy as np
from sklearn import datasets
from sklearn import model_selection
import seaborn as sns
sns.set_style('whitegrid')
sns.despine()
from ibex import trans
from ibex.sklearn import linear_model as pd_linear_model
from ibex.sklearn import decomposition as pd_decomposition
from ibex.sklearn import preprocessing as pd_preprocessing
from ibex.sklearn import ensemble as pd_ensemble
from ibex import xgboost as pd_xgboost
from ibex.sklearn import model_selection as pd_model_selection
%pylab inline
"""
Explanation: Plotting Cross-Validated Predictions On The Boston Dataset
This notebook illustrates finding feature importance in the Boston dataset. It is a version of the Scikit-Learn example Plotting Cross-Validated Predictions.
The main point it shows is using pandas structures throughout the code and integrating nicely with seaborn.
End of explanation
"""
dataset = datasets.load_boston()
boston = pd.DataFrame(dataset.data, columns=dataset.feature_names)
features = dataset.feature_names
boston['price'] = dataset.target
boston.head()
"""
Explanation: Loading The Data
First we load the dataset into a pandas.DataFrame.
End of explanation
"""
linear_y_hat = pd_model_selection.cross_val_predict(
pd_linear_model.LinearRegression(),
boston[features],
boston.price)
linear_y_hat.head()
linear_cv= pd.concat([linear_y_hat, boston.price], axis=1)
linear_cv['type'] = 'linear'
linear_cv.columns = ['y_hat', 'y', 'regressor']
linear_cv.head()
rf_y_hat = pd_model_selection.cross_val_predict(
pd_ensemble.RandomForestRegressor(),
boston[features],
boston.price)
rf_cv= pd.concat([rf_y_hat, boston.price], axis=1)
rf_cv['type'] = 'rf'
rf_cv.columns = ['y_hat', 'y', 'regressor']
xgb_rf_y_hat = pd_model_selection.cross_val_predict(
pd_xgboost.XGBRegressor(),
boston[features],
boston.price)
xgb_rf_cv= pd.concat([xgb_rf_y_hat, boston.price], axis=1)
xgb_rf_cv['type'] = 'xgb_rf'
xgb_rf_cv.columns = ['y_hat', 'y', 'regressor']
cvs = pd.concat([linear_cv, rf_cv, xgb_rf_cv])
cvs.regressor.unique()
"""
Explanation: Building The Cross Validated Predictions
We will use a linear predictor, and a random forest predictor.
End of explanation
"""
min_, max_ = cvs[['y_hat', 'y']].min().min(), cvs[['y_hat', 'y']].max().max()
sns.lmplot(
x='y',
y='y_hat',
hue='regressor',
data=cvs,
palette={'linear': 'grey', 'rf': 'brown', 'xgb_rf': 'green'});
plot(np.linspace(min_, max_, 100), np.linspace(min_, max_, 100), '--', color='darkgrey');
tick_params(colors='0.6')
xlim((min_, max_))
ylim((min_, max_))
figtext(
0,
-0.1,
'Cross-validated predictions for linear and random-forest regressor on the price in the Boston dataset;\n'
'the linear regressor has inferior performance here, in particular for lower prices');
"""
Explanation: Plotting The Cross-Validated Predictions
Finally, we can plot the results:
End of explanation
"""
|
tensorflow/model-remediation | docs/min_diff/guide/integrating_min_diff_with_min_diff_model.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
!pip install --upgrade tensorflow-model-remediation
import tensorflow as tf
tf.get_logger().setLevel('ERROR') # Avoid TF warnings.
from tensorflow_model_remediation import min_diff
from tensorflow_model_remediation.tools.tutorials_utils import uci as tutorials_utils
"""
Explanation: Integrating MinDiff with MinDiffModel
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/responsible_ai/model_remediation/min_diff/guide/integrating_min_diff_with_min_diff_model">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/model-remediation/blob/master/docs/min_diff/guide/integrating_min_diff_with_min_diff_model.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/model-remediation/blob/master/docs/min_diff/guide/integrating_min_diff_with_min_diff_model.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a>
</td>
<td>
<a target="_blank" href="https://storage.googleapis.com/tensorflow_docs/model-remediation/docs/min_diff/guide/integrating_min_diff_with_min_diff_model.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table></div>
Introduction
There are two steps to integrating MinDiff into your model:
Prepare the data (covered in the input preparation guide).
Alter or create a model that will integrate MinDiff during training.
This guide will cover the simplest way to complete the second step: using MinDiffModel.
Setup
End of explanation
"""
# Original DataFrame for training, sampled at 0.3 for reduced runtimes.
train_df = tutorials_utils.get_uci_data(split='train', sample=0.3)
# Dataset needed to train with MinDiff.
train_with_min_diff_ds = (
tutorials_utils.get_uci_with_min_diff_dataset(split='train', sample=0.3))
"""
Explanation: First, download the data. For succinctness, the input preparation logic has been factored out into helper functions as described in the input preparation guide. You can read the full guide for details on this process.
End of explanation
"""
model = tutorials_utils.get_uci_model()
model.compile(optimizer='adam', loss='binary_crossentropy')
df_without_target = train_df.drop(['target'], axis=1) # Drop 'target' for x.
_ = model.fit(
x=dict(df_without_target), # The model expects a dictionary of features.
y=train_df['target'],
batch_size=128,
epochs=1)
"""
Explanation: Original Model
This guide uses a basic, untuned keras.Model using the Functional API to highlight using MinDiff. In a real world application, you would carefully choose the model architecture and use tuning to improve model quality before attempting to address any fairness issues.
Since MinDiffModel is designed to work with most Keras Model classes, we have factored out the logic of building the model into a helper function: get_uci_model.
Training with a Pandas DataFrame
This guide trains over a single epoch for speed, but could easily improve the model's performance by increasing the number of epochs.
End of explanation
"""
model = tutorials_utils.get_uci_model()
model.compile(optimizer='adam', loss='binary_crossentropy')
_ = model.fit(
tutorials_utils.df_to_dataset(train_df, batch_size=128), # Converted to Dataset.
epochs=1)
"""
Explanation: Training with a tf.data.Dataset
The equivalent training with a tf.data.Dataset would look very similar (although initialization and input randomness may yield slightly different results).
End of explanation
"""
original_model = tutorials_utils.get_uci_model()
"""
Explanation: Integrating MinDiff for training
Once the data has been prepared, apply MinDiff to your model with the following steps:
Create the original model as you would without MinDiff.
End of explanation
"""
min_diff_model = min_diff.keras.MinDiffModel(
original_model=original_model,
loss=min_diff.losses.MMDLoss(),
loss_weight=1)
"""
Explanation: Wrap it in a MinDiffModel.
End of explanation
"""
min_diff_model.compile(optimizer='adam', loss='binary_crossentropy')
"""
Explanation: Compile it as you would without MinDiff.
End of explanation
"""
_ = min_diff_model.fit(train_with_min_diff_ds, epochs=1)
"""
Explanation: Train it with the MinDiff dataset (train_with_min_diff_ds in this case).
End of explanation
"""
_ = min_diff_model.evaluate(
tutorials_utils.df_to_dataset(train_df, batch_size=128))
# Calling with MinDiff data will include min_diff_loss in metrics.
_ = min_diff_model.evaluate(train_with_min_diff_ds)
"""
Explanation: Evaluation and Prediction with MinDiffModel
Both evaluating and predicting with a MinDiffModel are similar to doing so with the original model.
When calling evaluate you can pass in either the original dataset or the one containing MinDiff data. If you include MinDiff in the call to evaluate, two things will differ:
An additional metric called min_diff_loss will be present in the output.
The value of the loss metric will be the sum of the original loss metric (not shown in the output) and the min_diff_loss.
End of explanation
"""
_ = min_diff_model.predict(
tutorials_utils.df_to_dataset(train_df, batch_size=128))
_ = min_diff_model.predict(train_with_min_diff_ds) # Identical to results above.
"""
Explanation: When calling predict you can technically also pass in the dataset with the MinDiff data but it will be ignored and not affect the output.
End of explanation
"""
print('MinDiffModel.fit == keras.Model.fit')
print(min_diff.keras.MinDiffModel.fit == tf.keras.Model.fit)
print('MinDiffModel.train_step == keras.Model.train_step')
print(min_diff.keras.MinDiffModel.train_step == tf.keras.Model.train_step)
"""
Explanation: Limitations of using MinDiffModel directly
When using MinDiffModel as described above, most methods will use the default implementations of tf.keras.Model (exceptions listed in the API documentation).
End of explanation
"""
print('Sequential.fit == keras.Model.fit')
print(tf.keras.Sequential.fit == tf.keras.Model.fit)
print('tf.keras.Sequential.train_step == keras.Model.train_step')
print(tf.keras.Sequential.train_step == tf.keras.Model.train_step)
"""
Explanation: For keras.Sequential or keras.Model, this is perfectly fine since they use the same functions.
End of explanation
"""
class CustomModel(tf.keras.Model):
def train_step(self, **kwargs):
pass # Custom implementation.
print('CustomModel.train_step == keras.Model.train_step')
print(CustomModel.train_step == tf.keras.Model.train_step)
"""
Explanation: However, if your model is a subclass of keras.Model, wrapping it with MinDiffModel will effectively lose the customization.
End of explanation
"""
|
cahya-wirawan/SDC-LaneLines-P1 | test.ipynb | mit | import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
%matplotlib inline
img = mpimg.imread('test.jpg')
plt.imshow(img)
"""
Explanation: Run all the cells below to make sure everything is working and ready to go. All cells should run without error.
Test Matplotlib and Plotting
End of explanation
"""
import cv2
# convert the image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
plt.imshow(gray, cmap='Greys_r')
"""
Explanation: Test OpenCV
End of explanation
"""
import tensorflow as tf
with tf.Session() as sess:
a = tf.constant(1)
b = tf.constant(2)
c = a + b
# Should be 3
print("1 + 2 = {}".format(sess.run(c)))
"""
Explanation: Test TensorFlow
End of explanation
"""
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
"""
Explanation: Test Moviepy
End of explanation
"""
new_clip_output = 'test_output.mp4'
test_clip = VideoFileClip("test.mp4")
new_clip = test_clip.fl_image(lambda x: cv2.cvtColor(x, cv2.COLOR_RGB2YUV)) #NOTE: this function expects color images!!
%time new_clip.write_videofile(new_clip_output, audio=False)
HTML("""
<video width="640" height="300" controls>
<source src="{0}" type="video/mp4">
</video>
""".format(new_clip_output))
"""
Explanation: Create a new video with moviepy by processing each frame to YUV color space.
End of explanation
"""
|
hektor-monteiro/python-notebooks | aula-11_Solucao-EDOs.ipynb | gpl-2.0 | ################################################################
# Queda livre
# definindo o problema
#
# d^2x/dt^2 = −g
#
# y = [x,v] e dy/dt = [v,-g]
#
# definimos o estado do sistema como y
#
def quedalivre(estado, tempo):
g0 = estado[1]
g1 = -9.8
return np.array( [ g0 , g1 ] )
"""
Explanation: Ordinary differential equations
Ordinary differential equations (ODEs) are equations containing one or more functions of an independent variable and their derivatives.
Newton's law is an example of an ODE:
$$ \frac{d\vec{P}}{dt} = \vec{F} $$
ODEs appear in countless scientific and practical contexts. They are without doubt one of the main mathematical tools used in physics in particular.
For a detailed description of ODEs see: https://en.wikipedia.org/wiki/Ordinary_differential_equation
In many situations exact analytical solutions are not possible, so we must adopt numerical techniques. Here we will look at the main techniques and strategies for solving this type of equation.
Euler's method
To understand the numerical solution strategy in the simplest way, we start from:
$$ a = \frac{dv}{dt} $$
we can rewrite the equation above as:
$$ dv = a~dt $$
which can be discretized as
$$\Delta v = a~\Delta t$$
From here it is easy to implement numerical strategies. To do so we need to define the initial conditions of the problem we want to solve. The evolution of the solution from the initial conditions $t_0$ and $v_0$ is obtained as:
$$ v_{i+1} = v_i + \frac{dv}{dt}~\Delta t $$
Since we know the relation between velocity and position, we can obtain a similar relation for the position, thus defining the state $[x_i, v_i]$ at a given time $t$.
The method described above is very simple to implement but produces rather approximate solutions. It is known as Euler's method.
Setting up the problem
Before continuing the discussion of ODE solution methods, it is worth discussing a general strategy for setting up the problem to be solved. This will make it easier to solve more complex problems with various methods later on.
Consider the problem of a particle in free fall:
$$ \ddot{x} = -g $$
The equation above can be decomposed into two first-order ODEs:
$$ \dot{x} = v $$ $$ \dot{v} = -g $$
Using Euler's method to obtain the state $[x_i, v_i]$ of the particle at a given $t$ we have:
$$ x_{i+1} = x_i + \dot{x}~\Delta t $$
$$ v_{i+1} = v_i + \dot{v}~\Delta t $$
The form above suggests that we can write the system as a single vector equation, with
$$ y = \begin{bmatrix} x \\ v \end{bmatrix} $$
$$ \dot{y} = \begin{bmatrix} v \\ -g \end{bmatrix} $$
$$ y_{i+1} = y_i + \dot{y}~\Delta t ~~~~~~~~~~(1)$$
Based on the equation above, we see that if we have a function that returns the relevant derivatives of the problem, plus the initial conditions, we can use the vector capabilities of Python to define a generic strategy for solving ODEs.
First we define the problem or model by creating a function that returns the derivatives of all the elements of $y$.
End of explanation
"""
def euler(y,t,dt,model):
ynext = y + model(y, t) * dt
return ynext
"""
Explanation: Having defined the model to be studied, we can write the part that performs the solution — in this case the implementation of Euler's method:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
#plt.figure()
for N in range(5,50,10): # loop to generate solutions with different resolutions
    # setting initial values
    tmax = 5.0  # maximum time for the calculation
    dt = tmax / float(N-1)  # delta t to be used
    time = np.linspace(0, tmax, N)  # vector of times
    # defining the model state vector
    y = np.zeros([N, 2])
    # defining the initial state
    y[0,0] = 100.  # initial position
    y[0,1] = 0.    # initial velocity
    # Here we apply Euler's method starting from the initial condition
    # and stepping along the time vector
    for j in range(N-1):
        y[j+1] = euler(y[j], time[j], dt, quedalivre)
    # plot the obtained solutions
    plt.plot(time, y[:,0], label=str(N))
plt.ylim(0,110)
plt.xlabel('time')
plt.ylabel('position')
plt.legend()
"""
Explanation: Note that the method function receives the current position (or current state), the current time and the adopted $\Delta t$, as well as the function that defines the model to be studied. This function implements equation (1) discussed above.
With the two functions above implemented, solving a specific problem is now easy:
End of explanation
"""
################################################################
# Simple pendulum
# defining the problem for the equation
#   d^2(theta)/dt^2 = -(g/l) sin(theta)
#
# g0 -> dtheta/dt = v  and  g1 -> dv/dt = -(g/l) sin(theta)
def pendulo(estado, tempo):
    l = 0.1  # pendulum length
    g0 = estado[1]  # first derivative
    g1 = -9.8/l*np.sin(estado[0])  # second derivative
    return np.array([g0, g1])
"""
Explanation: Errors in Euler's method
To evaluate the errors made by the method, we perform a Taylor series expansion of the function y at the point $t=t_0+h$:
$$ y(t_{0}+h)=y(t_{0})+hy'(t_{0})+{\frac {1}{2}}h^{2}y''(t_{0})+O(h^{3}). $$
Note that the first two terms of the series are exactly Euler's method. Thus the error made at each step, or local truncation error (LTE), is given by:
$$ \mathrm {LTE} = y(t_{0}+h)-y_{1}={\frac {1}{2}}h^{2}y''(\xi ), $$
for some $ \xi \in [t_{0},t_{0}+h]$.
Since we obtain solutions over a number of intervals and not only at one point, the local error propagates to the following steps. In this way we obtain the global truncation error (GTE):
$$ \mathrm {GTE} = n\,\mathrm {LTE} = \frac{(t-t_0)}{h} {\frac {1}{2}}h^{2}y''(t_{0}) = (t-t_0) {\frac {1}{2}}hy''(t_{0}) $$
That is, Euler's method is a first-order method in h.
A simple variant of Euler's method is the midpoint method. In this method the derivative is estimated at the point $(t+h/2)$, i.e. at the midpoint between $t$ and $t+h$. With this simple change the midpoint method becomes a second-order method.
Runge-Kutta methods
Runge-Kutta (RK) methods can be thought of as a generalization of the philosophy presented for Euler's method. With $\frac{dy}{dx}=f(x,y)$, RK methods take the form:
$$ y_{i+1} = y_i + \varphi (x_i,y_i) h ~~~~~~~~~~(2)$$
where $\varphi (x_i,y_i)$ is an estimate of the required derivatives.
The derivative estimates may be explicit:
$$ \varphi (x_i,y_i) = \sum_{i=1}^{s}a_{i}k_{i} ~~~~~~~~~~(3)$$
or implicit:
$$ y_{n+1}=y_{n}+h\sum_{i=1}^{s}b_{i}k_{i} ~~~~~~~~~~(4)$$
where:
$$ k_{i}=f\left(t_{n}+c_{i}h,\ y_{n}+h\sum_{j=1}^{s}a_{ij}k_{j}\right),\quad i=1,\ldots ,s, $$
with $a_i$ constants and $k_i$ linear combinations of estimates of the derivative $\frac{dy}{dx}$.
To understand this better, let us construct the formulas for the second-order RK method.
Second-order Runge-Kutta
To obtain the expressions of the 2nd-order RK method, we start from the formal integration of the problem to be studied:
$$ \frac{dy}{dt} = f(y,t) \implies y(t) = \int{f(y,t)dt} \implies y_{i+1} = y_i + \int^{t_{i+1}}_{t_i}{f(y,t)dt} $$
To obtain the integral we expand in a Taylor series around the midpoint $t_{i+1/2}$:
$$ f(y,t) \approx f(y_{i+1/2},t_{i+1/2}) + (t-t_{i+1/2})\,\frac{df}{dt}(t_{i+1/2}) + O(h^{2}) $$
Since the term $(t-t_{i+1/2})$ raised to any odd power is equally positive and negative over the interval $t_i<t<t_{i+1}$, we have:
$$ \int^{t_{i+1}}_{t_i}{f(y,t)dt} \approx f(y_{i+1/2},t_{i+1/2})(t_{i+1}-t_i) + O(h^{3})$$
With $h=(t_{i+1}-t_i)$ we have:
$$ y_{i+1} = y_i + f(y_{i+1/2},t_{i+1/2})h + O(h^{3})$$
Note that the cancellation of the first-order terms implies 2nd-order accuracy for the method. But the relation above is not yet final, since we need $f(y_{i+1/2},t_{i+1/2})$ while we only know $f(y_{i},t_{i})$. To get it, we use Euler's method to estimate the state at the midpoint:
$$ y_{i+1/2} \approx y_i + \frac{h}{2}\frac{dy}{dt} = y_i + \frac{h\,f(y_i,t_i)}{2} $$
Thus, we can summarize the 2nd-order RK method as:
$$ k_1 = hf(y_i,t_i) $$
$$ k_2 = hf(y_i+\frac{1}{2}k_1,t_{i+1/2}) $$
$$ y_{i+1} = y_i + k_2 $$
Simple pendulum
Let us apply what we have seen above to the solution of the simple pendulum problem. Assuming air resistance can be neglected, the ODE for this system follows from Newton's law as:
$$ {\frac {d^{2}\theta }{dt^{2}}}+{\frac {g}{\ell }}\sin \theta =0. $$
The solution for small angles, where $\sin \theta \approx \theta$, can be obtained analytically and is given by:
$$ \theta (t)=\theta _{0}\cos \left({\sqrt {g \over \ell }}t\right) $$
Using the methodology described earlier, we define this problem as:
End of explanation
"""
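The error orders quoted above can be checked numerically. The sketch below (a standalone check, not part of the notebook's driver) integrates $dy/dt = -y$, whose exact solution is $e^{-t}$, with Euler and with the midpoint method, and verifies that halving h roughly halves the Euler error but quarters the midpoint error:

```python
import numpy as np

def euler_step(f, y, t, h):
    return y + h * f(y, t)

def midpoint_step(f, y, t, h):
    y_half = y + 0.5 * h * f(y, t)          # Euler estimate of y at the midpoint
    return y + h * f(y_half, t + 0.5 * h)   # derivative evaluated at t + h/2

def integrate(f, y0, tmax, h, step):
    n = int(round(tmax / h))
    y = y0
    for i in range(n):
        y = step(f, y, i * h, h)
    return y

f = lambda y, t: -y                          # dy/dt = -y, exact solution e^{-t}
exact = np.exp(-1.0)
err = lambda step, h: abs(integrate(f, 1.0, 1.0, h, step) - exact)

# Halving h should roughly halve the Euler error (1st order)
# and quarter the midpoint error (2nd order).
euler_ratio = err(euler_step, 0.01) / err(euler_step, 0.005)
mid_ratio = err(midpoint_step, 0.01) / err(midpoint_step, 0.005)
assert 1.8 < euler_ratio < 2.2
assert 3.5 < mid_ratio < 4.5
```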
def rk2 (y, time, dt, model):
k0 = dt * model(y, time )
k1 = dt * model(y + k0, time + dt)
ynext = y + 0.5 * (k0 + k1)
return ynext
"""
Explanation: The 2nd-order RK method (here in its Heun/trapezoidal variant, which evaluates the derivative at both ends of the step) is implemented as:
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
# Solving the problem with Euler
tmax = 2.0  # time limit
plt.figure()
for N in range(10,100,30):
    dt = tmax / float(N-1)
    time = np.linspace(0, tmax, N)
    # defining the initial state
    y = np.zeros([N, 2])
    y[0,0] = np.pi/8.0  # initial angle
    y[0,1] = 0.         # initial angular velocity
    for j in range(N-1):
        y[j+1] = euler(y[j], time[j], dt, pendulo)
    plt.plot(time, y[:,0], label=str(N))
plt.ylim(-5,5)
plt.xlabel('time')
plt.ylabel('angle')
plt.legend()
# Solving the problem with RK2
tmax = 2.0  # time limit
plt.figure()
for N in range(10,100,30):
    dt = tmax / float(N-1)
    time = np.linspace(0, tmax, N)
    # defining the initial state
    y = np.zeros([N, 2])
    y[0,0] = np.pi/8.0  # initial angle
    y[0,1] = 0.         # initial angular velocity
    for j in range(N-1):
        y[j+1] = rk2(y[j], time[j], dt, pendulo)
    plt.plot(time, y[:,0], label=str(N))
plt.ylim(-5,5)
plt.xlabel('time')
plt.ylabel('angle')
plt.legend()
"""
Explanation: Note the similarity with the Euler method implementation written earlier.
To solve the problem now, we just write the program so as to define the initial conditions.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
################################################################
# SIR model
def SIR(estado, tempo):
    S, I, R = estado
    g0 = -beta*S*I
    g1 = beta*S*I - gamma*I
    g2 = gamma*I
    return np.array([g0, g1, g2])

def euler(y, t, dt, model):
    ynext = y + model(y, t) * dt
    return ynext

def rk2(y, time, dt, model):
    k0 = dt * model(y, time)
    k1 = dt * model(y + k0, time + dt)
    ynext = y + 0.5 * (k0 + k1)
    return ynext

# Solving the problem with RK2
# assuming that at the start of one day there were 40 susceptibles and 8 infected,
# and that on the following day there were 30 and 18, respectively
beta = 10./(40*8*24)
# Among 15 infected, 3 were observed to recover during one day
gamma = 3./(15*24)

dt = 24       # in hours
D = 30        # simulate for D days
tmax = D*24   # final time
time = np.arange(0, tmax, dt)

# defining the initial state
y = np.zeros([time.size, 3])
y[0,0] = 50
y[0,1] = 1.
y[0,2] = 0.
y_euler = np.copy(y)
y_rk2 = np.copy(y)

# apply Euler and RK2
for j in range(time.size - 1):
    y_rk2[j+1,:] = rk2(y_rk2[j,:], time[j], dt, SIR)
    y_euler[j+1,:] = euler(y_euler[j,:], time[j], dt, SIR)

plt.figure()
plt.plot(time/24., y_euler[:,0], label='S')
plt.plot(time/24., y_euler[:,1], label='I')
plt.plot(time/24., y_euler[:,2], label='R')
plt.xlabel('time (days)')
plt.ylabel('number')
plt.legend()

plt.figure()
plt.plot(time/24., y_rk2[:,0], label='S')
plt.plot(time/24., y_rk2[:,1], label='I')
plt.plot(time/24., y_rk2[:,2], label='R')
plt.xlabel('time (days)')
plt.ylabel('number')
plt.legend()
"""
Explanation: General formulation of the problem
We saw above that we can formulate the problem so as to ease its numerical implementation for various algorithms. We can also formulate the problem in a more general way, so that we can implement more complex problems — for example, several differential equations in more than one variable.
Suppose, for example, that we have two differential equations we want to solve:
$$ \frac{dx}{dt} = f_x(x,y,t), ~~~ \frac{dy}{dt} = f_y(x,y,t) $$
where $f_x$ and $f_y$ are functions of x, y and t, possibly nonlinear.
We can write this system of equations in vector notation as:
$$ \frac{d\mathbf{r}}{dt} = \mathbf{f}(\mathbf{r},t) $$
where $\mathbf{r}=(x,y,...)$ and $\mathbf{f}$ is the vector of functions $\mathbf{f}(\mathbf{r},t) = (f_x(\mathbf{r},t),f_y(\mathbf{r},t),...)$
With this notation we can express, for example, Euler's method as:
$$ \mathbf{r}(t+h) = \mathbf{r}(t) + h\mathbf{f}(\mathbf{r},t)$$
Fourth-order Runge-Kutta
The logic used to obtain the second-order RK method can be extended by taking Taylor expansions around other points and then forming a linear combination of the estimates. In principle one could expand to ever higher orders, but in general a good balance between accuracy and complexity (computational cost) is the fourth-order RK method. The equations for this algorithm are:
$$ x(t+h) = x(t) + \frac{1}{6} ( k_1 + 2k_2 + 2k_3 + k_4 ) $$
where
$$ k_1 = hf(x,t) $$
$$ k_2 = hf(x+\frac{1}{2}k_1,t+\frac{1}{2}h) $$
$$ k_3 = hf(x+\frac{1}{2}k_2,t+\frac{1}{2}h) $$
$$ k_4 = hf(x+k_3,t+h) $$
For more details see: https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods#:~:text=In%20numerical%20analysis%2C%20the%20Runge,solutions%20of%20ordinary%20differential%20equations.
We can apply the method above generically, as in previous methods, by substituting $f(x,t)$ with $\mathbf{f}(\mathbf{r},t)$.
Applying the generalized formulation
To better understand the application of what was discussed above, let us model the problem of the spread of an infection.
Imagine a boarding school in the countryside. This school is a small, closed society. Suddenly, one or more students catch the flu. We expect the flu either to spread quite effectively or to die out. The question is how many students and school staff will be affected. A simple model can help us understand the dynamics of how the disease spreads.
We use a function $S(t)$ to count how many individuals, at time $t$, can become infected. Here, $t$ may count hours or days, for example. These individuals make up a category called susceptibles. Another category consists of the infected individuals. We use the function $I(t)$ to count how many are in category I at time $t$. We assume here, for simplicity, that an individual who has recovered from the disease has gained immunity. There is also a small possibility that an infected individual dies. In both cases, the individual is moved from category I to a category we call removed, labeled R. We use the function $R(t)$ to count the number of individuals in category R at time $t$. Whoever enters category R cannot leave it.
To summarize, the spread of this disease is essentially the dynamics of moving individuals from category S to I and then to R:
This example is a summary of what is presented in detail at: http://hplgit.github.io/prog4comp/doc/pub/p4c-sphinx-Python/._pylight005.html#spreading-of-diseases
The problem can be described by the following system of equations:
$$ S' = -\beta SI $$
$$ I' = \beta SI - \gamma I $$
$$ R' = \gamma I $$
where:
$\beta$ is the average number of contacts per person per unit time, multiplied by the probability of disease transmission in a contact between a susceptible and an infectious subject
$\gamma$ is the probability that an individual recovers in a unit time interval. Alternatively, if an individual is infectious for an average period $D$, then $\gamma = 1/D$
Details about this model and other variants can be found here: https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology
Taking into account the vector formulation discussed above, we can write the model as:
$$ \mathbf{r} = (S(t), I(t), R(t)) $$
$$ \mathbf{r}^{n+1} = \mathbf{r}^n + \Delta t~ \mathbf{f}(\mathbf{r}^n, t_n) $$
End of explanation
"""
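The RK4 formulas above drop straight into the same generic driver pattern used for euler and rk2. A minimal sketch, validated here against the exact solution of $dy/dt = -y$ rather than the SIR model:

```python
import numpy as np

def rk4(y, t, dt, model):
    """One classical 4th-order Runge-Kutta step."""
    k1 = dt * model(y, t)
    k2 = dt * model(y + 0.5 * k1, t + 0.5 * dt)
    k3 = dt * model(y + 0.5 * k2, t + 0.5 * dt)
    k4 = dt * model(y + k3, t + dt)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

decay = lambda y, t: -y                 # exact solution: y0 * exp(-t)
y, dt = np.array([1.0]), 0.1
for i in range(10):                     # integrate to t = 1
    y = rk4(y, i * dt, dt, decay)
assert abs(y[0] - np.exp(-1.0)) < 1e-6  # 4th-order accuracy even at h = 0.1
```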
|
ioam/geoviews | examples/Homepage.ipynb | bsd-3-clause | import geoviews as gv
import geoviews.feature as gf
import xarray as xr
from cartopy import crs
gv.extension('bokeh', 'matplotlib')
(gf.ocean + gf.land + gf.ocean * gf.land * gf.coastline * gf.borders).opts(
'Feature', projection=crs.Geostationary(), global_extent=True, height=325).cols(3)
"""
Explanation: GeoViews is a Python library that makes it easy to explore and visualize geographical, meteorological, and oceanographic datasets, such as those used in weather, climate, and remote sensing research.
GeoViews is built on the HoloViews library for building flexible visualizations of multidimensional data. GeoViews adds a family of geographic plot types based on the Cartopy library, plotted using either the Matplotlib or Bokeh packages.
With GeoViews, you can now work easily and naturally with large, multidimensional geographic datasets, instantly visualizing any subset or combination of them, while always being able to access the raw data underlying any plot. Here's a simple example:
End of explanation
"""
dataset = gv.Dataset(xr.open_dataset('./data/ensemble.nc'))
ensemble = dataset.to(gv.Image, ['longitude', 'latitude'], 'surface_temperature')
gv.output(ensemble.opts(cmap='viridis', colorbar=True, fig_size=200, backend='matplotlib') * gf.coastline(),
backend='matplotlib')
"""
Explanation: GeoViews is designed to work well with the Iris and xarray libraries for working with multidimensional arrays, such as those stored in netCDF files. GeoViews also accepts data as NumPy arrays and Pandas data frames. In each case, the data can be left stored in its original, native format, wrapped in a HoloViews or GeoViews object that provides instant interactive visualizations.
The following example loads a dataset originally taken from iris-sample-data and quickly builds an interactive tool for exploring how the data changes over time:
End of explanation
"""
import geopandas as gpd
gv.Polygons(gpd.read_file(gpd.datasets.get_path('naturalearth_lowres')), vdims=['pop_est', ('name', 'Country')]).opts(
tools=['hover'], width=600, projection=crs.Robinson()
)
"""
Explanation: GeoViews also natively supports geopandas datastructures allowing us to easily plot shapefiles and choropleths:
End of explanation
"""
|
chungjjang80/FRETBursts | notebooks/Example - Exporting Burst Data Including Timestamps.ipynb | gpl-2.0 | from fretbursts import *
sns = init_notebook()
"""
Explanation: Exporting Burst Data
This notebook is part of a tutorial series for the FRETBursts burst analysis software.
In this notebook, show a few example of how to export FRETBursts
burst data to a file.
<div class="alert alert-info">
Please <b>cite</b> FRETBursts in publications or presentations!
</div>
Loading the software
We start by loading FRETBursts:
End of explanation
"""
url = 'http://files.figshare.com/2182601/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5'
"""
Explanation: Downloading the data file
The full list of smFRET measurements used in the FRETBursts tutorials
can be found on Figshare.
This is the file we will download:
End of explanation
"""
download_file(url, save_dir='./data')
"""
Explanation: <div class="alert alert-success">
You can change the <code>url</code> variable above to download your own data file.
This is useful if you are executing FRETBursts online and you want to use
your own data file. See
<a href="1. First Steps - Start here if new to Jupyter Notebooks.ipynb">First Steps</a>.
</div>
Here, we download the data file and put it in a folder named data,
inside the notebook folder:
End of explanation
"""
filename = "./data/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5"
filename
"""
Explanation: NOTE: If you modified the url variable providing an invalid URL
the previous command will fail. In this case, edit the cell containing
the url and re-try the download.
Loading the data file
Here, we can directly define the file name to be loaded:
End of explanation
"""
import os
if os.path.isfile(filename):
print("Perfect, file found!")
else:
print("Sorry, file:\n%s not found" % filename)
d = loader.photon_hdf5(filename)
"""
Explanation: Let's check that the file exists:
End of explanation
"""
d.ph_times_t, d.det_t
"""
Explanation: μs-ALEX parameters
At this point, timestamps and detector numbers are contained in the ph_times_t and det_t attributes of d. Let's print them:
End of explanation
"""
d.add(det_donor_accept = (0, 1),
alex_period = 4000,
offset = 700,
D_ON = (2180, 3900),
A_ON = (200, 1800))
"""
Explanation: We need to define some ALEX parameters:
End of explanation
"""
bpl.plot_alternation_hist(d)
"""
Explanation: Here the parameters are:
det_donor_accept: donor and acceptor channels
alex_period: length of excitation period (in timestamps units)
D_ON and A_ON: donor and acceptor excitation windows
offset: the offset between the start of alternation and start of timestamping
(see also Definition of alternation periods).
To check that the above parameters are correct, we need to plot the histogram of timestamps (modulo the alternation period) and superimpose the two excitation period definitions to it:
End of explanation
"""
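The alternation histogram is simply a histogram of the timestamps modulo the alternation period; each photon then falls inside (or outside) the D_ON and A_ON windows. A toy NumPy sketch of the same bookkeeping, using synthetic timestamps rather than the measurement data:

```python
import numpy as np

alex_period = 4000
D_ON, A_ON = (2180, 3900), (200, 1800)

rng = np.random.default_rng(0)
timestamps = rng.integers(0, 10**8, size=5000)

phase = timestamps % alex_period            # position inside the period
d_mask = (phase >= D_ON[0]) & (phase < D_ON[1])
a_mask = (phase >= A_ON[0]) & (phase < A_ON[1])

hist, edges = np.histogram(phase, bins=100, range=(0, alex_period))
assert hist.sum() == timestamps.size        # every photon lands in a bin
assert not np.any(d_mask & a_mask)          # the two windows do not overlap
```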
loader.alex_apply_period(d)
"""
Explanation: If the previous alternation histogram looks correct,
the corresponding definitions of the excitation periods can be applied to the data using the following command:
End of explanation
"""
d.calc_bg(bg.exp_fit, time_s=30, tail_min_us='auto', F_bg=1.7)
d.burst_search(L=10, m=10, F=6)
"""
Explanation: If the previous histogram does not look right, the parameters in the d.add(...) cell can be modified and checked by running the histogram plot cell until everything looks fine. Don't forget to apply the
parameters with loader.alex_apply_period(d) as a last step.
NOTE: After applying the ALEX parameters a new array of
timestamps containing only photons inside the excitation periods
is created (named d.ph_times_m). To save memory, by default,
the old timestamps array (d.ph_times_t) is deleted. Therefore,
in the following, when we talk about all-photon selection we always
refer to all photons inside both excitation periods.
Background and burst search
End of explanation
"""
ds = d.select_bursts(select_bursts.size, th1=60)
"""
Explanation: First we filter the bursts to avoid creating big files:
End of explanation
"""
bursts = bext.burst_data(ds, include_bg=True, include_ph_index=True)
bursts.head(5) # display first 5 bursts
"""
Explanation: Exporting Burst Data
By burst-data we mean all the scalar burst parameters, e.g. size, duration, background, etc...
We can easily get a table (a pandas DataFrame) with all the burst data as follows:
End of explanation
"""
bursts.to_csv('%s_burst_data.csv' % filename.replace('.hdf5', ''))
"""
Explanation: Once we have the DataFrame, saving it to disk in CSV format is trivial:
End of explanation
"""
ds.A_ex
#{0: DexDem, 1:DexAem, 2: AexDem, 3: AemAem}
(ds.A_ex[0].view('int8') << 1) + ds.A_em[0].view('int8')
"""
Explanation: Exporting Bursts Timestamps
Exporting timestamps and other photon-data for each burst is a little trickier
because the data is less uniform (i.e. each burst has a different number of photons).
In the following example, we will save a csv file with variable-length columns.
Each burst is represented by two lines: one line for timestamps and one line for the
photon-stream (excitation/emission period) the timestamps belong to.
Let's start by creating an array of photon streams containing
one of the values 0, 1, 2 or 3 for each timestamp.
These values will correspond to the DexDem, DexAem, AexDem, AemAem
photon streams respectively.
End of explanation
"""
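The bit trick used below packs the two boolean masks into a 2-bit code: the excitation bit shifted left by one, plus the emission bit. A standalone sketch with synthetic masks:

```python
import numpy as np

A_ex = np.array([False, False, True, True])   # acceptor-excitation mask
A_em = np.array([False, True, False, True])   # acceptor-emission mask

# 2-bit code: (excitation << 1) + emission
# -> 0: DexDem, 1: DexAem, 2: AexDem, 3: AemAem (the notebook's labels)
stream = (A_ex.view('int8') << 1) + A_em.view('int8')
assert stream.tolist() == [0, 1, 2, 3]
assert stream.max() == 3
```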
header = """\
# BPH2CSV: %s
# Lines per burst: 3
# - timestamps (int64): in 12.5 ns units
# - nanotimes (int16): in raw TCSPC unit (3.3ps)
# - stream (uint8): the photon stream according to the mapping {0: DexDem, 1: DexAem, 2: AexDem, 3: AemAem}
""" % filename
print(header)
"""
Explanation: Now we define a header documenting the file format. We will also include the
filename of the measurement.
This is just an example including nanotimes:
End of explanation
"""
header = """\
# BPH2CSV: %s
# Lines per burst: 2
# - timestamps (int64): in 12.5 ns units
# - stream (uint8): the photon stream according to the mapping {0: DexDem, 1: DexAem, 2: AexDem, 3: AemAem}
""" % filename
print(header)
"""
Explanation: And this is header we are going to use:
End of explanation
"""
out_fname = '%s_burst_timestamps.csv' % filename.replace('.hdf5', '')
dx = ds
ich = 0
bursts = dx.mburst[ich]
timestamps = dx.ph_times_m[ich]
stream = (dx.A_ex[ich].view('int8') << 1) + dx.A_em[ich].view('int8')
with open(out_fname, 'wt') as f:
f.write(header)
for times, period in zip(bl.iter_bursts_ph(timestamps, bursts),
bl.iter_bursts_ph(stream, bursts)):
times.tofile(f, sep=',')
f.write('\n')
period.tofile(f, sep=',')
f.write('\n')
"""
Explanation: We can now save the data to disk:
End of explanation
"""
import pandas as pd
"""
Explanation: Done!
Read the file back
For consistency check, we can read back the data we just saved.
As an exercise we will put the results in a pandas DataFrame
which is more convenient than an array for holding this data.
End of explanation
"""
with open(out_fname) as f:
lines = []
lines.append(f.readline())
while lines[-1].startswith('#'):
lines.append(f.readline())
header = ''.join(lines[:-1])
print(header)
stream_map = {0: 'DexDem', 1: 'DexAem', 2: 'AexDem', 3: 'AemAem'}
nrows = int(header.split('\n')[1].split(':')[1].strip())
header_len = len(header.split('\n')) - 1
header_len, nrows
"""
Explanation: We start reading the header and computing
some file-specific constants.
End of explanation
"""
burstph = (pd.read_csv(out_fname, skiprows=header_len, nrows=nrows, header=None).T
.rename(columns={0: 'timestamp', 1: 'stream'}))
burstph.stream = (burstph.stream
.apply(lambda x: stream_map[pd.to_numeric(x)])
.astype('category', categories=['DexDem', 'DexAem', 'AexDem', 'AemAem'], ordered=True))
burstph
"""
Explanation: As a test, we load the data for the first burst into a dataframe, converting the numerical column "streams"
into photon-stream names (strings). The new column is of type categorical, so it will take very little space:
End of explanation
"""
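A categorical stores each distinct label once, plus a small integer code per row, which is why the stream column stays compact. A minimal sketch — note that modern pandas spells the conversion with pd.Categorical rather than the astype signature used in the cells here:

```python
import pandas as pd

labels = ['DexDem', 'DexAem', 'AexDem', 'AemAem']
s = pd.Series(pd.Categorical(['DexDem', 'DexAem', 'DexDem'],
                             categories=labels, ordered=True))
assert str(s.dtype) == 'category'
assert s.cat.codes.tolist() == [0, 1, 0]    # one small int per row
assert list(s.cat.categories) == labels
```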
import csv
from builtins import int # python 2 workaround, can be removed on python 3
# Read data in two list of lists
t_list, s_list = [], []
with open(out_fname) as f:
for i in range(header_len):
f.readline()
csvreader = csv.reader(f)
for row in csvreader:
t_list.append([int(v) for v in row])
s_list.append([int(v) for v in next(csvreader)])
# Turn the inner list into pandas.DataFrame
d_list = []
for ib, (t, s) in enumerate(zip(t_list, s_list)):
d_list.append(
pd.DataFrame({'timestamp': t, 'stream': s}, columns=['timestamp', 'stream'])
.assign(iburst=ib)
)
# Concatenate dataframes
burstph = pd.concat(d_list, ignore_index=True)
# Convert stream column into categorical
burstph.stream = (burstph.stream
.apply(lambda x: stream_map[pd.to_numeric(x)])
.astype('category', categories=['DexDem', 'DexAem', 'AexDem', 'AemAem'], ordered=True))
burstph
burstph.dtypes
"""
Explanation: For reading the whole file I use a different approach. First, I load the entire file
in two lists of lists (one for timestamps and one for the stream). Next, I create
a single DataFrame with a third column indicating the burst index.
End of explanation
"""
|
abhi1509/deep-learning | language-translation/dlnd_language_translation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
source_text_ids = [[source_vocab_to_int.get(word, source_vocab_to_int['<UNK>']) for word in line.split()] for line in source_text.split('\n')]
target_text_ids = [[target_vocab_to_int.get(word) for word in line.split()] + [target_vocab_to_int['<EOS>']] for line in target_text.split('\n')]
return source_text_ids, target_text_ids
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
"""
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
"""
"""
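As a sanity check on the mapping logic, here is a toy run with a hypothetical miniature vocabulary (the words and ids are made up for illustration): out-of-vocabulary source words fall back to `<UNK>`, and every target sentence ends with `<EOS>`.

```python
source_vocab = {'<UNK>': 0, 'hello': 1, 'world': 2}
target_vocab = {'<EOS>': 0, 'bonjour': 1, 'monde': 2}

source_ids = [[source_vocab.get(w, source_vocab['<UNK>']) for w in line.split()]
              for line in 'hello world\nhello unknown'.split('\n')]
target_ids = [[target_vocab[w] for w in line.split()] + [target_vocab['<EOS>']]
              for line in 'bonjour monde\nbonjour'.split('\n')]

assert source_ids == [[1, 2], [1, 0]]      # OOV word maps to <UNK>
assert target_ids == [[1, 2, 0], [1, 0]]   # <EOS> terminates every target
```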
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU.
End of explanation
"""
def model_inputs():
"""
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
"""
input = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
target_sequence_length = tf.placeholder(tf.int32, [None], name='target_sequence_length')
max_target_len = tf.reduce_max(input_tensor=target_sequence_length, name='max_target_len')
source_sequence_length = tf.placeholder(tf.int32, [None], name='source_sequence_length')
return input, targets, learning_rate, keep_prob, target_sequence_length, max_target_len, source_sequence_length
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following tuple: (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
"""
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for encoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
"""
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the <GO> ID to the beginning of each batch.
End of explanation
"""
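What the strided slice and concat above do can be sketched with NumPy on a toy batch (the token ids below are invented; 1 stands in for the `<GO>` id):

```python
import numpy as np

GO = 1  # assumed id for the <GO> token
batch = np.array([[11, 12, 13, 3],   # 3 plays the role of <EOS> here
                  [21, 22, 23, 3]])

# Drop the last column (the tf.strided_slice above) ...
ending = batch[:, :-1]
# ... and prepend a column of <GO> ids (the tf.fill + tf.concat above).
dec_input = np.concatenate([np.full((batch.shape[0], 1), GO), ending], axis=1)

print(dec_input)
# [[ 1 11 12 13]
#  [ 1 21 22 23]]
```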
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
"""
# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
# RNN cell
def make_cell(rnn_size):
enc_cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
enc_cell = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob=keep_prob)
return enc_cell
# RNN cell layer
enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32)
return enc_output, enc_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
"""
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
"""
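The embed_sequence call above boils down to a row lookup in a learned embedding matrix; a minimal NumPy sketch of that lookup (the matrix and ids are made up):

```python
import numpy as np

vocab_size, embed_dim = 5, 3
rng = np.random.RandomState(0)
embedding = rng.rand(vocab_size, embed_dim)  # the matrix embed_sequence would learn

token_ids = np.array([[0, 2, 4],
                      [1, 1, 3]])            # a (batch, time) batch of word ids
embedded = embedding[token_ids]              # row lookup == embedding lookup

print(embedded.shape)  # (2, 3, 3): batch x time x embed_dim
```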
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
"""
# Helper for the training process. Used by BasicDecoder to read inputs.
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
# Basic decoder
training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
training_helper,
encoder_state,
output_layer)
# Perform dynamic decoding using the decoder
training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder,
impute_finished=True,
maximum_iterations=max_summary_length)[0]
return training_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
"""
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
"""
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
"""
# param keep_prob is unused
# Helper for the inference process.
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(embedding=dec_embeddings,
start_tokens=start_tokens,
end_token=end_of_sequence_id)
# Basic decoder
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
inference_helper,
encoder_state,
output_layer)
# Perform dynamic decoding using the decoder
inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)[0]
# inference_decoder_output = tf.nn.dropout(inference_decoder_output, keep_prob)
return inference_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
"""
Explanation: Decoding - Inference
Create an inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
"""
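The GreedyEmbeddingHelper above feeds each argmax prediction back in as the next input until the end token appears; a pure-Python sketch of that loop, with a hypothetical lookup table standing in for the decoder network:

```python
import numpy as np

EOS = 0
# Hypothetical "model": next-token logits depend only on the current token id.
logits_table = {
    1: np.array([0.1, 0.0, 2.0, 0.0]),  # after token 1, token 2 is most likely
    2: np.array([0.0, 0.0, 0.0, 3.0]),  # after token 2, token 3
    3: np.array([5.0, 0.0, 0.0, 0.0]),  # after token 3, <EOS>
}

def greedy_decode(start_token, max_iterations=10):
    output, token = [], start_token
    for _ in range(max_iterations):                  # maximum_iterations above
        token = int(np.argmax(logits_table[token]))  # greedy = argmax at each step
        if token == EOS:                             # end_token stops decoding
            break
        output.append(token)
    return output

print(greedy_decode(1))  # [2, 3]
```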
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
"""
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
# 1. Decoder Embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# 2. Construct the decoder cell
def make_cell(rnn_size):
dec_cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
return dec_cell
dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
# 3. Dense layer to translate the decoder's output at each time
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
# 4. Set up a training decoder and an inference decoder
# Training Decoder
with tf.variable_scope("decode"):
# def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
# target_sequence_length, max_summary_length,
# output_layer, keep_prob):
training_decoder_output = decoding_layer_train(
encoder_state,
dec_cell,
dec_embed_input,
target_sequence_length,
max_target_sequence_length,
output_layer,
keep_prob)
with tf.variable_scope("decode", reuse=True):
# def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
# end_of_sequence_id, max_target_sequence_length,
# vocab_size, output_layer, batch_size, keep_prob):
inference_decoder_output = decoding_layer_infer(
encoder_state,
dec_cell,
dec_embeddings,
target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'],
max_target_sequence_length,
target_vocab_size,
output_layer,
batch_size,
keep_prob)
return training_decoder_output, inference_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
"""
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
# def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
# source_sequence_length, source_vocab_size,
# encoding_embedding_size):
# return enc_output, enc_state
_, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
enc_embedding_size)
# def process_decoder_input(target_data, target_vocab_to_int, batch_size):
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
# def decoding_layer(dec_input, encoder_state,
# target_sequence_length, max_target_sequence_length,
# rnn_size,
# num_layers, target_vocab_to_int, target_vocab_size,
# batch_size, keep_prob, decoding_embedding_size):
training_decoder_output, inference_decoder_output = decoding_layer(dec_input,
enc_state,
target_sequence_length,
max_target_sentence_length,
rnn_size,
num_layers,
target_vocab_to_int,
target_vocab_size,
batch_size,
keep_prob,
dec_embedding_size)
return training_decoder_output, inference_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
"""
# # Number of Epochs
# epochs = 12
# # Batch Size
# batch_size = 1500
# # RNN Size
# rnn_size = 60
# # Number of Layers
# num_layers = 1
# # Embedding Size
# encoding_embedding_size = 300
# decoding_embedding_size = 300
# # Learning Rate
# learning_rate = 0.001
# # Dropout Keep Probability
# keep_probability = 0.8
# display_step = 10
## out === 0.7858
# # Number of Epochs
# epochs = 15
# # Batch Size
# batch_size = 1500
# # RNN Size
# rnn_size = 65
# # Number of Layers
# num_layers = 1
# # Embedding Size
# encoding_embedding_size = 300
# decoding_embedding_size = 300
# # Learning Rate
# learning_rate = 0.001
# # Dropout Keep Probability
# keep_probability = 0.85
# display_step = 10
## out === 0.83
# # Number of Epochs
# epochs = 20
# # Batch Size
# batch_size = 2000
# # RNN Size
# rnn_size = 70
# # Number of Layers
# num_layers = 1
# # Embedding Size
# encoding_embedding_size = 512
# decoding_embedding_size = 512
# # Learning Rate
# learning_rate = 0.001
# # Dropout Keep Probability
# keep_probability = 0.9
# display_step = 20
## out == 0.87
# # Number of Epochs
# epochs = 10
# # Batch Size
# batch_size = 1000
# # RNN Size
# rnn_size = 50
# # Number of Layers
# num_layers = 1
# # Embedding Size
# encoding_embedding_size = 512
# decoding_embedding_size = 512
# # Learning Rate
# learning_rate = 0.01
# # Dropout Keep Probability
# keep_probability = 0.85
# display_step = 20
## out == 0.71
# Number of Epochs
epochs = 15
# Batch Size
batch_size = 1500
# RNN Size
rnn_size = 100
# Number of Layers
num_layers = 1
# Embedding Size
encoding_embedding_size = 512
decoding_embedding_size = 512
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.95
display_step = 20
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
"""
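The commented-out runs above amount to a small manual hyperparameter search; one way to keep such logs queryable is a plain list of dicts (values copied from the comments above):

```python
# Hand-run experiments from the commented-out configurations above.
experiments = [
    {'epochs': 12, 'rnn_size': 60, 'keep_probability': 0.80, 'valid_acc': 0.7858},
    {'epochs': 15, 'rnn_size': 65, 'keep_probability': 0.85, 'valid_acc': 0.83},
    {'epochs': 20, 'rnn_size': 70, 'keep_probability': 0.90, 'valid_acc': 0.87},
    {'epochs': 10, 'rnn_size': 50, 'keep_probability': 0.85, 'valid_acc': 0.71},
]

best = max(experiments, key=lambda e: e['valid_acc'])
print(best['rnn_size'], best['valid_acc'])  # 70 0.87
```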
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
"""Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
"""
Explanation: Batch and pad the source and target sequences
End of explanation
"""
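A quick demonstration of the padding helper on a toy batch (the helper is restated here so the snippet is self-contained; the PAD id of 0 is an assumption):

```python
PAD = 0  # assumed id of the <PAD> token

def pad_sentence_batch(sentence_batch, pad_int):
    """Mirror of the helper above: right-pad every sentence to the batch max."""
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

batch = [[5, 6], [7, 8, 9, 10], [11]]
padded = pad_sentence_batch(batch, PAD)
print(padded)  # [[5, 6, 0, 0], [7, 8, 9, 10], [11, 0, 0, 0]]
```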
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
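get_accuracy above zero-pads the shorter of target/prediction before comparing; restating it on a toy pair makes the effect visible:

```python
import numpy as np

def get_accuracy(target, logits):
    """Mirror of the helper above: zero-pad the shorter array, then compare."""
    max_seq = max(target.shape[1], logits.shape[1])
    if max_seq - target.shape[1]:
        target = np.pad(target, [(0, 0), (0, max_seq - target.shape[1])], 'constant')
    if max_seq - logits.shape[1]:
        logits = np.pad(logits, [(0, 0), (0, max_seq - logits.shape[1])], 'constant')
    return np.mean(np.equal(target, logits))

target = np.array([[1, 2, 3]])
logits = np.array([[1, 2]])          # shorter prediction gets padded with 0
print(get_accuracy(target, logits))  # 2 of 3 positions match
```

Note that padding with 0 means a too-short prediction is scored as wrong at the missing positions.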
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
converted = [vocab_to_int.get(word, vocab_to_int.get('<UNK>')) for word in sentence.lower().split()]
return converted
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
End of explanation
"""
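A toy run of the conversion (the vocabulary and ids below are invented for illustration; the function is restated so the snippet is self-contained):

```python
vocab_to_int = {'<UNK>': 2, 'he': 10, 'saw': 11, 'a': 12, 'truck': 13, '.': 14}

def sentence_to_seq(sentence, vocab_to_int):
    """Mirror of the function above: lowercase, split, map to ids with <UNK> fallback."""
    return [vocab_to_int.get(word, vocab_to_int['<UNK>'])
            for word in sentence.lower().split()]

print(sentence_to_seq('He saw a PLAID truck .', vocab_to_int))
# [10, 11, 12, 2, 13, 14] -- 'plaid' is out of vocabulary, so it maps to <UNK>
```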
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
"""
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
"""
|
letsgoexploring/beapy-package | beapyExample.ipynb | mit | import numpy as np
import pandas as pd
import urllib
import datetime
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
import beapy
apiKey = '3EDEAA66-4B2B-4926-83C9-FD2089747A5B'
bea = beapy.initialize(apiKey =apiKey)
"""
Explanation: beapy
beapy is a Python package for obtaining data from the API of the Bureau of Economic Analysis.
End of explanation
"""
# Get a list of the the data sets available from the BEA along with descriptions.
bea.getDataSetList()
# The getDataSetList() method adds a dataSetList attribute that is a list of the available datasets:
print(bea.dataSetList)
# Get a list of the the parameters for the NIPA dataset
bea.getParameterList('NIPA')
# The getParameterList() method adds a parameterList attribute that is a list of the parameters of the chosen dataset.
print(bea.parameterList)
# Get a list of the values that the Frequency parameter in the NIPA dataset can take:
bea.getParameterValues('NIPA','Frequency')
# Download data from Table 1.1.5, TableID: 5. and plot
results = bea.getNipa(TableID=5,Frequency='A',Year='X')
frame =results['data']
np.log(frame['Gross domestic product']).plot(grid=True,lw=3)
"""
Explanation: Methods for searching for data
getDataSetList(): returns the available datasets.
getParameterList(dataSetName): returns the parameters of the specified dataset.
getParameterValues(dataSetName,ParameterName): returns the values accepted for a parameter of the specified dataset.
End of explanation
"""
bea.getParameterValues('RegionalData','KeyCode')
bea.getParameterValues('RegionalData','GeoFips')
bea.getParameterValues('RegionalData','Year')
bea.getParameterValues('RegionalData','KeyCode')
"""
Explanation: Datasets
There are 10 datasets available through the BEA API:
RegionalData (statistics by state, county, and MSA)
NIPA (National Income and Product Accounts)
~~NIUnderlyingDetail (National Income and Product Accounts)~~
Fixed Assets
~~Direct Investment and Multinational Enterprises (MNEs)~~
Gross Domestic Product by Industry (GDPbyIndustry)
ITA (International Transactions)
IIP (International Investment Position)
Regional Income (detailed regional income and employment data sets)
RegionalProduct (detailed state and MSA product data sets)
beapy provides a separate method for accessing the data in each dataset:
getRegionalData(KeyCode=None,GeoFips='STATE',Year='ALL')
getNipa(TableID=None,Frequency=None,Year='X',ShowMillions='N')
~~getNIUnderlyingDetail()~~
getFixedAssets()
~~getDirectInvestmentMNEs()~~
getGrossDomesticProductByIndustry()
getIta()
getIip()
getRegionalIncome()
getRegionalProduct()
Datasets and methods with a ~~strikethrough~~ are not currently accessible with the package.
Regional Data
getRegionalData(KeyCode=None,GeoFips='STATE',Year='ALL')
Method for accessing data from the US at county, state, and regional levels.
End of explanation
"""
# Get per capita personal income at the state level for all years.
result = bea.getRegionalData(KeyCode='PCPI_SI',GeoFips = 'STATE', Year = 'ALL')
frame = result['data']
# For each state including Washington, D.C., find the percentage difference between state pc income and US pc income.
for state in frame.columns:
f = 100*(frame[state] - frame['United States'])/frame['United States']
f.plot(grid=True)
"""
Explanation: Example: Converging relative per capita incomes in the US
End of explanation
"""
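The transform inside the loop above, on made-up numbers:

```python
# Percentage difference between a state's per-capita income and the US average.
us, state = 50000.0, 47500.0
rel = 100 * (state - us) / us
print(rel)  # -5.0: the state sits 5% below the US average
```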
|
brianbreitsch/sportstat | notebooks/parsing-example.ipynb | bsd-3-clause | osu_roster_filepath = '../data/osu_roster.csv'
"""
Explanation: Parsing tsv files and populating the database
End of explanation
"""
!head {osu_roster_filepath}
"""
Explanation: Inspecting the first few lines of the file, we get a feel for this data schema.
Mongo Considerations:
- can specify categories for validation
End of explanation
"""
ohio_state = Team()
ohio_state.city = 'Columbus'
ohio_state.name = 'Buckeyes'
ohio_state.state = 'OH'
ohio_state.save()
with open(osu_roster_filepath, 'r') as f:
for line in f.readlines():
items = line.split('\t')
number = int(items[0])
name = items[1]
first, last = items[2].split(', ')
position = items[3]
something = items[4]
height = items[5]
weight = items[6]
something = items[7]
year = items[8]
hometown = items[9]
athlete = Athlete()
athlete.number = number
athlete.name = name
athlete.first = first
athlete.last = last
athlete.position = position
athlete.height = height
athlete.weight = weight
athlete.year = year
athlete.hometown = hometown
# athlete.save()
print(number, name, first, last, position, something, height, weight, something, year, hometown)
"""
Explanation: |Number|Name| Name | Position | Something | Height | Weight | Something | Year | Hometown |
|------|----|------|----------|-----------|--------|--------|-----------|------|----------|
| ... | ...| .... | ... | ... | ... | ... | ... | ... | ... |
End of explanation
"""
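The manual split('\t') loop above works, but the standard-library csv module does the same job and also copes with quoting. A sketch on a one-line stand-in for the roster file (the column layout and the "Last, First" name format are assumptions inferred from the loop above; the row content is invented):

```python
import csv
import io

# One invented roster row, tab-separated like the loop above assumes.
raw = "1\tJ. Smith\tSmith, John\tQB\tx\t6-2\t210\tx\tFR\tColumbus, OH\n"

athletes = []
for items in csv.reader(io.StringIO(raw), delimiter='\t'):
    last, first = items[2].split(', ')  # assumes the name column is "Last, First"
    athletes.append({'number': int(items[0]), 'first': first, 'last': last,
                     'position': items[3], 'year': items[8], 'hometown': items[9]})

print(athletes[0]['first'], athletes[0]['last'])  # John Smith
```

With a real file, csv.reader(open(path), delimiter='\t') would replace the StringIO wrapper.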
|
intel-analytics/BigDL | apps/anomaly-detection-hd/autoencoder-zoo.ipynb | apache-2.0 | from bigdl.dllib.nncontext import *
sc = init_nncontext("Anomaly Detection HD Example")
from scipy.io import arff
import pandas as pd
import os
dataset = "ionosphere" #real world dataset
data_dir = os.getenv("BIGDL_HOME")+"/bin/data/HiCS/"+dataset+".arff"
rawdata, _ = arff.loadarff(data_dir)
data = pd.DataFrame(rawdata)
"""
Explanation: Anomaly Detection in High Dimensions Using Auto-encoder
Anomaly detection identifies data points (outliers) that do not conform to an expected pattern or to the other items in a dataset. In statistics, anomalies, also known as outliers, are observations that are distant from the other observations. In this notebook we demonstrate how to perform unsupervised anomaly detection in high dimensions using an auto-encoder.
We use one of the HiCS real-world data sets (link) for the demo. The data set contains 32 dimensions and 351 instances, with 126 of them being outliers. Alternative datasets are available in the same directory; typically, datasets with higher dimensionality and a lower outlier proportion yield better performance. Data points with a higher reconstruction error are more likely to be outliers, and this notebook can also show in which dimensions a point is outlying.
References:
* Neural-based Outlier Discovery
Initialization
Initialize the NNContext and load the data
End of explanation
"""
data.head(5)
"""
Explanation: The dataset contains 32 dimensions and 351 instances, with 126 of them being outliers.
End of explanation
"""
labels = data['class'].astype(int)
del data['class']
labels[labels != 0] = 1
"""
Explanation: Data preprocessing
Generate labels and normalize the data between 0 and 1.
End of explanation
"""
from sklearn.preprocessing import MinMaxScaler
data_norm = MinMaxScaler().fit_transform(data).astype('float32')
print("Instances: %d \nOutliers: %d\nAttributes: %d" % (len(data), sum(labels), len(data_norm[0])))
"""
Explanation: MinMaxScaler is used since, unlike standardization, it preserves the feature characteristics that make the outliers stand out.
End of explanation
"""
from bigdl.dllib.keras.layers import Input, Dense
from bigdl.dllib.keras.models import Model
compress_rate=0.8
origin_dim=len(data_norm[0])
input = Input(shape=(origin_dim,))
encoded = Dense(int(compress_rate*origin_dim), activation='relu')(input)
decoded = Dense(origin_dim, activation='sigmoid')(encoded)
autoencoder = Model(input, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
"""
Explanation: Build the model
End of explanation
"""
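Before training the neural model, note that a purely linear autoencoder trained with squared error learns the PCA subspace, so PCA reconstruction error makes a useful dependency-free baseline outlier score. The following is a standalone numpy sketch on synthetic data (not part of this notebook's pipeline; sizes and the planted outlier are illustrative assumptions):

```python
import numpy as np

def pca_recon_error(X, k):
    """Outlier score = distance from each row to its rank-k PCA reconstruction."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # top-k principal directions via SVD of the centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k].T                       # (d, k) projection basis
    recon = Xc @ P @ P.T + mu          # project onto subspace and back
    return np.linalg.norm(X - recon, axis=1)

rng = np.random.RandomState(0)
t = rng.randn(100)
inliers = np.c_[t, t + 0.01 * rng.randn(100)]   # points near the line y = x
outlier = np.array([[3.0, -3.0]])               # far off that line
X = np.vstack([inliers, outlier])
scores = pca_recon_error(X, k=1)
print(scores.argmax())  # 100: the planted outlier
```

The same score-and-rank idea is what the autoencoder below applies with a nonlinear reconstruction.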
autoencoder.fit(x=data_norm,
y=data_norm,
batch_size=100,
nb_epoch=2,
validation_data=None)
"""
Explanation: Training
End of explanation
"""
data_trans = autoencoder.predict(data_norm).collect()
"""
Explanation: Prediction
Data are encoded and reconstructed as data_trans
End of explanation
"""
import numpy as np
dist = []
for i, x in enumerate(data_norm):
dist.append(np.linalg.norm(data_norm[i] - data_trans[i]))
dist=np.array(dist)
"""
Explanation: Evaluation
Calculate the Euclidean distance for each point between the ground truth and its reconstruction. The larger the distance, the more likely the point is an outlier.
End of explanation
"""
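The loop above can also be written as a single vectorized call; a small self-contained sketch with stand-in arrays:

```python
import numpy as np

a = np.array([[0.0, 0.0], [1.0, 1.0]])   # stand-in for data_norm
b = np.array([[3.0, 4.0], [1.0, 1.0]])   # stand-in for data_trans
# row-wise L2 norm of the reconstruction error, one call instead of a loop
dist = np.linalg.norm(a - b, axis=1)
print(dist)  # [5. 0.]
```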
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
%matplotlib inline
fpr, tpr, threshold = roc_curve(labels, dist)
roc_auc = auc(fpr, tpr)
print('AUC = %f' % roc_auc)
plt.figure(figsize=(10, 7))
plt.plot(fpr, tpr, 'k--',
label='mean ROC (area = %0.2f)' % roc_auc, lw=2, color='red')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.plot([0, 1],
[0, 1],
linestyle='--',
color=(0.6, 0.6, 0.6))
plt.xlabel('False Positive rate')
plt.ylabel('True Positive rate')
plt.title('ROC Autoencoder compress rate: %0.1f ' % compress_rate + "\nInstances: %d, Outliers: %d, Attributes: %d" % (len(data), sum(labels), len(data_norm[0])))
plt.legend(loc="lower right")
plt.tight_layout()
plt.show()
"""
Explanation: Plot the ROC curve to assess the quality of anomaly detection. Here we achieve an AUC of 0.94, which is very good.
End of explanation
"""
plt.figure(figsize=(15, 7))
label_colors = ["r" if x == 1 else "b" for x in labels]  # red = outlier, blue = inlier
plt.scatter(data.index, dist, c=label_colors, s=15)
plt.xlabel('Index')
plt.ylabel('Score')
plt.title("Outlier Score\nInstances: %d, Outliers: %d, Attributes: %d" % (len(data), sum(labels), len(data_norm[0])))
plt.tight_layout()
#plt.savefig("./fig/"+dataset+".score.png")
plt.show()
"""
Explanation: Plot the outlier score for each data point. A higher score indicates a higher likelihood of being an outlier. Compared to the ground truth, where positive data points are shown in red and negative ones in blue, the positive data points have much higher outlier scores than the negative points, as expected.
End of explanation
"""
outlier_indices = np.argsort(-dist)[0:20]
print(outlier_indices)
"""
Explanation: Show top 20 data points with highest outlier score in descending order
End of explanation
"""
def error_in_dim(index):
error = []
for i, x in enumerate(data_norm[index]):
error.append(abs(data_norm[index][i] - data_trans[index][i]))
error=np.array(error)
return error
example = 204
plt.figure(figsize=(10,7))
plt.plot(error_in_dim(example))
plt.xlabel('Index')
plt.ylabel('Reconstruction error')
plt.title("Reconstruction error in each dimension of point %d" % example)
plt.tight_layout()
plt.show()
"""
Explanation: By looking at the reconstruction error, we can find hints about the dimensions in which a particular data point is outlying.
Here, we plot the reconstruction error in each dimension for data point 204, which has the second-highest outlier score.
End of explanation
"""
print(np.argsort(-error_in_dim(example))[0:3])
"""
Explanation: Show the top 3 dimensions with the highest reconstruction error, in descending order. Data point 204 has high reconstruction error in the subspace [8, 23, 29].
End of explanation
"""
indicator = ['b']*len(data)
indicator[204] = 'r'
indicator=pd.Series(indicator)
from mpl_toolkits.mplot3d import Axes3D
threedee = plt.figure(figsize=(20,14)).gca(projection='3d')
threedee.scatter(data['var_0028'], data['var_0029'], zs=data['var_0030'],
c=indicator)
threedee.set_xlabel('28')
threedee.set_ylabel('29')
threedee.set_zlabel('30')
plt.show()
"""
Explanation: Looking at the position of the point in the subspace [28, 29, 30], data point 204, indicated as a red dot, is clearly far away from the other data points, consistent with it being an outlier.
End of explanation
"""
plt.figure(figsize=(10,7))
outliers = [21, 232, 212, 100, 122, 19]
for i in outliers:
plt.plot(error_in_dim(i), label=i)
plt.legend(loc=1)
plt.xlabel('Index')
plt.ylabel('Reconstruction error')
plt.title("Reconstruction error in each dimension of outliers")
plt.tight_layout()
plt.show()
"""
Explanation: You can recover the information about the dimensions in which each object is an outlier by looking at the reconstruction errors, or at least drastically reduce the search space. Here, we plot the reconstruction errors of the outliers (data points 21, 232, 212, 100, 122, and 19) in the subspace [8].
End of explanation
"""
from mpl_toolkits.mplot3d import Axes3D
threedee = plt.figure(figsize=(20,14)).gca(projection='3d')
threedee.scatter(data['var_0007'], data['var_0008'], zs=data['var_0009'], c='b', alpha=0.1)
for i in outliers:
threedee.scatter(data['var_0007'][i], data['var_0008'][i], zs=data['var_0009'][i], c="r", s=60)
#print(data['var_0007'][i], data['var_0008'][i], data['var_0009'][i])
threedee.set_xlabel('7')
threedee.set_ylabel('8')
threedee.set_zlabel('9')
plt.show()
"""
Explanation: Looking more generally at the subspace [6, 7, 8] for the full dataset, with the outliers indicated as red dots, we can see that a couple of data points are outlying.
End of explanation
"""
|
mariedekou/pymks_overview | notebooks/.ipynb_checkpoints/cahn_hilliard-checkpoint.ipynb | mit | %matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Cahn-Hilliard Example
This example demonstrates how to use PyMKS to solve the Cahn-Hilliard equation. The first section provides some background information about the Cahn-Hilliard equation as well as details about calibrating and validating the MKS model. The example demonstrates how to generate sample data, calibrate the influence coefficients and then pick an appropriate number of local states when state space is continuous. The MKS model and a spectral solution of the Cahn-Hilliard equation are compared on a larger test microstructure over multiple time steps.
Cahn-Hilliard Equation
The Cahn-Hilliard equation is used to simulate microstructure evolution during spinodial decomposition and has the following form,
$$ \dot{\phi} = \nabla^2 \left( \phi^3 - \phi \right) - \gamma \nabla^4 \phi $$
where $\phi$ is a conserved ordered parameter and $\sqrt{\gamma}$ represents the width of the interface. In this example, the Cahn-Hilliard equation is solved using a semi-implicit spectral scheme with periodic boundary conditions, see Chang and Rutenberg for more details.
End of explanation
"""
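One semi-implicit spectral step of the equation above can be sketched in plain numpy. This is a standalone illustration (unit grid spacing, $\gamma = 1$, and the grid size are assumptions), not the solver pymks uses internally:

```python
import numpy as np

def ch_step(phi, dt=1e-2, gamma=1.0):
    """One semi-implicit spectral step of
    dphi/dt = laplacian(phi**3 - phi) - gamma * biharmonic(phi).
    The stiff linear term is treated implicitly, the nonlinear term explicitly."""
    n = phi.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n)           # angular wavenumbers (dx = 1)
    k2 = k[:, None] ** 2 + k[None, :] ** 2        # |k|^2 on the 2-D grid
    nonlin_hat = np.fft.fft2(phi ** 3 - phi)
    phi_hat = (np.fft.fft2(phi) - dt * k2 * nonlin_hat) / (1.0 + dt * gamma * k2 ** 2)
    return np.real(np.fft.ifft2(phi_hat))

phi = np.random.RandomState(0).normal(0, 1e-2, (32, 32))
phi_next = ch_step(phi)
```

Because the k = 0 mode is left untouched by the update, the mean of the conserved order parameter is preserved at every step.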
import pymks
from pymks.datasets import make_cahn_hilliard
n = 41
n_samples = 400
dt = 1e-2
np.random.seed(99)
X, y = make_cahn_hilliard(n_samples=n_samples, size=(n, n), dt=dt)
"""
Explanation: Modeling with MKS
In this example the MKS equation will be used to predict microstructure at the next time step using
$$p[s, n + 1] = \sum_{l=0}^{L-1} \sum_{r=0}^{S-1} \alpha[l, r, n] \, m[l, s - r, n] + ...$$
where $p[s, n + 1]$ is the concentration field at location $s$ and at time $n + 1$, $r$ is the convolution dummy variable and $l$ indexes the local state. $\alpha[l, r, n]$ are the influence coefficients and $m[l, s - r, n]$ is the microstructure function given to the model. $S$ is the total discretized volume and $L$ is the total number of local states n_states chosen.
The model marches forward in time by recursively discretizing $p[s, n]$ and substituting it back in for $m[l, s - r, n]$.
Calibration Datasets
Unlike the elastostatic examples, the microstructure (concentration field) for this simulation doesn't have discrete phases. The microstructure is a continuous field that can have a range of values which can change over time, therefore the first order influence coefficients cannot be calibrated with delta microstructures. Instead a large number of simulations with random initial conditions are used to calibrate the first order influence coefficients using linear regression.
The function make_cahn_hilliard from pymks.datasets provides an interface to generate calibration datasets for the influence coefficients. To use make_cahn_hilliard, we need to set the number of samples we want to use to calibrate the influence coefficients using n_samples, the size of the simulation domain using size and the time step using dt.
End of explanation
"""
from pymks.tools import draw_concentrations
"""
Explanation: The function make_cahn_hilliard generates n_samples random microstructures, X, and the associated updated microstructures after one time step, y. The following cell plots one of these microstructures along with its update.
End of explanation
"""
import sklearn
from sklearn.cross_validation import train_test_split
split_shape = (X.shape[0],) + (np.product(X.shape[1:]),)
X_train, X_test, y_train, y_test = train_test_split(X.reshape(split_shape), y.reshape(split_shape),
test_size=0.5, random_state=3)
"""
Explanation: Calibrate Influence Coefficients
As mentioned above, the microstructures (concentration fields) do not have discrete phases. This leaves the number of local states in local state space as a free hyperparameter. In previous work it has been shown that as you increase the number of local states, the accuracy of the MKS model increases (see Fast et al.), but the marginal gain in accuracy decreases. Some work needs to be done in order to find a practical number of local states to use.
Optimizing the Number of Local States
Let's split the calibration dataset into testing and training datasets. The function train_test_split from the machine learning python module sklearn provides a convenient interface to do this. Half of the dataset will be used for training and the remaining half for testing by setting test_size equal to 0.5. The state of the random number generator used to make the split can be set using random_state.
End of explanation
"""
from pymks import MKSLocalizationModel
from pymks.bases import PrimitiveBasis
"""
Explanation: We are now going to calibrate the influence coefficients while varying the number of local states from 2 up to 20. Each of these models will then predict the evolution of the concentration fields. Mean square error will be used to compared the results with the testing dataset to evaluate how the MKS model's performance changes as we change the number of local states.
First we need to import the class MKSLocalizationModel from pymks.
End of explanation
"""
from sklearn.grid_search import GridSearchCV
parameters_to_tune = {'n_states': np.arange(2, 11)}
prim_basis = PrimitiveBasis(2, [-1, 1])
model = MKSLocalizationModel(prim_basis)
gs = GridSearchCV(model, parameters_to_tune, cv=5, fit_params={'size': (n, n)})
gs.fit(X_train, y_train)
print(gs.best_estimator_)
print(gs.score(X_test, y_test))
from pymks.tools import draw_gridscores
"""
Explanation: Next we will calibrate the influence coefficients while varying the number of local states and compute the mean squared error. The following demonstrates how to use Scikit-learn's GridSearchCV to optimize n_states as a hyperparameter. Of course, the best fit is always with a larger value of n_states. Increasing this parameter does not overfit the data.
End of explanation
"""
model = MKSLocalizationModel(basis=PrimitiveBasis(6, [-1, 1]))
model.fit(X, y)
"""
Explanation: As expected, the accuracy of the MKS model monotonically increases as we increase n_states, but accuracy doesn't improve significantly once n_states grows beyond single digits.
In order to save on computation costs, let's calibrate the influence coefficients with n_states equal to 6, keeping in mind that if we need slightly more accuracy the value can be increased.
End of explanation
"""
from pymks.tools import draw_coeff
"""
Explanation: Here are the first 4 influence coefficients.
End of explanation
"""
from pymks.datasets.cahn_hilliard_simulation import CahnHilliardSimulation
np.random.seed(191)
phi0 = np.random.normal(0, 1e-9, (1, n, n))
ch_sim = CahnHilliardSimulation(dt=dt)
phi_sim = phi0.copy()
phi_pred = phi0.copy()
"""
Explanation: Predict Microstructure Evolution
With the calibrated influence coefficients, we are ready to predict the evolution of a concentration field. In order to do this, we need to have the Cahn-Hilliard simulation and the MKS model start with the same initial concentration phi0 and evolve in time. In order to do the Cahn-Hilliard simulation we need an instance of the class CahnHilliardSimulation.
End of explanation
"""
time_steps = 10
for ii in range(time_steps):
ch_sim.run(phi_sim)
phi_sim = ch_sim.response
phi_pred = model.predict(phi_pred)
"""
Explanation: In order to move forward in time, we need to feed the concentration back into the Cahn-Hilliard simulation and the MKS model.
End of explanation
"""
from pymks.tools import draw_concentrations_compare
draw_concentrations_compare((phi_sim[0], phi_pred[0]), labels=('Simulation', 'MKS'))
"""
Explanation: Let's take a look at the concentration fields.
End of explanation
"""
m = 3 * n
model.resize_coeff((m, m))
phi0 = np.random.normal(0, 1e-9, (1, m, m))
phi_sim = phi0.copy()
phi_pred = phi0.copy()
"""
Explanation: The MKS model was able to capture the microstructure evolution with 6 local states.
Resizing the Coefficients to use on Larger Systems
Now let's try to predict a larger simulation by resizing the coefficients and providing a larger initial concentration field.
End of explanation
"""
for ii in range(1000):
ch_sim.run(phi_sim)
phi_sim = ch_sim.response
phi_pred = model.predict(phi_pred)
"""
Explanation: Once again we are going to march forward in time by feeding the concentration fields back into the Cahn-Hilliard simulation and the MKS model.
End of explanation
"""
from pymks.tools import draw_concentrations_compare
draw_concentrations_compare((phi_sim[0], phi_pred[0]), labels=('Simulation', 'MKS'))
"""
Explanation: Let's take a look at the results.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/7b89f7dac105a44e25d2fbdd898b911f/vector_mne_solution.ipynb | bsd-3-clause | # Author: Marijn van Vliet <w.m.vanvliet@gmail.com>
#
# License: BSD-3-Clause
import numpy as np
import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, apply_inverse
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
smoothing_steps = 7
# Read evoked data
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
# Read inverse solution
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
inv = read_inverse_operator(fname_inv)
# Apply inverse solution, set pick_ori='vector' to obtain a
# :class:`mne.VectorSourceEstimate` object
snr = 3.0
lambda2 = 1.0 / snr ** 2
stc = apply_inverse(evoked, inv, lambda2, 'dSPM', pick_ori='vector')
# Use peak getter to move visualization to the time point of the peak magnitude
_, peak_time = stc.magnitude().get_peak(hemi='lh')
"""
Explanation: Plotting the full vector-valued MNE solution
The source space that is used for the inverse computation defines a set of
dipoles, distributed across the cortex. When visualizing a source estimate, it
is sometimes useful to show the dipole directions in addition to their
estimated magnitude. This can be accomplished by computing a
:class:mne.VectorSourceEstimate and plotting it with
:meth:stc.plot <mne.VectorSourceEstimate.plot>, which uses
:func:~mne.viz.plot_vector_source_estimates under the hood rather than
:func:~mne.viz.plot_source_estimates.
It can also be instructive to visualize the actual dipole/activation locations
in 3D space in a glass brain, as opposed to activations imposed on an inflated
surface (as typically done in :meth:mne.SourceEstimate.plot), as it allows
you to get a better sense of the underlying source geometry.
End of explanation
"""
brain = stc.plot(
initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir,
smoothing_steps=smoothing_steps)
# You can save a brain movie with:
# brain.save_movie(time_dilation=20, tmin=0.05, tmax=0.16, framerate=10,
# interpolation='linear', time_viewer=True)
"""
Explanation: Plot the source estimate:
End of explanation
"""
stc_max, directions = stc.project('pca', src=inv['src'])
# These directions must by design be close to the normals because this
# inverse was computed with loose=0.2
print('Absolute cosine similarity between source normals and directions: '
f'{np.abs(np.sum(directions * inv["source_nn"][2::3], axis=-1)).mean()}')
brain_max = stc_max.plot(
initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir,
time_label='Max power', smoothing_steps=smoothing_steps)
"""
Explanation: Plot the activation in the direction of maximal power for this data:
End of explanation
"""
brain_normal = stc.project('normal', inv['src'])[0].plot(
initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir,
time_label='Normal', smoothing_steps=smoothing_steps)
"""
Explanation: The normal is very similar:
End of explanation
"""
fname_inv_fixed = (
data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-fixed-inv.fif')
inv_fixed = read_inverse_operator(fname_inv_fixed)
stc_fixed = apply_inverse(
evoked, inv_fixed, lambda2, 'dSPM', pick_ori='vector')
brain_fixed = stc_fixed.plot(
initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir,
smoothing_steps=smoothing_steps)
"""
Explanation: You can also do this with a fixed-orientation inverse. It looks a lot like
the result above because the loose=0.2 orientation constraint keeps
sources close to fixed orientation:
End of explanation
"""
|
tomilsinszki/python_samples | learningPython.ipynb | mit | print("Hello World")
"""
Explanation: Learning Python
Hello World
print is a method that will print some text
End of explanation
"""
x = 1
if x < 0:
print("x is negative")
elif x == 0:
print("x is zero")
elif 0 < x:
print("x is positive")
else:
print("ERROR")
"""
Explanation: Indentation
Instead of using curly braces, Python uses indentation to delimit blocks.
If Statements
End of explanation
"""
var_integer = 10
var_float = 10.1
var_string = "one"
print(var_integer)
print(var_float)
print(var_string)
"""
Explanation: Naming conventions
Function and variable names should all be lowercase with underscore (_) separating each word.
Variables
Variables are dynamically typed; we don't need to declare the type of a variable
End of explanation
"""
my_list = [1, 2, 'three']
print(my_list)
my_list.append('four')
print(my_list)
print(my_list[0])
"""
Explanation: Lists
A list can contain values of any type (or a mix of types).
End of explanation
"""
first_string = 'Hello'
second_string = ' '
third_string = 'World'
concatenated_string = first_string + second_string + third_string
print(concatenated_string)
evens = [2, 4, 6, 8]
odds = [1, 3, 5, 7, 9]
numbers = evens + odds
print(numbers)
"""
Explanation: Operators
End of explanation
"""
astring = "1234567890"
print(astring[3:7])
"""
Explanation: String Operations
End of explanation
"""
name = "one"
if (name == 'one'):
print('This is one')
number = "two"
if number in ["one", "two", "three", "four", "five"]:
print('number is within array')
"""
Explanation: Conditions
End of explanation
"""
primes = [2, 3, 5, 7, 11]
for prime in primes:
print(prime)
count = 0
while count < 5:
print(count)
count += 1
count = 0
while True:
print(count)
count += 1
if count >= 5:
break
for x in range(10):
if x % 2 == 0:
continue
print(x)
"""
Explanation: Loops
End of explanation
"""
def concatenate_and_print(astring0, astring1):
print(astring0 + " " + astring1)
concatenate_and_print("Hello", "World")
def sum_two_numbers(a, b):
return a + b
sum_of_numbers = sum_two_numbers(100, 11)
print(sum_of_numbers)
"""
Explanation: Functions
End of explanation
"""
phonebook = {}
phonebook["John"] = 1111111
phonebook["Tom"] = 2222222
phonebook["Jane"] = 3333333
print(phonebook)
del phonebook["John"]
print(phonebook)
"""
Explanation: Dictionaries
A dictionary is similar to an array, except a dictionary uses keys (instead of just numerical indexes).
End of explanation
"""
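A dictionary can also be iterated over. This self-contained snippet (using its own small phonebook) shows iteration with items(), membership testing, and safe lookup with get:

```python
phonebook = {"Tom": 2222222, "Jane": 3333333}

# iterate over key/value pairs
for name, number in phonebook.items():
    print("%s -> %d" % (name, number))

# membership test and safe lookup with a default value
print("Tom" in phonebook)        # True
print(phonebook.get("John", 0))  # 0, since "John" is not a key
```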
|
yavuzovski/playground | machine learning/google-ml-crash-course/first_steps_with_tf.ipynb | gpl-3.0 | # Load the necessary libraries
import math
from IPython import display
from matplotlib import cm, gridspec, pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
# Load the dataset
california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_train.csv", sep=",")
"""
Explanation: First Steps with TensorFlow
Learning Objectives:
* Learn fundamental TensorFlow concepts
* Use the LinearRegressor class in TensorFlow to predict median housing price, at the granularity of city blocks, based on one input feature
* Evaluate the accuracy of a model's predictions using Root Mean Squared Error (RMSE)
* Improve the accuracy of a model by tuning its hyperparameters
End of explanation
"""
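As a quick reference for the RMSE objective above, here is a minimal standalone numpy sketch of the metric (the notebook itself later evaluates with sklearn.metrics; this helper is only illustrative):

```python
import numpy as np

def rmse(targets, predictions):
    """Root Mean Squared Error between two equal-length sequences."""
    t = np.asarray(targets, dtype=float)
    p = np.asarray(predictions, dtype=float)
    return float(np.sqrt(np.mean((t - p) ** 2)))

print(rmse([3.0, 4.0], [0.0, 0.0]))  # sqrt((9 + 16) / 2) ~ 3.5355
```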
california_housing_dataframe = california_housing_dataframe.reindex(np.random.permutation(california_housing_dataframe.index))
california_housing_dataframe["median_house_value"] /= 1000
california_housing_dataframe
"""
Explanation: We'll randomize the data, just to be sure not to get any pathological ordering effects that might harm the performance of Stochastic Gradient Descent. Additionally, we'll scale median_house_value to be in units of thousands, so it can be learned a little more easily with learning rates in a range that we usually use.
End of explanation
"""
california_housing_dataframe.describe()
"""
Explanation: Examine the Data
It's a good idea to get to know your data a little bit before you work with it.
We'll print out a quick summary of a few useful statistics on each column: count of examples, mean, standard deviation, max, min, and various quantiles.
End of explanation
"""
# Define the input feature: total_rooms.
my_feature = california_housing_dataframe[["total_rooms"]]
# Configure a numeric feature column for total_rooms.
feature_columns = [tf.feature_column.numeric_column("total_rooms")]
"""
Explanation: Build the First Model
In this exercise, we'll try to predict median_house_value, which will be our label (sometimes also called a target). We'll use total_rooms as our input feature.
NOTE: Our data is at the city block level, so this feature represents the total number of rooms in that block.
To train our model, we'll use the LinearRegressor interface provided by the TensorFlow Estimator API. This API takes care of a lot of the low-level model plumbing, and exposes convenient methods for performing model training, evaluation, and inference.
Step 1: Define Features and Configure Feature Columns
In order to import our training data into TensorFlow, we need to specify what type of data each feature contains. There are two main types of data we'll use in this and future exercises:
Categorical Data: Data that is textual. In this exercise, our housing data set does not contain any categorical features, but examples you might see would be the home style, the words in a real-estate ad.
Numerical Data: Data that is a number (integer or float) and that you want to treat as a number. As we will discuss more later sometimes you might want to treat numerical data (e.g., a postal code) as if it were categorical.
In TensorFlow, we indicate a feature's data type using a construct called a feature column. Feature columns store only a description of the feature data; they do not contain the feature data itself.
To start, we're going to use just one numeric input feature, total_rooms. The following code pulls the total_rooms data from our california_housing_dataframe and defines the feature column using numeric_column, which specifies its data is numeric:
End of explanation
"""
# Define the label
targets = california_housing_dataframe["median_house_value"]
"""
Explanation: Step 2: Define the Target
Next, we'll define our target, which is median_house_value. Again, we can pull it from our california_housing_dataframe:
End of explanation
"""
# Use gradient descent as the optimizer for training the model.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.0000001)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
# Configure the linear regression model with our feature columns and optimizer.
# Set a learning rate of 0.0000001 for Gradient Descent.
linear_regressor = tf.estimator.LinearRegressor(
feature_columns=feature_columns,
optimizer=my_optimizer
)
"""
Explanation: Step 3: Configure the LinearRegressor
Next, we'll configure a linear regression model using LinearRegressor. We'll train this model using the GradientDescentOptimizer, which implements Mini-Batch Stochastic Gradient Descent (SGD). The learning_rate argument controls the size of the gradient step.
NOTE: To be safe, we also apply gradient clipping to our optimizer via clip_gradients_by_norm. Gradient clipping ensures the magnitude of the gradients do not become too large during training, which can cause gradient descent to fail.
End of explanation
"""
|
brillozon-code/pebaystats | examples/aggregation_example.ipynb | apache-2.0 | import numpy as np
import nose.tools as nt
from pebaystats import dstats
"""
Explanation: Demonstrate Aggregation of Descriptive Statistics
Here we create an array of random values and for each row of the array, we create
a distinct pebaystats.dstats object to accumulate the descriptive statistics
for the values in that row.
Once we have the data, we can use the numpy package to generate the expected
values for mean and variance of the data for each row. We can also generate the
expected mean and variance of the total data set.
We then accumulate each column value for each row into its respective dstats object
and when we have the data accumulated into these partial results, we can compare with
the expected row values.
We can then aggregate each of the row values into a final value for mean and variance
of the entire set of data and compare to the expected value.
We will need to import the numpy package as well as the dstats class from the pebaystats package. We import the nosetools package to allow direct comparison of expected and generated values.
End of explanation
"""
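To see why aggregating partial results can reproduce the exact overall statistics, the underlying pairwise update can be sketched in plain numpy. This is a standalone illustration of the combination formulas (in the spirit of Pébay's update), not the pebaystats internals:

```python
import numpy as np

def combine(n_a, mean_a, m2_a, n_b, mean_b, m2_b):
    """Merge two partial (count, mean, sum-of-squared-deviations) triples."""
    n = n_a + n_b
    delta = mean_b - mean_a
    mean = mean_a + delta * n_b / n
    m2 = m2_a + m2_b + delta ** 2 * n_a * n_b / n
    return n, mean, m2

a = np.array([1.0, 2.0, 3.0])
b = np.array([10.0, 20.0])
stats_a = (len(a), a.mean(), ((a - a.mean()) ** 2).sum())
stats_b = (len(b), b.mean(), ((b - b.mean()) ** 2).sum())
n, mean, m2 = combine(*stats_a, *stats_b)
print(mean, m2 / n)  # matches np.mean / np.var of the concatenated data
```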
np.random.seed(0)
### Random data size
rows = 10
cols = 100
### Each accumulators size
depth = 2
width = 1
"""
Explanation: Now we set the parameters, including the random seed for repeatability
End of explanation
"""
### Test data -- 10 rows of 100 columns each
test_arr = np.random.random((rows,cols))
print('Test data has shape: %d, %d' % test_arr.shape)
### Expected intermediate output
mid_mean = np.mean(test_arr,axis = 1)
mid_var = np.var(test_arr, axis = 1)
### Expected final output
final_mean = np.mean(test_arr)
final_var = np.var(test_arr)
"""
Explanation: The test array can now be created and its shape checked.
The individual row statistics and overall mean and variance can be generated as the expected
values at this time as well.
End of explanation
"""
### Create an object for each row and accumulate the data in that row
statsobjects = [ dstats(depth,width) for i in range(0,rows) ]
discard = [ statsobjects[i].add(test_arr[i,j])
for j in range(0,cols)
for i in range(0,rows)]
print('\nIntermediate Results\n')
for i in range(0,rows):
values = statsobjects[i].statistics()
print('Result %d mean: %11g, variance: %11g (M2/N: %11g/%d)' %(i,values[0],values[1],statsobjects[i].moments[1],statsobjects[i].n))
print('Expected mean: %11g, variance: %11g' %(mid_mean[i],mid_var[i]))
nt.assert_almost_equal(values[0], mid_mean[i], places = 14)
nt.assert_almost_equal(values[1], mid_var[i], places = 14)
"""
Explanation: Now we can create a dstats object for each row and accumulate the row data
into its respected accumulator. We can print the generated and expected intermediate (row)
values to check that all is working correctly.
End of explanation
"""
### Aggregate result into the index 0 accumulator
discard = [ statsobjects[0].aggregate(statsobjects[i]) for i in range(1,rows) ]
values = statsobjects[0].statistics()
print('\nAggregated Results\n')
print('Result mean: %11g, variance: %11g' %(values[0],values[1]))
print('Expected mean: %11g, variance: %11g' %(final_mean,final_var))
nt.assert_almost_equal(values[0], final_mean, places = 14)
nt.assert_almost_equal(values[1], final_var, places = 14)
"""
Explanation: Now we can aggregate each of the intermediate row results into a final mean and
variance value for the entire data set. And then compare with the numpy
generated expected values
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.12.1/examples/notebooks/generated/regression_plots.ipynb | bsd-3-clause | %matplotlib inline
from statsmodels.compat import lzip
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import ols
plt.rc("figure", figsize=(16,8))
plt.rc("font", size=14)
"""
Explanation: Regression Plots
End of explanation
"""
prestige = sm.datasets.get_rdataset("Duncan", "carData", cache=True).data
prestige.head()
prestige_model = ols("prestige ~ income + education", data=prestige).fit()
print(prestige_model.summary())
"""
Explanation: Duncan's Prestige Dataset
Load the Data
We can use a utility function to load any R dataset available from the great <a href="https://vincentarelbundock.github.io/Rdatasets/">Rdatasets package</a>.
End of explanation
"""
fig = sm.graphics.influence_plot(prestige_model, criterion="cooks")
fig.tight_layout(pad=1.0)
"""
Explanation: Influence plots
Influence plots show the (externally) studentized residuals vs. the leverage of each observation as measured by the hat matrix.
Externally studentized residuals are residuals that are scaled by their standard deviation where
$$var(\hat{\epsilon}_i)=\hat{\sigma}^2_i(1-h_{ii})$$
with
$$\hat{\sigma}^2_i=\frac{1}{n - p - 1}\sum_{j=1}^{n}\hat{\epsilon}_j^2 \;\;\;\forall \;\;\; j \neq i$$
$n$ is the number of observations and $p$ is the number of regressors. $h_{ii}$ is the $i$-th diagonal element of the hat matrix
$$H=X(X^{\;\prime}X)^{-1}X^{\;\prime}$$
The influence of each point can be visualized by the criterion keyword argument. Options are Cook's distance and DFFITS, two measures of influence.
End of explanation
"""
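The hat matrix and the leverages $h_{ii}$ defined above can be computed directly. The following is a standalone numpy sketch with synthetic data (the design matrix and sizes are illustrative assumptions, not the Duncan data):

```python
import numpy as np

rng = np.random.RandomState(0)
n_obs, p = 45, 3                       # mimic: intercept + two regressors
X = np.c_[np.ones(n_obs), rng.rand(n_obs, p - 1)]
H = X @ np.linalg.inv(X.T @ X) @ X.T   # H = X (X'X)^{-1} X'
leverage = np.diag(H)                  # h_ii, one leverage per observation
print(leverage.sum())                  # trace(H) = p, the number of regressors
```

Since H is a symmetric idempotent projection, each leverage lies between 0 and 1, and their sum equals the number of model parameters.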
fig = sm.graphics.plot_partregress("prestige", "income", ["income", "education"], data=prestige)
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_partregress("prestige", "income", ["education"], data=prestige)
fig.tight_layout(pad=1.0)
"""
Explanation: As you can see there are a few worrisome observations. Both contractor and reporter have low leverage but a large residual. <br />
RR.engineer has small residual and large leverage. Conductor and minister have both high leverage and large residuals, and, <br />
therefore, large influence.
Partial Regression Plots (Duncan)
Since we are doing multivariate regressions, we cannot just look at individual bivariate plots to discern relationships. <br />
Instead, we want to look at the relationship of the dependent variable and independent variables conditional on the other <br />
independent variables. We can do this through using partial regression plots, otherwise known as added variable plots. <br />
In a partial regression plot, to discern the relationship between the response variable and the $k$-th variable, we compute <br />
the residuals by regressing the response variable versus the independent variables excluding $X_k$. We can denote this by <br />
$X_{\sim k}$. We then compute the residuals by regressing $X_k$ on $X_{\sim k}$. The partial regression plot is the plot <br />
of the former versus the latter residuals. <br />
The notable points of this plot are that the fitted line has slope $\beta_k$ and intercept zero. The residuals of this plot <br />
are the same as those of the least squares fit of the original model with full $X$. You can discern the effects of the <br />
individual data values on the estimation of a coefficient easily. If obs_labels is True, then these points are annotated <br />
with their observation label. You can also see the violation of underlying assumptions such as homoskedasticity and <br />
linearity.
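The two stated properties — slope equal to $\beta_k$ and residuals identical to the full fit — follow from the Frisch–Waugh–Lovell theorem and are easy to verify numerically. A minimal sketch on simulated data (variable names echo the prestige example but the data here is synthetic):

```python
import numpy as np

rng = np.random.RandomState(1)
n = 200
income = rng.normal(size=n)
education = 0.5 * income + rng.normal(size=n)
prestige = 3.0 * income + 2.0 * education + rng.normal(size=n)

def resid_on(y, predictors):
    # residuals of y regressed on an intercept plus the given predictors
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return y - X @ beta

# residuals of the response and of X_k, each regressed on X_{~k}
ry = resid_on(prestige, [education])  # prestige ~ education
rx = resid_on(income, [education])    # income ~ education

# the slope of ry on rx equals the full-model coefficient of income
slope = (rx @ ry) / (rx @ rx)
full = np.linalg.lstsq(
    np.column_stack([np.ones(n), income, education]), prestige, rcond=None
)[0]
print(round(slope, 6) == round(full[1], 6))  # True
```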
End of explanation
"""
subset = ~prestige.index.isin(["conductor", "RR.engineer", "minister"])
prestige_model2 = ols("prestige ~ income + education", data=prestige, subset=subset).fit()
print(prestige_model2.summary())
"""
Explanation: As you can see the partial regression plot confirms the influence of conductor, minister, and RR.engineer on the partial relationship between income and prestige. The cases greatly decrease the effect of income on prestige. Dropping these cases confirms this.
End of explanation
"""
fig = sm.graphics.plot_partregress_grid(prestige_model)
fig.tight_layout(pad=1.0)
"""
Explanation: For a quick check of all the regressors, you can use plot_partregress_grid. These plots will not label the <br />
points, but you can use them to identify problems and then use plot_partregress to get more information.
End of explanation
"""
fig = sm.graphics.plot_ccpr(prestige_model, "education")
fig.tight_layout(pad=1.0)
"""
Explanation: Component-Component plus Residual (CCPR) Plots
The CCPR plot provides a way to judge the effect of one regressor on the <br />
response variable by taking into account the effects of the other <br />
independent variables. The partial residuals plot is defined as <br />
$\text{Residuals} + B_iX_i \text{ }\text{ }$ versus $X_i$. The component adds $B_iX_i$ versus <br />
$X_i$ to show where the fitted line would lie. Care should be taken if $X_i$ <br />
is highly correlated with any of the other independent variables. If this <br />
is the case, the variance evident in the plot will be an underestimate of <br />
the true variance.
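The definition above can be checked directly: because OLS residuals are orthogonal to every regressor, regressing the partial residuals on $X_i$ recovers $B_i$ exactly. A small numpy sketch on synthetic data:

```python
import numpy as np

rng = np.random.RandomState(2)
n = 100
x1 = rng.normal(size=n)
x2 = 0.3 * x1 + rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# partial residuals for x1: Residuals + B_1 * x1
partial_resid = resid + beta[1] * x1

# fitting a line through the CCPR scatter recovers B_1
slope, intercept = np.polyfit(x1, partial_resid, 1)
print(round(slope, 6) == round(beta[1], 6))  # True
```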
End of explanation
"""
fig = sm.graphics.plot_ccpr_grid(prestige_model)
fig.tight_layout(pad=1.0)
"""
Explanation: As you can see the relationship between the variation in prestige explained by education conditional on income seems to be linear, though you can see there are some observations that are exerting considerable influence on the relationship. We can quickly look at more than one variable by using plot_ccpr_grid.
End of explanation
"""
fig = sm.graphics.plot_regress_exog(prestige_model, "education")
fig.tight_layout(pad=1.0)
"""
Explanation: Single Variable Regression Diagnostics
The plot_regress_exog function is a convenience function that gives a 2x2 plot containing the dependent variable and fitted values with confidence intervals vs. the independent variable chosen, the residuals of the model vs. the chosen independent variable, a partial regression plot, and a CCPR plot. This function can be used for quickly checking modeling assumptions with respect to a single regressor.
End of explanation
"""
fig = sm.graphics.plot_fit(prestige_model, "education")
fig.tight_layout(pad=1.0)
"""
Explanation: Fit Plot
The plot_fit function plots the fitted values versus a chosen independent variable. It includes prediction confidence intervals and optionally plots the true dependent variable.
End of explanation
"""
#dta = pd.read_csv("http://www.stat.ufl.edu/~aa/social/csv_files/statewide-crime-2.csv")
#dta = dta.set_index("State", inplace=True).dropna()
#dta.rename(columns={"VR" : "crime",
# "MR" : "murder",
# "M" : "pctmetro",
# "W" : "pctwhite",
# "H" : "pcths",
# "P" : "poverty",
# "S" : "single"
# }, inplace=True)
#
#crime_model = ols("murder ~ pctmetro + poverty + pcths + single", data=dta).fit()
dta = sm.datasets.statecrime.load_pandas().data
crime_model = ols("murder ~ urban + poverty + hs_grad + single", data=dta).fit()
print(crime_model.summary())
"""
Explanation: Statewide Crime 2009 Dataset
Compare the following to http://www.ats.ucla.edu/stat/stata/webbooks/reg/chapter4/statareg_self_assessment_answers4.htm
Though the data here is not the same as in that example. You could run that example by uncommenting the necessary cells below.
End of explanation
"""
fig = sm.graphics.plot_partregress_grid(crime_model)
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_partregress("murder", "hs_grad", ["urban", "poverty", "single"], data=dta)
fig.tight_layout(pad=1.0)
"""
Explanation: Partial Regression Plots (Crime Data)
End of explanation
"""
fig = sm.graphics.plot_leverage_resid2(crime_model)
fig.tight_layout(pad=1.0)
"""
Explanation: Leverage-Resid<sup>2</sup> Plot
Closely related to the influence_plot is the leverage-resid<sup>2</sup> plot.
End of explanation
"""
fig = sm.graphics.influence_plot(crime_model)
fig.tight_layout(pad=1.0)
"""
Explanation: Influence Plot
End of explanation
"""
from statsmodels.formula.api import rlm
rob_crime_model = rlm("murder ~ urban + poverty + hs_grad + single", data=dta,
M=sm.robust.norms.TukeyBiweight(3)).fit(conv="weights")
print(rob_crime_model.summary())
#rob_crime_model = rlm("murder ~ pctmetro + poverty + pcths + single", data=dta, M=sm.robust.norms.TukeyBiweight()).fit(conv="weights")
#print(rob_crime_model.summary())
"""
Explanation: Using robust regression to correct for outliers.
Part of the problem here in recreating the Stata results is that M-estimators are not robust to leverage points. MM-estimators should do better with this example.
End of explanation
"""
weights = rob_crime_model.weights
idx = weights > 0
X = rob_crime_model.model.exog[idx.values]
ww = weights[idx] / weights[idx].mean()
hat_matrix_diag = ww*(X*np.linalg.pinv(X).T).sum(1)
resid = rob_crime_model.resid
resid2 = resid**2
resid2 /= resid2.sum()
nobs = int(idx.sum())
hm = hat_matrix_diag.mean()
rm = resid2.mean()
from statsmodels.graphics import utils
fig, ax = plt.subplots(figsize=(16,8))
ax.plot(resid2[idx], hat_matrix_diag, 'o')
ax = utils.annotate_axes(range(nobs), labels=rob_crime_model.model.data.row_labels[idx],
points=lzip(resid2[idx], hat_matrix_diag), offset_points=[(-5,5)]*nobs,
size="large", ax=ax)
ax.set_xlabel("resid2")
ax.set_ylabel("leverage")
ylim = ax.get_ylim()
ax.vlines(rm, *ylim)
xlim = ax.get_xlim()
ax.hlines(hm, *xlim)
ax.margins(0,0)
"""
Explanation: There is not yet an influence diagnostics method as part of RLM, but we can recreate them. (This depends on the status of issue #888)
End of explanation
"""
|
jupyter-widgets/ipywidgets | docs/source/examples/Using Interact.ipynb | bsd-3-clause | from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
"""
Explanation: Using Interact
The interact function (ipywidgets.interact) automatically creates user interface (UI) controls for exploring code and data interactively. It is the easiest way to get started using IPython's widgets.
End of explanation
"""
def f(x):
return x
"""
Explanation: Basic interact
At the most basic level, interact autogenerates UI controls for function arguments, and then calls the function with those arguments when you manipulate the controls interactively. To use interact, you need to define a function that you want to explore. Here is a function that returns its only argument x.
End of explanation
"""
interact(f, x=10);
"""
Explanation: When you pass this function as the first argument to interact along with an integer keyword argument (x=10), a slider is generated and bound to the function parameter.
End of explanation
"""
interact(f, x=True);
"""
Explanation: When you move the slider, the function is called, and its return value is printed.
If you pass True or False, interact will generate a checkbox:
End of explanation
"""
interact(f, x='Hi there!');
"""
Explanation: If you pass a string, interact will generate a text box.
End of explanation
"""
@interact(x=True, y=1.0)
def g(x, y):
return (x, y)
"""
Explanation: interact can also be used as a decorator. This allows you to define a function and interact with it in a single shot. As this example shows, interact also works with functions that have multiple arguments.
End of explanation
"""
def h(p, q):
return (p, q)
"""
Explanation: Fixing arguments using fixed
There are times when you may want to explore a function using interact, but fix one or more of its arguments to specific values. This can be accomplished by wrapping values with the fixed function.
End of explanation
"""
interact(h, p=5, q=fixed(20));
"""
Explanation: When we call interact, we pass fixed(20) for q to hold it fixed at a value of 20.
End of explanation
"""
interact(f, x=widgets.IntSlider(min=-10, max=30, step=1, value=10));
"""
Explanation: Notice that a slider is only produced for p as the value of q is fixed.
Widget abbreviations
When you pass an integer-valued keyword argument of 10 (x=10) to interact, it generates an integer-valued slider control with a range of [-10,+3*10]. In this case, 10 is an abbreviation for an actual slider widget:
python
IntSlider(min=-10, max=30, step=1, value=10)
In fact, we can get the same result if we pass this IntSlider as the keyword argument for x:
End of explanation
"""
interact(f, x=(0,4));
"""
Explanation: The following table gives an overview of different argument types, and how they map to interactive controls:
<table class="table table-condensed table-bordered">
<tr><td><strong>Keyword argument</strong></td><td><strong>Widget</strong></td></tr>
<tr><td>`True` or `False`</td><td>Checkbox</td></tr>
<tr><td>`'Hi there'`</td><td>Text</td></tr>
<tr><td>`value` or `(min,max)` or `(min,max,step)` if integers are passed</td><td>IntSlider</td></tr>
<tr><td>`value` or `(min,max)` or `(min,max,step)` if floats are passed</td><td>FloatSlider</td></tr>
<tr><td>`['orange','apple']` or `[('one', 1), ('two', 2)]`</td><td>Dropdown</td></tr>
</table>
Note that a dropdown is used if a list or a list of tuples is given (signifying discrete choices), and a slider is used if a tuple is given (signifying a range).
You have seen how the checkbox and text widgets work above. Here, more details about the different abbreviations for sliders and dropdowns are given.
If a 2-tuple of integers is passed (min, max), an integer-valued slider is produced with those minimum and maximum values (inclusively). In this case, the default step size of 1 is used.
End of explanation
"""
interact(f, x=(0,8,2));
"""
Explanation: If a 3-tuple of integers is passed (min,max,step), the step size can also be set.
End of explanation
"""
interact(f, x=(0.0,10.0));
"""
Explanation: A float-valued slider is produced if any of the elements of the tuples are floats. Here the minimum is 0.0, the maximum is 10.0 and step size is 0.1 (the default).
End of explanation
"""
interact(f, x=(0.0,10.0,0.01));
"""
Explanation: The step size can be changed by passing a third element in the tuple.
End of explanation
"""
@interact(x=(0.0,20.0,0.5))
def h(x=5.5):
return x
"""
Explanation: For both integer and float-valued sliders, you can pick the initial value of the widget by passing a default keyword argument to the underlying Python function. Here we set the initial value of a float slider to 5.5.
End of explanation
"""
interact(f, x=['apples','oranges']);
"""
Explanation: Dropdown menus are constructed by passing a list of strings. In this case, the strings are both used as the names in the dropdown menu UI and passed to the underlying Python function.
End of explanation
"""
interact(f, x=[('one', 10), ('two', 20)]);
"""
Explanation: If you want a dropdown menu that passes non-string values to the Python function, you can pass a list of ('label', value) pairs. The first items are the names in the dropdown menu UI and the second items are values that are the arguments passed to the underlying Python function.
End of explanation
"""
interact(f, x=widgets.Combobox(options=["Chicago", "New York", "Washington"], value="Chicago"));
"""
Explanation: Finally, if you need more granular control than that afforded by the abbreviation, you can pass a ValueWidget instance as the argument. A ValueWidget is a widget that aims to control a single value. Most of the widgets bundled with ipywidgets inherit from ValueWidget. For more information, see this section on widget types.
End of explanation
"""
from IPython.display import display
def f(a, b):
display(a + b)
return a+b
"""
Explanation: interactive
In addition to interact, IPython provides another function, interactive, that is useful when you want to reuse the widgets that are produced or access the data that is bound to the UI controls.
Note that unlike interact, the return value of the function will not be displayed automatically, but you can display a value inside the function with IPython.display.display.
Here is a function that displays the sum of its two arguments and returns the sum. The display line may be omitted if you don't want to show the result of the function.
End of explanation
"""
w = interactive(f, a=10, b=20)
"""
Explanation: Unlike interact, interactive returns a Widget instance rather than immediately displaying the widget.
End of explanation
"""
type(w)
"""
Explanation: The widget is an interactive, a subclass of VBox, which is a container for other widgets.
End of explanation
"""
w.children
"""
Explanation: The children of the interactive are two integer-valued sliders and an output widget, produced by the widget abbreviations above.
End of explanation
"""
display(w)
"""
Explanation: To actually display the widgets, you can use IPython's display function.
End of explanation
"""
w.kwargs
"""
Explanation: At this point, the UI controls work just like they would if interact had been used. You can manipulate them interactively and the function will be called. However, the widget instance returned by interactive also gives you access to the current keyword arguments and return value of the underlying Python function.
Here are the current keyword arguments. If you rerun this cell after manipulating the sliders, the values will have changed.
End of explanation
"""
w.result
"""
Explanation: Here is the current return value of the function.
End of explanation
"""
def slow_function(i):
print(int(i),list(x for x in range(int(i)) if
str(x)==str(x)[::-1] and
str(x**2)==str(x**2)[::-1]))
return
%%time
slow_function(1e6)
"""
Explanation: Disabling continuous updates
When interacting with long running functions, realtime feedback is a burden instead of being helpful. See the following example:
End of explanation
"""
from ipywidgets import FloatSlider
interact(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5));
"""
Explanation: Notice that the output is updated even while dragging the mouse on the slider. This is not useful for long running functions due to lagging:
End of explanation
"""
interact_manual(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5));
"""
Explanation: There are two ways to mitigate this. You can either only execute on demand, or restrict execution to mouse release events.
interact_manual
The interact_manual function provides a variant of interaction that allows you to restrict execution so it is only done on demand. A button is added to the interact controls that allows you to trigger an execute event.
End of explanation
"""
slow = interactive(slow_function, {'manual': True}, i=widgets.FloatSlider(min=1e4, max=1e6, step=1e4))
slow
"""
Explanation: You can do the same thing with interactive by using a dict as the second argument, as shown below.
End of explanation
"""
interact(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5, continuous_update=False));
"""
Explanation: continuous_update
If you are using slider widgets, you can set the continuous_update kwarg to False. continuous_update is a kwarg of slider widgets that restricts executions to mouse release events.
End of explanation
"""
a = widgets.IntSlider()
b = widgets.IntSlider()
c = widgets.IntSlider()
ui = widgets.HBox([a, b, c])
def f(a, b, c):
print((a, b, c))
out = widgets.interactive_output(f, {'a': a, 'b': b, 'c': c})
display(ui, out)
"""
Explanation: More control over the user interface: interactive_output
interactive_output provides additional flexibility: you can control how the UI elements are laid out.
Unlike interact, interactive, and interact_manual, interactive_output does not generate a user interface for the widgets. This is powerful, because it means you can create a widget, put it in a box, and then pass the widget to interactive_output, and have control over the widget and its layout.
End of explanation
"""
x_widget = FloatSlider(min=0.0, max=10.0, step=0.05)
y_widget = FloatSlider(min=0.5, max=10.0, step=0.05, value=5.0)
def update_x_range(*args):
x_widget.max = 2.0 * y_widget.value
y_widget.observe(update_x_range, 'value')
def printer(x, y):
print(x, y)
interact(printer,x=x_widget, y=y_widget);
"""
Explanation: Arguments that are dependent on each other
Arguments that are dependent on each other can be expressed manually using observe. See the following example, where one variable is used to describe the bounds of another. For more information, please see the widget events example notebook.
End of explanation
"""
%matplotlib inline
from ipywidgets import interactive
import matplotlib.pyplot as plt
import numpy as np
def f(m, b):
plt.figure(2)
x = np.linspace(-10, 10, num=1000)
plt.plot(x, m * x + b)
plt.ylim(-5, 5)
plt.show()
interactive_plot = interactive(f, m=(-2.0, 2.0), b=(-3, 3, 0.5))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
"""
Explanation: Flickering and jumping output
On occasion, you may notice interact output flickering and jumping, causing the notebook scroll position to change as the output is updated. The interactive control has a layout, so we can set its height to an appropriate value (currently chosen manually) so that it will not change size as it is updated.
End of explanation
"""
import ipywidgets as widgets
from IPython.display import display
a = widgets.IntSlider(value=5, min=0, max=10)
def f1(a):
display(a)
def f2(a):
display(a * 2)
out1 = widgets.interactive_output(f1, {'a': a})
out2 = widgets.interactive_output(f2, {'a': a})
display(a)
display(out1)
display(out2)
"""
Explanation: Interact with multiple functions
You may want to have a single widget interact with multiple functions. This is possible by simply linking the widget to both functions using the interactive_output() function. The order of execution of the functions will be the order they were linked to the widget.
End of explanation
"""
|
goodwordalchemy/thinkstats_notes_and_exercises | code/chap14_analytical_methods.ipynb | gpl-3.0 | live, firsts, others = first.MakeFrames()
"""
Explanation: Central Limit Theorem if we add up n values for almost any distribution the distribution of the sum converges to normal as n increases.
values have to be drawn independently
values have to come from same distribution (relaxed)
distribution has to have finite mean and variance
rate of convergence depends on skewness of the distribution
t distribution the sampling distribution of correlation under the null hypothesis
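A quick numerical illustration of that convergence, using exponential summands (a skewed distribution, so convergence is slower than for symmetric ones):

```python
import numpy as np

rng = np.random.RandomState(0)

def summed_sample(n, iters=10000):
    # each value is the sum of n exponential draws
    return rng.exponential(size=(iters, n)).sum(axis=1)

for n in [1, 10, 100]:
    s = summed_sample(n)
    # sample skewness; for sums of exponentials the true value is
    # 2/sqrt(n), shrinking toward 0 (normality) as n grows
    skew = ((s - s.mean()) ** 3).mean() / s.std() ** 3
    print(n, round(skew, 2))
```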
Exercise 14.1
choose distribution for growth factor by year
generate a sample of adult weights choosing from the distribution of birthweights, choosing a sequence from f and computing the product
what value of n is needed to converge to lognormal?
End of explanation
"""
np.random.seed(17)
def makefDist(n=18, mu=0.17, sigma=0.1, iters=1000):
samples = np.random.lognormal(mu, sigma, n)
return samples
def genWeight(birthweights, age):
"""
birthweights: list of birthweights
age: int indicating the age in years
"""
adultweights = []
for bw in birthweights:
fDist = makefDist(age)
fDist = np.append(fDist, bw)
fDist = np.log(fDist)
fDist = np.sum(fDist)
result = np.exp(fDist)
adultweights.append(result)
return adultweights
boyWeights = live[live.babysex==1].totalwgt_lb
boyWeights = boyWeights.dropna()
aWeights = genWeight(boyWeights, 19)
thinkplot.PrePlot(2, rows=2)
thinkplot.SubPlot(1)
cdf = thinkstats2.Cdf(aWeights)
thinkplot.Cdf(cdf)
thinkplot.SubPlot(2)
pdf = thinkstats2.EstimatedPdf(aWeights)
thinkplot.Pdf(pdf)
thinkplot.Show()
print "median",cdf.Percentile(50)
print "mean", cdf.Mean()
print "iqr", cdf.Percentile(25), cdf.Percentile(75)
print "ci", cdf.Percentile(5), cdf.Percentile(95)
def makeAWsamples(nVals, aWeights, iters=1000):
samples = []
for n in nVals:
sample = [np.sum(np.random.choice(aWeights, n))
for _ in range(iters)]
samples.append((n, sample))
return samples
nVals = [1, 10, 100, 1000]
samples = makeAWsamples(nVals, aWeights)
thinkplot.PrePlot(len(nVals), rows=len(nVals)//2, cols=len(nVals)//2)
normal.NormalPlotSamples(samples)
"""
Explanation: average male weight:
End of explanation
"""
dist1 = normal.SamplingDistMean(firsts.prglngth, len(firsts))
dist2 = normal.SamplingDistMean(others.prglngth, len(others))
dist = dist1 - dist2
print 'Standard Error:', dist.sigma
print '90% CI', dist.Percentile(5), dist.Percentile(95)
"""
Explanation: The correct answer here was to sample the "factors" from a normal distribution
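That approach can be sketched as follows. The mean and spread of the factor distribution here are illustrative placeholders, not values fitted to the NSFG data:

```python
import numpy as np

rng = np.random.RandomState(17)

def gen_adult_weight(birth_weight, n_years=18, mu=1.09, sigma=0.03):
    # multiply the birth weight by one growth factor per year, each
    # drawn from a normal distribution (mu, sigma are illustrative)
    factors = rng.normal(mu, sigma, n_years)
    return birth_weight * np.prod(factors)

adult_weights = np.array([gen_adult_weight(7.5) for _ in range(1000)])
# the log of a product of factors is a sum of logs, so by the CLT the
# result is approximately lognormal even for modest n
print(round(np.median(adult_weights), 1))
```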
Exercise 14.2
End of explanation
"""
##null hypothesis is that boys and girls scores come
##from the same population
male_before = normal.Normal(3.57, 0.28**2)
male_after = normal.Normal(3.44, 0.16**2)
fem_before = normal.Normal(1.91, 0.32**2)
fem_after = normal.Normal(3.18, 0.16**2)
diff_before = fem_before - male_before
#thinkplot.Cdf(diff_before)
diff_after = fem_after - male_after
# thinkplot.Cdf(diff_after)
print "before: mean, p-value", diff_before.mu, 1-diff_before.Prob(0)
print "after: mean, p-value", diff_after.mu, 1-diff_after.Prob(0)
diff = diff_after - diff_before
# thinkplot.Cdf(diff)
print "diff in gender gap: mean, p-value", diff.mu, diff.Prob(0)
"""
Explanation: standard error is equal to the square root of the variance of the distribution of differences
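That works because variances of independent variables add: Var(X − Y) = Var(X) + Var(Y). A quick simulation check, with illustrative means and standard deviations rather than the actual NSFG values:

```python
import numpy as np

rng = np.random.RandomState(0)
iters = 5000
n1, n2 = 4413, 4735          # roughly the sizes of firsts and others
mu1, mu2 = 38.6, 38.5        # illustrative mean pregnancy lengths (weeks)
s1, s2 = 2.7, 2.6            # illustrative standard deviations

# simulate the two sampling distributions of the mean, then difference
means1 = rng.normal(mu1, s1 / np.sqrt(n1), iters)
means2 = rng.normal(mu2, s2 / np.sqrt(n2), iters)
diff = means1 - means2

# analytic standard error from the additivity of variances
analytic_se = np.sqrt(s1**2 / n1 + s2**2 / n2)
print(round(diff.std(), 4), round(analytic_se, 4))
```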
Exercise 14.3
End of explanation
"""
|
napsternxg/gensim | docs/notebooks/nmf_tutorial.ipynb | gpl-3.0 | import logging
import time
from contextlib import contextmanager
import os
from multiprocessing import Process
import psutil
import numpy as np
import pandas as pd
from numpy.random import RandomState
from sklearn import decomposition
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition.nmf import NMF as SklearnNmf
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import f1_score
import gensim.downloader
from gensim import matutils, utils
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel, TfidfModel
from gensim.models.basemodel import BaseTopicModel
from gensim.models.nmf import Nmf as GensimNmf
from gensim.parsing.preprocessing import preprocess_string
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
"""
Explanation: Gensim Tutorial on Online Non-Negative Matrix Factorization
This notebooks explains basic ideas behind the open source NMF implementation in Gensim, including code examples for applying NMF to text processing.
What's in this tutorial?
Introduction: Why NMF?
Code example on 20 Newsgroups
Benchmarks against Sklearn's NMF and Gensim's LDA
Large-scale NMF training on the English Wikipedia (sparse text vectors)
NMF on face decomposition (dense image vectors)
1. Introduction to NMF
What's in a name?
Gensim's Online Non-Negative Matrix Factorization (NMF, NNMF, ONMF) implementation is based on Renbo Zhao, Vincent Y. F. Tan: Online Nonnegative Matrix Factorization with Outliers, 2016 and is optimized for extremely large, sparse, streamed inputs. Such inputs happen in NLP with unsupervised training on massive text corpora.
Why Online? Because corpora and datasets in modern ML can be very large, and RAM is limited. Unlike batch algorithms, online algorithms learn iteratively, streaming through the available training examples, without loading the entire dataset into RAM or requiring random-access to the data examples.
Why Non-Negative? Because non-negativity leads to more interpretable, sparse "human-friendly" topics. This is in contrast to e.g. SVD (another popular matrix factorization method with super-efficient implementation in Gensim), which produces dense negative factors and thus harder-to-interpret topics.
Matrix factorizations are the corner stone of modern machine learning. They can be used either directly (recommendation systems, bi-clustering, image compression, topic modeling…) or as internal routines in more complex deep learning algorithms.
How ONNMF works
Terminology:
- corpus is a stream of input documents = training examples
- batch is a chunk of input corpus, a word-document matrix mini-batch that fits in RAM
- W is a word-topic matrix (to be learned; stored in the resulting model)
- h is a topic-document matrix (to be learned; not stored, but rather inferred for documents on-the-fly)
- A, B - matrices that accumulate information from consecutive chunks. A = h.dot(ht), B = v.dot(ht).
The idea behind the algorithm is as follows:
```
Initialize W, A and B matrices
for batch in input corpus batches:
infer h:
do coordinate gradient descent step to find h that minimizes ||batch - Wh|| in L2 norm
bound h so that it is non-negative
update A and B:
A = h.dot(ht)
B = batch.dot(ht)
update W:
do gradient descent step to find W that minimizes ||0.5*trace(WtWA) - trace(WtB)|| in L2 norm
```
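The loop above can be sketched in plain numpy. This is illustrative only: step sizes, iteration counts and initialization are arbitrary, and Gensim's actual implementation differs in many details (outlier handling, sparsity, regularization):

```python
import numpy as np

rng = np.random.RandomState(0)

def online_nmf(batches, n_terms, n_topics, w_steps=5, h_steps=30, eps=1e-9):
    # W: word-topic matrix; A, B: accumulators, as in the pseudocode above
    W = np.abs(rng.normal(size=(n_terms, n_topics)))
    A = np.zeros((n_topics, n_topics))
    B = np.zeros((n_terms, n_topics))
    for V in batches:  # V: non-negative (n_terms, batch_size) chunk
        # infer h: projected gradient descent on ||V - W h||, clipped at 0
        h = np.abs(rng.normal(size=(n_topics, V.shape[1])))
        step = 1.0 / (np.linalg.norm(W.T @ W) + eps)
        for _ in range(h_steps):
            h = np.maximum(h - step * (W.T @ (W @ h - V)), 0)
        # update the accumulators
        A += h @ h.T
        B += V @ h.T
        # update W: gradient of 0.5*tr(W'WA) - tr(W'B) is WA - B
        step = 1.0 / (np.linalg.norm(A) + eps)
        for _ in range(w_steps):
            W = np.maximum(W - step * (W @ A - B), 0)
    return W

# toy usage: stream a random non-negative matrix in two chunks
V = np.abs(rng.normal(size=(30, 40)))
W = online_nmf([V[:, :20], V[:, 20:]], n_terms=30, n_topics=5)
```

Note that only W, A and B persist across batches; each h is inferred per chunk and discarded, which is what keeps memory usage independent of corpus size.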
2. Code example: NMF on 20 Newsgroups
Preprocessing
Let's import the models we'll be using throughout this tutorial (numpy==1.14.2, matplotlib==3.0.2, pandas==0.24.1, sklearn==0.19.1, gensim==3.7.1) and set up logging at INFO level.
Gensim uses logging generously to inform users what's going on. Eyeballing the logs is a good sanity check, to make sure everything is working as expected.
Only numpy and gensim are actually needed to train and use NMF. The other imports are used only to make our life a little easier in this tutorial.
End of explanation
"""
newsgroups = gensim.downloader.load('20-newsgroups')
categories = [
'alt.atheism',
'comp.graphics',
'rec.motorcycles',
'talk.politics.mideast',
'sci.space'
]
categories = {name: idx for idx, name in enumerate(categories)}
"""
Explanation: Dataset preparation
Let's load the notorious 20 Newsgroups dataset from Gensim's repository of pre-trained models and corpora:
End of explanation
"""
random_state = RandomState(42)
trainset = np.array([
{
'data': doc['data'],
'target': categories[doc['topic']],
}
for doc in newsgroups
if doc['topic'] in categories and doc['set'] == 'train'
])
random_state.shuffle(trainset)
testset = np.array([
{
'data': doc['data'],
'target': categories[doc['topic']],
}
for doc in newsgroups
if doc['topic'] in categories and doc['set'] == 'test'
])
random_state.shuffle(testset)
"""
Explanation: Create a train/test split:
End of explanation
"""
train_documents = [preprocess_string(doc['data']) for doc in trainset]
test_documents = [preprocess_string(doc['data']) for doc in testset]
"""
Explanation: We'll use very simple preprocessing with stemming to tokenize each document. YMMV; in your application, use whatever preprocessing makes sense in your domain. Correctly preparing the input has major impact on any subsequent ML training.
End of explanation
"""
dictionary = Dictionary(train_documents)
dictionary.filter_extremes(no_below=5, no_above=0.5, keep_n=20000) # filter out too in/frequent tokens
"""
Explanation: Dictionary compilation
Let's create a mapping between tokens and their ids. Another option would be a HashDictionary, saving ourselves one pass over the training documents.
End of explanation
"""
tfidf = TfidfModel(dictionary=dictionary)
train_corpus = [
dictionary.doc2bow(document)
for document
in train_documents
]
test_corpus = [
dictionary.doc2bow(document)
for document
in test_documents
]
train_corpus_tfidf = list(tfidf[train_corpus])
test_corpus_tfidf = list(tfidf[test_corpus])
"""
Explanation: Create training corpus
Let's vectorize the training corpus into the bag-of-words format. We'll train LDA on a BOW and NMFs on an TF-IDF corpus:
End of explanation
"""
%%time
nmf = GensimNmf(
corpus=train_corpus_tfidf,
num_topics=5,
id2word=dictionary,
chunksize=1000,
passes=5,
eval_every=10,
minimum_probability=0,
random_state=0,
kappa=1,
)
"""
Explanation: Here we simply stored the bag-of-words vectors into a list, but Gensim accepts any iterable as input, including streamed ones. To learn more about memory-efficient input iterables, see our Data Streaming in Python: Generators, Iterators, Iterables tutorial.
NMF Model Training
The API works in the same way as other Gensim models, such as LdaModel or LsiModel.
Notable model parameters:
kappa float, optional
Gradient descent step size.
Larger value makes the model train faster, but could lead to non-convergence if set too large.
w_max_iter int, optional
Maximum number of iterations to train W per each batch.
w_stop_condition float, optional
If the error difference gets smaller than this, training of W stops for the current batch.
h_r_max_iter int, optional
Maximum number of iterations to train h per each batch.
h_r_stop_condition float, optional
If the error difference gets smaller than this, training of h stops for the current batch.
Learn an NMF model with 5 topics:
End of explanation
"""
nmf.show_topics()
"""
Explanation: View the learned topics
End of explanation
"""
CoherenceModel(
model=nmf,
corpus=test_corpus_tfidf,
coherence='u_mass'
).get_coherence()
"""
Explanation: Evaluation measure: Coherence
Topic coherence measures how often the most frequent tokens from each topic co-occur in the same document. Larger is better.
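For intuition, a simplified sketch of the u_mass score for a single topic's top tokens (tokens are assumed ordered by corpus frequency; Gensim's CoherenceModel additionally averages over topics and handles smoothing and segmentation differently):

```python
import itertools
import math

def u_mass(top_tokens, documents):
    # C = sum over ordered pairs of log((D(w_i, w_j) + 1) / D(w_j)),
    # where D counts documents containing all the given tokens
    doc_sets = [set(doc) for doc in documents]
    def d(*tokens):
        return sum(all(t in ds for t in tokens) for ds in doc_sets)
    return sum(
        math.log((d(w_i, w_j) + 1) / d(w_j))
        for w_i, w_j in itertools.combinations(top_tokens, 2)
    )

docs = [["space", "nasa", "launch"], ["space", "launch"], ["bike", "ride"]]
print(round(u_mass(["space", "launch", "nasa"], docs), 3))  # 1.792
```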
End of explanation
"""
print(testset[0]['data'])
print('=' * 100)
print("Topics: {}".format(nmf[test_corpus[0]]))
"""
Explanation: Topic inference on new documents
With the NMF model trained, let's fetch one news document not seen during training, and infer its topic vector.
End of explanation
"""
word = dictionary[0]
print("Word: {}".format(word))
print("Topics: {}".format(nmf.get_term_topics(word)))
"""
Explanation: Word topic inference
Similarly, we can inspect the topic distribution assigned to a vocabulary term:
End of explanation
"""
def density(matrix):
return (matrix > 0).mean()
"""
Explanation: Internal NMF state
Density is a fraction of non-zero elements in a matrix.
End of explanation
"""
print("Density: {}".format(density(nmf._W)))
"""
Explanation: Term-topic matrix of shape (words, topics).
End of explanation
"""
print("Density: {}".format(density(nmf._h)))
"""
Explanation: Topic-document matrix for the last batch of shape (topics, batch)
End of explanation
"""
fixed_params = dict(
chunksize=1000,
num_topics=5,
id2word=dictionary,
passes=5,
eval_every=10,
minimum_probability=0,
random_state=0,
)
@contextmanager
def measure_ram(output, tick=5):
def _measure_ram(pid, output, tick=tick):
py = psutil.Process(pid)
with open(output, 'w') as outfile:
while True:
memory = py.memory_info().rss
outfile.write("{}\n".format(memory))
outfile.flush()
time.sleep(tick)
pid = os.getpid()
p = Process(target=_measure_ram, args=(pid, output, tick))
p.start()
yield
p.terminate()
def get_train_time_and_ram(func, name, tick=5):
memprof_filename = "{}.memprof".format(name)
start = time.time()
with measure_ram(memprof_filename, tick=tick):
result = func()
elapsed_time = pd.to_timedelta(time.time() - start, unit='s').round('s')
memprof_df = pd.read_csv(memprof_filename, squeeze=True)
mean_ram = "{} MB".format(
int(memprof_df.mean() // 2 ** 20),
)
max_ram = "{} MB".format(int(memprof_df.max() // 2 ** 20))
return elapsed_time, mean_ram, max_ram, result
def get_f1(model, train_corpus, X_test, y_train, y_test):
if isinstance(model, SklearnNmf):
dense_train_corpus = matutils.corpus2dense(
train_corpus,
num_terms=model.components_.shape[1],
)
X_train = model.transform(dense_train_corpus.T)
else:
X_train = np.zeros((len(train_corpus), model.num_topics))
for bow_id, bow in enumerate(train_corpus):
for topic_id, word_count in model.get_document_topics(bow):
X_train[bow_id, topic_id] = word_count
log_reg = LogisticRegressionCV(multi_class='multinomial', cv=5)
log_reg.fit(X_train, y_train)
pred_labels = log_reg.predict(X_test)
return f1_score(y_test, pred_labels, average='micro')
def get_sklearn_topics(model, top_n=5):
topic_probas = model.components_.T
topic_probas = topic_probas / topic_probas.sum(axis=0)
sparsity = np.zeros(topic_probas.shape[1])
for row in topic_probas:
sparsity += (row == 0)
sparsity /= topic_probas.shape[1]
topic_probas = topic_probas[:, sparsity.argsort()[::-1]][:, :top_n]
token_indices = topic_probas.argsort(axis=0)[:-11:-1, :]
topic_probas.sort(axis=0)
topic_probas = topic_probas[:-11:-1, :]
topics = []
for topic_idx in range(topic_probas.shape[1]):
tokens = [
model.id2word[token_idx]
for token_idx
in token_indices[:, topic_idx]
]
topic = (
'{}*"{}"'.format(round(proba, 3), token)
for proba, token
in zip(topic_probas[:, topic_idx], tokens)
)
topic = " + ".join(topic)
topics.append((topic_idx, topic))
return topics
def get_metrics(model, test_corpus, train_corpus=None, y_train=None, y_test=None, dictionary=None):
if isinstance(model, SklearnNmf):
model.get_topics = lambda: model.components_
model.show_topics = lambda top_n: get_sklearn_topics(model, top_n)
model.id2word = dictionary
W = model.get_topics().T
dense_test_corpus = matutils.corpus2dense(
test_corpus,
num_terms=W.shape[0],
)
if isinstance(model, SklearnNmf):
H = model.transform(dense_test_corpus.T).T
else:
H = np.zeros((model.num_topics, len(test_corpus)))
for bow_id, bow in enumerate(test_corpus):
for topic_id, word_count in model.get_document_topics(bow):
H[topic_id, bow_id] = word_count
l2_norm = None
if not isinstance(model, LdaModel):
pred_factors = W.dot(H)
l2_norm = np.linalg.norm(pred_factors - dense_test_corpus)
l2_norm = round(l2_norm, 4)
f1 = None
if train_corpus and y_train and y_test:
f1 = get_f1(model, train_corpus, H.T, y_train, y_test)
f1 = round(f1, 4)
model.normalize = True
coherence = CoherenceModel(
model=model,
corpus=test_corpus,
coherence='u_mass'
).get_coherence()
coherence = round(coherence, 4)
topics = model.show_topics(5)
model.normalize = False
return dict(
coherence=coherence,
l2_norm=l2_norm,
f1=f1,
topics=topics,
)
"""
Explanation: 3. Benchmarks
Gensim NMF vs Sklearn NMF vs Gensim LDA
We'll run these three unsupervised models on the 20newsgroups dataset.
20 Newsgroups also contains labels for each document, which will allow us to evaluate the trained models on a downstream classification task, using the unsupervised document topics as input features.
Metrics
We'll track these metrics as we train and test NMF on the 20-newsgroups corpus we created above:
- train_time - time to train a model.
- mean_ram - mean RAM consumption during training.
- max_ram - maximum RAM consumption during training.
- coherence - coherence score (larger is better).
- l2_norm - L2 norm of v - Wh (less is better, not defined for LDA).
- f1 - F1 score on the task of news topic classification (larger is better).
End of explanation
"""
tm_metrics = pd.DataFrame(columns=['model', 'train_time', 'mean_ram', 'max_ram', 'coherence', 'l2_norm', 'f1', 'topics'])
y_train = [doc['target'] for doc in trainset]
y_test = [doc['target'] for doc in testset]
# LDA metrics
row = {}
row['model'] = 'lda'
row['train_time'], row['mean_ram'], row['max_ram'], lda = get_train_time_and_ram(
lambda: LdaModel(
corpus=train_corpus,
**fixed_params,
),
'lda',
1,
)
row.update(get_metrics(
lda, test_corpus, train_corpus, y_train, y_test,
))
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
# Sklearn NMF metrics
row = {}
row['model'] = 'sklearn_nmf'
train_dense_corpus_tfidf = matutils.corpus2dense(train_corpus_tfidf, len(dictionary)).T
row['train_time'], row['mean_ram'], row['max_ram'], sklearn_nmf = get_train_time_and_ram(
lambda: SklearnNmf(n_components=5, random_state=42).fit(train_dense_corpus_tfidf),
'sklearn_nmf',
1,
)
row.update(get_metrics(
sklearn_nmf, test_corpus_tfidf, train_corpus_tfidf, y_train, y_test, dictionary,
))
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
# Gensim NMF metrics
row = {}
row['model'] = 'gensim_nmf'
row['train_time'], row['mean_ram'], row['max_ram'], gensim_nmf = get_train_time_and_ram(
lambda: GensimNmf(
normalize=False,
corpus=train_corpus_tfidf,
**fixed_params
),
'gensim_nmf',
0.5,
)
row.update(get_metrics(
gensim_nmf, test_corpus_tfidf, train_corpus_tfidf, y_train, y_test,
))
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
tm_metrics.replace(np.nan, '-', inplace=True)
"""
Explanation: Run the models
End of explanation
"""
tm_metrics.drop('topics', axis=1)
"""
Explanation: Benchmark results
End of explanation
"""
def compare_topics(tm_metrics):
for _, row in tm_metrics.iterrows():
print('\n{}:'.format(row.model))
print("\n".join(str(topic) for topic in row.topics))
compare_topics(tm_metrics)
"""
Explanation: Main insights
Gensim NMF is ridiculously fast and leaves both LDA and Sklearn NMF far behind in terms of training time and quality on the downstream task (F1 score), though its coherence is the lowest of all three models.
Gensim NMF beats Sklearn NMF in RAM consumption, but L2 norm is a bit worse.
Gensim NMF consumes a bit more RAM than LDA.
Learned topics
Let's inspect the 5 topics learned by each of the three models:
End of explanation
"""
# Re-import modules from scratch, so that this Section doesn't rely on any previous cells.
import itertools
import json
import logging
import time
import os
from smart_open import smart_open
import psutil
import numpy as np
import scipy.sparse
from contextlib import contextmanager
from multiprocessing import Process
from tqdm import tqdm, tqdm_notebook
import joblib
import pandas as pd
from sklearn.decomposition import NMF as SklearnNmf
import gensim.downloader
from gensim import matutils
from gensim.corpora import MmCorpus, Dictionary
from gensim.models import LdaModel, LdaMulticore, CoherenceModel
from gensim.models.nmf import Nmf as GensimNmf
from gensim.utils import simple_preprocess
tqdm.pandas()
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
"""
Explanation: Subjectively, Gensim and Sklearn NMFs are on par with each other, while LDA looks a bit worse.
4. NMF on English Wikipedia
This section shows how to train an NMF model on a large text corpus, the entire English Wikipedia: 2.6 billion words, in 23.1 million article sections across 5 million Wikipedia articles.
The data preprocessing takes a while, and we'll be comparing multiple models, so reserve about FIXME hours and some 20 GB of disk space to go through the following notebook cells in full. You'll need gensim>=3.7.1, numpy, tqdm, pandas, psutil, joblib and sklearn.
End of explanation
"""
data = gensim.downloader.load("wiki-english-20171001")
"""
Explanation: Load the Wikipedia dump
We'll use the gensim.downloader to download a parsed Wikipedia dump (6.1 GB disk space):
End of explanation
"""
article = next(iter(data))
print("Article: %r\n" % article['title'])
for section_title, section_text in zip(article['section_titles'], article['section_texts']):
print("Section title: %r" % section_title)
print("Section text: %s…\n" % section_text[:100].replace('\n', ' ').strip())
"""
Explanation: Print the titles and sections of the first Wikipedia article, as a little sanity check:
End of explanation
"""
def wikidump2tokens(articles):
"""Stream through the Wikipedia dump, yielding a list of tokens for each article."""
for article in articles:
article_section_texts = [
" ".join([title, text])
for title, text
in zip(article['section_titles'], article['section_texts'])
]
article_tokens = simple_preprocess(" ".join(article_section_texts))
yield article_tokens
"""
Explanation: Let's create a Python generator function that streams through the downloaded Wikipedia dump and preprocesses (tokenizes, lower-cases) each article:
End of explanation
"""
if os.path.exists('wiki.dict'):
# If we already stored the Dictionary in a previous run, simply load it, to save time.
dictionary = Dictionary.load('wiki.dict')
else:
dictionary = Dictionary(wikidump2tokens(data))
# Keep only the 30,000 most frequent vocabulary terms, after filtering away terms
# that are too frequent/too infrequent.
dictionary.filter_extremes(no_below=5, no_above=0.5, keep_n=30000)
dictionary.save('wiki.dict')
"""
Explanation: Create a word-to-id mapping, in order to vectorize texts. Makes a full pass over the Wikipedia corpus, takes ~3.5 hours:
End of explanation
"""
vector_stream = (dictionary.doc2bow(article) for article in wikidump2tokens(data))
"""
Explanation: Store preprocessed Wikipedia as bag-of-words sparse matrix in MatrixMarket format
When training NMF with a single pass over the input corpus ("online"), we simply vectorize each raw text straight from the input storage:
End of explanation
"""
class RandomSplitCorpus(MmCorpus):
"""
Use the fact that MmCorpus supports random indexing, and create a streamed
corpus in shuffled order, including a train/test split for evaluation.
"""
def __init__(self, random_seed=42, testset=False, testsize=1000, *args, **kwargs):
super().__init__(*args, **kwargs)
random_state = np.random.RandomState(random_seed)
self.indices = random_state.permutation(range(self.num_docs))
test_nnz = sum(len(self[doc_idx]) for doc_idx in self.indices[:testsize])
if testset:
self.indices = self.indices[:testsize]
self.num_docs = testsize
self.num_nnz = test_nnz
else:
self.indices = self.indices[testsize:]
self.num_docs -= testsize
self.num_nnz -= test_nnz
def __iter__(self):
for doc_id in self.indices:
yield self[doc_id]
if not os.path.exists('wiki.mm'):
MmCorpus.serialize('wiki.mm', vector_stream, progress_cnt=100000)
if not os.path.exists('wiki_tfidf.mm'):
    # A TF-IDF model is needed here but was not built earlier in this section; create one.
    from gensim.models import TfidfModel
    tfidf = TfidfModel(dictionary=dictionary)
    MmCorpus.serialize('wiki_tfidf.mm', tfidf[MmCorpus('wiki.mm')], progress_cnt=100000)
# Load back the vectors as two lazily-streamed train/test iterables.
train_corpus = RandomSplitCorpus(
random_seed=42, testset=False, testsize=10000, fname='wiki.mm',
)
test_corpus = RandomSplitCorpus(
random_seed=42, testset=True, testsize=10000, fname='wiki.mm',
)
train_corpus_tfidf = RandomSplitCorpus(
random_seed=42, testset=False, testsize=10000, fname='wiki_tfidf.mm',
)
test_corpus_tfidf = RandomSplitCorpus(
random_seed=42, testset=True, testsize=10000, fname='wiki_tfidf.mm',
)
"""
Explanation: For the purposes of this tutorial though, we'll serialize ("cache") the vectorized bag-of-words vectors to disk, to wiki.mm file in MatrixMarket format. The reason is, we'll be re-using the vectorized articles multiple times, for different models for our benchmarks, and also shuffling them, so it makes sense to amortize the vectorization time by persisting the resulting vectors to disk.
So, let's stream through the preprocessed sparse Wikipedia bag-of-words matrix while storing it to disk. This step takes about 3 hours and needs 38 GB of disk space:
End of explanation
"""
if not os.path.exists('wiki_train_csr.npz'):
scipy.sparse.save_npz(
'wiki_train_csr.npz',
matutils.corpus2csc(train_corpus_tfidf, len(dictionary)).T,
)
"""
Explanation: Save preprocessed Wikipedia in scipy.sparse format
This is only needed to run the Sklearn NMF on Wikipedia, for comparison in the benchmarks below. Sklearn expects in-memory scipy sparse input, not on-the-fly vector streams. Needs additional ~2 GB of disk space.
Skip this step if you don't need the Sklearn's NMF benchmark, and only want to run Gensim's NMF.
End of explanation
"""
tm_metrics = pd.DataFrame(columns=[
'model', 'train_time', 'mean_ram', 'max_ram', 'coherence', 'l2_norm', 'topics',
])
"""
Explanation: Metrics
We'll track these metrics as we train and test NMF on the Wikipedia corpus we created above:
- train_time - time to train a model.
- mean_ram - mean RAM consumption during training.
- max_ram - maximum RAM consumption during training.
- coherence - coherence score (larger is better).
- l2_norm - L2 norm of v - Wh (less is better, not defined for LDA).
Define a dataframe in which we'll store the recorded metrics:
End of explanation
"""
params = dict(
chunksize=2000,
num_topics=50,
id2word=dictionary,
passes=1,
eval_every=10,
minimum_probability=0,
random_state=42,
)
"""
Explanation: Define common parameters, to be shared by all evaluated models:
End of explanation
"""
row = {}
row['model'] = 'gensim_nmf'
row['train_time'], row['mean_ram'], row['max_ram'], nmf = get_train_time_and_ram(
lambda: GensimNmf(normalize=False, corpus=train_corpus_tfidf, **params),
'gensim_nmf',
1,
)
print(row)
nmf.save('gensim_nmf.model')
nmf = GensimNmf.load('gensim_nmf.model')
row.update(get_metrics(nmf, test_corpus_tfidf))
print(row)
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
"""
Explanation: Wikipedia training
Train Gensim NMF model and record its metrics
End of explanation
"""
row = {}
row['model'] = 'lda'
row['train_time'], row['mean_ram'], row['max_ram'], lda = get_train_time_and_ram(
lambda: LdaModel(corpus=train_corpus, **params),
'lda',
1,
)
print(row)
lda.save('lda.model')
lda = LdaModel.load('lda.model')
row.update(get_metrics(lda, test_corpus))
print(row)
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
"""
Explanation: Train Gensim LDA and record its metrics
End of explanation
"""
row = {}
row['model'] = 'sklearn_nmf'
sklearn_nmf = SklearnNmf(n_components=50, tol=1e-2, random_state=42)
row['train_time'], row['mean_ram'], row['max_ram'], sklearn_nmf = get_train_time_and_ram(
lambda: sklearn_nmf.fit(scipy.sparse.load_npz('wiki_train_csr.npz')),
'sklearn_nmf',
10,
)
print(row)
joblib.dump(sklearn_nmf, 'sklearn_nmf.joblib')
sklearn_nmf = joblib.load('sklearn_nmf.joblib')
row.update(get_metrics(
sklearn_nmf, test_corpus_tfidf, dictionary=dictionary,
))
print(row)
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
"""
Explanation: Train Sklearn NMF and record its metrics
Careful! Sklearn loads the entire input Wikipedia matrix into RAM. Even though the matrix is sparse, you'll need FIXME GB of free RAM to run the cell below.
End of explanation
"""
tm_metrics.replace(np.nan, '-', inplace=True)
tm_metrics.drop(['topics', 'f1'], axis=1)
"""
Explanation: Wikipedia results
End of explanation
"""
def compare_topics(tm_metrics):
for _, row in tm_metrics.iterrows():
print('\n{}:'.format(row.model))
print("\n".join(str(topic) for topic in row.topics))
compare_topics(tm_metrics)
"""
Explanation: Insights
Gensim's online NMF outperforms Sklearn's NMF in terms of speed and RAM consumption:
2x faster.
Uses ~20x less memory.
About 8GB of Sklearn's RAM comes from the in-memory input matrices, which, in contrast to Gensim NMF, cannot be streamed iteratively. But even if we forget about the huge input size, Sklearn NMF uses about 2-8 GB of RAM – significantly more than Gensim NMF or LDA.
L2 norm and coherence are a bit worse.
Compared to Gensim's LDA, Gensim NMF also gives superior results:
3x faster
Coherence is worse than LDA's though.
Learned Wikipedia topics
End of explanation
"""
import logging
import time
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from numpy.random import RandomState
from sklearn import decomposition
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import NMF as SklearnNmf
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import f1_score
from sklearn.model_selection import ParameterGrid
import gensim.downloader
from gensim import matutils
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel, LdaMulticore
from gensim.models.nmf import Nmf as GensimNmf
from gensim.parsing.preprocessing import preprocess_string
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
from sklearn.base import BaseEstimator, TransformerMixin
import scipy.sparse as sparse
class NmfWrapper(BaseEstimator, TransformerMixin):
def __init__(self, bow_matrix, **kwargs):
self.corpus = sparse.csc.csc_matrix(bow_matrix)
self.nmf = GensimNmf(**kwargs)
def fit(self, X):
self.nmf.update(self.corpus)
@property
def components_(self):
return self.nmf.get_topics()
"""
Explanation: It seems all three models successfully learned useful topics from the Wikipedia corpus.
5. And now for something completely different: Face decomposition from images
The NMF algorithm in Gensim is optimized for extremely large (sparse) text corpora, but it will also work on vectors from other domains!
Let's compare our model to other factorization algorithms on dense image vectors and check out the results.
To do that we'll patch sklearn's Faces Dataset Decomposition.
Sklearn wrapper
Let's create a Scikit-learn wrapper in order to run Gensim NMF on images.
End of explanation
"""
gensim.models.nmf.logger.propagate = False
"""
============================
Faces dataset decompositions
============================
This example applies to :ref:`olivetti_faces` different unsupervised
matrix decomposition (dimension reduction) methods from the module
:py:mod:`sklearn.decomposition` (see the documentation chapter
:ref:`decompositions`) .
"""
print(__doc__)
# Authors: Vlad Niculae, Alexandre Gramfort
# License: BSD 3 clause
n_row, n_col = 2, 3
n_components = n_row * n_col
image_shape = (64, 64)
rng = RandomState(0)
# #############################################################################
# Load faces data
dataset = fetch_olivetti_faces(shuffle=True, random_state=rng)
faces = dataset.data
n_samples, n_features = faces.shape
# global centering
faces_centered = faces - faces.mean(axis=0)
# local centering
faces_centered -= faces_centered.mean(axis=1).reshape(n_samples, -1)
print("Dataset consists of %d faces" % n_samples)
def plot_gallery(title, images, n_col=n_col, n_row=n_row):
plt.figure(figsize=(2. * n_col, 2.26 * n_row))
plt.suptitle(title, size=16)
for i, comp in enumerate(images):
plt.subplot(n_row, n_col, i + 1)
vmax = max(comp.max(), -comp.min())
plt.imshow(comp.reshape(image_shape), cmap=plt.cm.gray,
interpolation='nearest',
vmin=-vmax, vmax=vmax)
plt.xticks(())
plt.yticks(())
plt.subplots_adjust(0.01, 0.05, 0.99, 0.93, 0.04, 0.)
# #############################################################################
# List of the different estimators, whether to center and transpose the
# problem, and whether the transformer uses the clustering API.
estimators = [
('Eigenfaces - PCA using randomized SVD',
decomposition.PCA(n_components=n_components, svd_solver='randomized',
whiten=True),
True),
('Non-negative components - NMF (Sklearn)',
decomposition.NMF(n_components=n_components, init='nndsvda', tol=5e-3),
False),
('Non-negative components - NMF (Gensim)',
NmfWrapper(
bow_matrix=faces.T,
chunksize=3,
eval_every=400,
passes=2,
id2word={idx: idx for idx in range(faces.shape[1])},
num_topics=n_components,
minimum_probability=0,
random_state=42,
),
False),
('Independent components - FastICA',
decomposition.FastICA(n_components=n_components, whiten=True),
True),
('Sparse comp. - MiniBatchSparsePCA',
decomposition.MiniBatchSparsePCA(n_components=n_components, alpha=0.8,
n_iter=100, batch_size=3,
random_state=rng),
True),
('MiniBatchDictionaryLearning',
decomposition.MiniBatchDictionaryLearning(n_components=15, alpha=0.1,
n_iter=50, batch_size=3,
random_state=rng),
True),
('Cluster centers - MiniBatchKMeans',
MiniBatchKMeans(n_clusters=n_components, tol=1e-3, batch_size=20,
max_iter=50, random_state=rng),
True),
('Factor Analysis components - FA',
decomposition.FactorAnalysis(n_components=n_components, max_iter=2),
True),
]
# #############################################################################
# Plot a sample of the input data
plot_gallery("First centered Olivetti faces", faces_centered[:n_components])
# #############################################################################
# Do the estimation and plot it
for name, estimator, center in estimators:
print("Extracting the top %d %s..." % (n_components, name))
t0 = time.time()
data = faces
if center:
data = faces_centered
estimator.fit(data)
train_time = (time.time() - t0)
print("done in %0.3fs" % train_time)
if hasattr(estimator, 'cluster_centers_'):
components_ = estimator.cluster_centers_
else:
components_ = estimator.components_
# Plot an image representing the pixelwise variance provided by the
# estimator e.g its noise_variance_ attribute. The Eigenfaces estimator,
# via the PCA decomposition, also provides a scalar noise_variance_
# (the mean of pixelwise variance) that cannot be displayed as an image
# so we skip it.
if (hasattr(estimator, 'noise_variance_') and
estimator.noise_variance_.ndim > 0): # Skip the Eigenfaces case
plot_gallery("Pixelwise variance",
estimator.noise_variance_.reshape(1, -1), n_col=1,
n_row=1)
plot_gallery('%s - Train time %.1fs' % (name, train_time),
components_[:n_components])
plt.show()
"""
Explanation: Modified face decomposition notebook
Adapted from the excellent Scikit-learn tutorial (BSD license):
Turn off the logger due to large number of info messages during training
End of explanation
"""
c22n/ion-channel-ABC | docs/examples/getting_started.ipynb | gpl-3.0 | # Importing standard libraries
import numpy as np
import pandas as pd
"""
Explanation: Getting Started
This notebook gives a whirlwind overview of the ionchannelABC library and can be used to verify a first installation. The notebook follows the workflow for parameter inference of a generic T-type Ca2+ channel model.
It is recommended to have some understanding of ion channel models, voltage clamp protocols and fundamentals of the Approximate Bayesian Computation algorithm before working through this notebook. Wikipedia and the pyabc documentation will likely be sufficient.
End of explanation
"""
from ionchannelABC import IonChannelModel
icat = IonChannelModel('icat',
'models/Generic_iCaT.mmt',
vvar='membrane.V',
logvars=['environment.time',
'icat.G_CaT',
'icat.i_CaT'])
"""
Explanation: Setting up an ion channel model and experiments
First we need to load a cell model. We use IonChannelModel, a wrapper around the myokit simulation functionality that handles compilation of the model for use with the pyabc library. The model loads an MMT file, which is a description of the mathematics behind the opening/closing of activation/inactivation gates in myokit format (see https://myokit.readthedocs.io/syntax/model.html). We also need to specify the independent variable name in the MMT file (generally transmembrane voltage) and a list of variables we want to log from simulations.
End of explanation
"""
import data.icat.data_icat as data
from ionchannelABC import (Experiment,
ExperimentData,
ExperimentStimProtocol)
vsteps, peak_curr, errs, N = data.IV_Nguyen()
nguyen_data = ExperimentData(x=vsteps, y=peak_curr,
N=N, errs=errs,
                            err_type='SEM') # this flag is currently unused but may change in a future version
"""
Explanation: Now that we have loaded a cell model, we need to specify how we will test it to compare with experimental data. We use the ExperimentData and ExperimentStimProtocol classes to specify the experimental dataset and experimental protocol respectively. These are then combined in the Experiment class. The data is specified in a separate .py file with functions to return the x, y and, if available, error bars extracted from graphs.
We show an example using T-type Ca2+ channel peak current density at a range of activating voltage steps in HL-1 myocytes from Nguyen et al, STIM1 participates in the contractile rhythmicity of HL-1 cells by moderating T-type Ca(2+) channel activity, 2013.
End of explanation
"""
stim_times = [5000, 300, 500] # describes the course of one voltage step in time
stim_levels = [-75, vsteps, -75] # each entry of levels corresponds to the time above
"""
Explanation: The stimulation protocol is defined from the experimental methods of the data source. It should be replicated as closely as possible to reproduce the experimental conditions. This example shows a standard 'I-V curve' testing peak current density at different voltage steps from a resting potential. The transmembrane potential is held at a resting potential of -75mV for sufficient time for the channel to reach its steady-state (we assume 5000ms here), it is stepped to each test potential for 300ms and then returned to the resting potential.
End of explanation
"""
def max_icat(data):
return max(data[0]['icat.i_CaT'], key=abs)
nguyen_protocol = ExperimentStimProtocol(stim_times,
stim_levels,
measure_index=1, # index from `stim_times` and `stim_levels`
measure_fn=max_icat)
"""
Explanation: Having defined what we are doing with the model, we need to define what we do with the simulation data and which part of the protocol (i.e. index of stim_times and stim_levels) we are interested in extracting the data from. The simulation will return a list of pandas.Dataframe containing each of logvars defined in the ion channel model declaration. Here, we want to reduce this data to just the peak current density at the step potential (i.e. index 1 in stim_times and stim_levels). Our list will only have length 1 because we are only interested in data from this point in the protocol, but more complex protocols may return longer lists.
End of explanation
"""
nguyen_conditions = dict(Ca_o=5000, # extracellular Ca2+ concentration of 5000uM
Ca_subSL=0.2, # sub-sarcolemmal (i.e. intracellular) Ca2+ concentration of 0.2uM
T=295) # experiment temperature of 295K
nguyen_experiment = Experiment(nguyen_protocol, nguyen_data, nguyen_conditions)
"""
Explanation: The final key part of defining the experiment is the experimental conditions, which includes extra/intracellular ion concentrations and temperature reported in the data source. Here, the dictionary keys refer to variables in the [membrane] field of the MMT ion channel definition file.
We can then combine the previous steps in a single Experiment.
End of explanation
"""
icat.add_experiments([nguyen_experiment])
test = icat.sample({}) # empty dictionary as we are not overwriting any of the parameters in the model definition yet
"""
Explanation: We then add the experiment to the IonChannelModel defined previously. We can test it runs using the sample method with default parameters to debug any problems at this stage.
End of explanation
"""
import matplotlib.pyplot as plt
import seaborn as sns
from ionchannelABC import plot_sim_results
%matplotlib inline
plot_sim_results(test, obs=icat.get_experiment_data())
"""
Explanation: The plot_sim_results function makes it easy to plot the output of simulations.
End of explanation
"""
from channels.icat_generic import icat as model
test = model.sample({})
plot_sim_results(test, obs=model.get_experiment_data())
"""
Explanation: Clearly the default parameters in the MMT file are not quite right, but we are able to run the simulation and compare to the results.
In practice, the ion channel setup and model experiments can be defined in a separate .py file and loaded in a single step, which we will do below for the next step. Examples are contained in the channel examples folder. By plotting, we can see that 6 separate experiments have been defined.
End of explanation
"""
from pyabc import (RV, Distribution) # we use two classes from the pyabc library for this definition
limits = dict(g_CaT=(0, 2), # these parameter keys are specific to the icat model being investigated
v_offset=(0, 500),
Vhalf_b=(-100, 100),
k_b=(0, 10),
c_bb=(0, 10),
c_ab=(0, 100),
sigma_b=(0, 100),
Vmax_b=(-100, 100),
Vhalf_g=(-100, 100),
k_g=(-10, 0),
c_bg=(0, 50),
c_ag=(0, 500),
sigma_g=(0, 100),
Vmax_g=(-100, 100))
prior = Distribution(**{key: RV("uniform", a, b - a)
for key, (a,b) in limits.items()})
"""
Explanation: Setting up parameter inference for the defined model
Next we need to specify which parameters in our ion channel model should be varied during the parameter inference step. We do this by defining a prior distribution for each parameter in the MMT file we want to vary. The width of the prior distribution should be sufficient to reduce bias while incorporating specific knowledge about the model structure (i.e. if a parameter should be defined positive or in a reasonable range). A good rule-of-thumb is to use an order of magnitude around a parameter value in a previously published model of the channel, but the width can be increased in future runs of the ABC algorithm.
End of explanation
"""
from ionchannelABC import (IonChannelDistance, plot_distance_weights)
measurements = model.get_experiment_data()
obs = measurements.to_dict()['y']
exp = measurements.to_dict()['exp']
errs = measurements.to_dict()['errs']
distance_fn = IonChannelDistance(obs=obs, exp_map=exp, err_bars=errs, err_th=0.1)
plot_distance_weights(model, distance_fn)
"""
Explanation: We can now define additional requirements for the ABC-SMC algorithm. We need a distance function to measure how well our model can approximate experimental data.
The IonChannelDistance class implements a weighted Euclidean distance function. The weight assigned to each data point accounts for the separate experiments (i.e. we do not want to over-fit to the behaviour of one experiment just because it has a greater number of data points), the scale of the dependent variable in each experiment, and the size of the error bars in the experimental data (i.e. we prefer the model to reproduce more closely those data points with a lower level of uncertainty).
We can see how this corresponds to the data we are using in this example by plotting the data points using plot_distance_weights.
End of explanation
"""
import tempfile, os
db_path = ("sqlite:///" +
os.path.join(tempfile.gettempdir(), "example.db"))
print(db_path)
"""
Explanation: We also need to assign a database file for the pyabc implementation of the ABC-SMC algorithm to store information about the ABC particles at intermediate steps as it runs. A temporary location with sufficient storage is a good choice as these files can become quite large for long ABC runs. This can be defined by setting the $TMPDIR environment variable as described in the installation instructions.
The "sqlite:///" at the start of the path is necessary for database access.
End of explanation
"""
import logging
logging.basicConfig()
abc_logger = logging.getLogger('ABC')
abc_logger.setLevel(logging.DEBUG)
eps_logger = logging.getLogger('Epsilon')
eps_logger.setLevel(logging.DEBUG)
cv_logger = logging.getLogger('CV Estimation')
cv_logger.setLevel(logging.DEBUG)
"""
Explanation: Running the ABC algorithm
We are now ready to run parameter inference on our ion channel model.
Before starting the algorithm, it is good practice to enable logging options to help any debugging which may be necessary. The default options below should be sufficient.
End of explanation
"""
from pyabc import ABCSMC
from pyabc.epsilon import MedianEpsilon
from pyabc.populationstrategy import ConstantPopulationSize
from pyabc.sampler import MulticoreEvalParallelSampler
from ionchannelABC import (ion_channel_sum_stats_calculator,
IonChannelAcceptor,
IonChannelDistance,
EfficientMultivariateNormalTransition)
abc = ABCSMC(models=model,
parameter_priors=prior,
distance_function=IonChannelDistance(
obs=obs,
exp_map=exp,
err_bars=errs,
err_th=0.1),
population_size=ConstantPopulationSize(1000),
summary_statistics=ion_channel_sum_stats_calculator,
transitions=EfficientMultivariateNormalTransition(),
eps=MedianEpsilon(),
sampler=MulticoreEvalParallelSampler(n_procs=12),
acceptor=IonChannelAcceptor())
"""
Explanation: ABCSMC from the pyabc library is the main class used for the algorithm. It initialises with a number of options which are well described in the pyabc documentation. Note that we initialise some of the passed objects at this stage rather than passing in pre-initialised variables, particularly the distance function.
A brief description is given below to key options:
* population_size: Number of particles to use in the ABC algorithm. pyabc ConstantPopulationSize and AdaptivePopulationSize have been tested. Unless adaptive population size is explicitly required, it is recommended to use a constant particle population large enough for the size of the model being tested, to avoid parameter distributions collapsing on single point estimates. For this example we use 1000; up to 5000 particles have been tested on more complex models. Larger particle populations will increase algorithm run times.
* summary_statistics: Function to convert raw output from the model into an appropriate format for calculating distance. Use the custom implementation of ion_channel_sum_stats_calculator.
* transitions: pyabc Transition object for perturbation of particles at each algorithm step. Use the custom implementation EfficientMultivariateNormalTransition.
* eps: pyabc Epsilon object defining how acceptance threshold is adapted over iterations. Generally use MedianEpsilon for the median distance of the previous iterations accepted particles.
* sampler: Can be used to specify the number of parallel processes to initiate. Only pyabc MulticoreEvalParallelSampler has been tested. If on local machine, initiate with default parameters. If using computing cluster, the parameter n_procs can specify how many processes to initiate (12 is a good starting point). Warning: increasing the number of processes will not necessarily speed up the algorithm.
* acceptor: pyabc Acceptor object decides which particles to allow to pass to the next iteration. Use custom implementation IonChannelAcceptor.
End of explanation
"""
from pyabc import History
history = History('sqlite:///results/icat-generic/hl-1_icat-generic.db')
history.all_runs()
df, w = history.get_distribution(m=0)
"""
Explanation: The algorithm is initialised and run as specified in the pyabc documentation. These lines are not set to run, as the algorithm can take several hours to days to finish for large models. The following steps will use a previously run example.
abc_id = abc.new(db_path, obs)
history = abc.run(minimum_epsilon=0.1,
max_nr_populations=20,
min_acceptance_rate=0.01)
Analysing the results
Once the ABC run is complete, we have a number of custom plotting function to analyse the History object output from the running the ABC algorithm.
A compressed example database file can be found here. On Linux, this can be extracted to the original .db format using tar -xzvf hl-1_icat-generic.tgz.
Firstly, we can load a previously run example file.
End of explanation
"""
evolution = history.get_all_populations()
sns.relplot(x='t', y='epsilon', size='samples', data=evolution[evolution.t>=0])
"""
Explanation: First we can check the convergence of the epsilon value over iterations of the ABC algorithm.
End of explanation
"""
from ionchannelABC import plot_parameters_kde
plot_parameters_kde(df, w, limits, aspect=12, height=0.8)
"""
Explanation: We can check the posterior distribution of parameters for this model using the plot_parameters_kde function. This can highlight any parameters which were unidentifiable given the available experimental data.
End of explanation
"""
n_samples = 10 # increasing this number will produce a better approximation to the true output, recommended: >= 100
# we keep 10 to keep running time low
parameter_samples = df.sample(n=n_samples, weights=w, replace=True)
parameter_samples.head()
parameter_samples = parameter_samples.to_dict(orient='records')
samples = pd.DataFrame({})
for i, theta in enumerate(parameter_samples):
output = model.sample(pars=theta, n_x=50) # n_x changes the resolution of the independent variable
# sometimes this can cause problems with output tending to zero/inf at
# (e.g.) exact reversal potential of the channel model
output['sample'] = i
output['distribution'] = 'posterior'
samples = samples.append(output, ignore_index=True)
g = plot_sim_results(samples, obs=measurements)
xlabels = ["voltage, mV", "voltage, mV", "voltage, mV", "time, ms", "time, ms","voltage, mV"]
ylabels = ["current density, pA/pF", "activation", "inactivation", "recovery", "normalised current","current density, pA/pF"]
for ax, xl in zip(g.axes.flatten(), xlabels):
ax.set_xlabel(xl)
for ax, yl in zip(g.axes.flatten(), ylabels):
ax.set_ylabel(yl)
"""
Explanation: We can generate some samples of model output using the posterior distribution of parameters to observe the effect on model output. We first create a sampling dataset then use the plot_sim_results function.
End of explanation
"""
peak_curr_mean = np.mean(samples[samples.exp==0].groupby('sample').min()['y'])
peak_curr_std = np.std(samples[samples.exp==0].groupby('sample').min()['y'])
print('Peak current density: {0:4.2f} +/- {1:4.2f} pA/pF'.format(peak_curr_mean, peak_curr_std))
"""
Explanation: In this example, we see low variation of the model output around the experimental data across experiments. However, are all parameters well identified? (Consider the KDE posterior parameter distribution plot).
Finally, if we want to output quantitative measurements of the channel model we can interrogate our sampled dataset. For example, we can find the peak current density from the first experiment.
End of explanation
"""
peak_curr_V_indices = samples[samples.exp==0].groupby('sample').idxmin()['y']
peak_curr_V_mean = np.mean(samples.iloc[peak_curr_V_indices]['x'])
peak_curr_V_std = np.std(samples.iloc[peak_curr_V_indices]['x'])
print('Voltage of peak current density: {0:4.2f} +/- {1:4.2f} mV'.format(peak_curr_V_mean, peak_curr_V_std))
"""
Explanation: Or if we are interested in the voltage at which the peak current occurs.
End of explanation
"""
distance_fn = IonChannelDistance(obs=obs,
exp_map=exp,
err_bars=errs,
err_th=0.1)
parameters = ['icat.'+k for k in limits.keys()]
print(parameters)
"""
Explanation: That concludes the main portion of this introduction. Further functionality is included below. For further examples of using the library, see the additional notebooks included for multiple HL-1 cardiac myocyte ion channels in the docs/examples folder.
Extra: Parameter sensitivity
The ionchannelABC library also includes functionality to test the sensitivity of a model to its parameters. This could be used to test which parameters we may expect to be unidentifiable in the ABC algorithm and would generally be carried out before the ABC algorithm is run.
The parameter sensitivity analysis is based on Sobie et al, Parameter sensitivity analysis in electrophysiological models using multivariable regression, 2009.
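The essence of that regression-based approach can be sketched as follows: perturb the parameters around their nominal values, evaluate the model output (or a distance) for each sample, and regress the outputs on the perturbations - the magnitude of each regression coefficient then indicates sensitivity. This is a toy illustration with a made-up stand-in model, not the library's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_model_distance(scale_factors):
    # Stand-in for running the ion channel model and computing a distance;
    # by construction, parameter 0 is far more influential than parameter 1.
    return 3.0 * scale_factors[0] - 0.5 * scale_factors[1]

n_samples, sigma = 500, 0.05
# Log-normal multiplicative perturbations around the nominal parameter values.
log_perturb = rng.normal(0.0, sigma, size=(n_samples, 2))
outputs = np.array([toy_model_distance(np.exp(p)) for p in log_perturb])
# Multivariable linear regression (with intercept) of output on perturbations.
design = np.column_stack([log_perturb, np.ones(n_samples)])
beta, *_ = np.linalg.lstsq(design, outputs, rcond=None)
# |beta[0]| >> |beta[1]|: the model is much more sensitive to parameter 0.
```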
First, we need to define the distance function used and a list of the full name (including field in the MMT file) of parameters being passed to ABC.
End of explanation
"""
from ionchannelABC import (calculate_parameter_sensitivity,
plot_parameter_sensitivity,
plot_regression_fit)
fitted, regression_fit, r2 = calculate_parameter_sensitivity(
model,
parameters,
distance_fn,
sigma=0.05, # affects how far parameters are perturbed from original values to test sensitivity
n_samples=20) # set to reduced value for demonstration, typically around 1000 in practical use
"""
Explanation: The calculate_parameter_sensitivity function carries out the calculations, and the output can be analysed using the plot_parameter_sensitivity and plot_regression_fit functions.
End of explanation
"""
plot_parameter_sensitivity(fitted, plot_cutoff=0.05)
plot_regression_fit(regression_fit, r2)
"""
Explanation: See Sobie et al, 2009 for an interpretation of the beta values and goodness-of-fit plots. In summary, a high beta value indicates the model has high sensitivity to changes in that parameter for a particular experiment protocol. However, this is conditional on a reasonable goodness-of-fit indicating the multivariable regression model is valid within this small perturbation space.
End of explanation
"""
|
sandrofsousa/Resolution | Pysegreg/Pysegreg_notebook_distance.ipynb | mit | # Imports
import numpy as np
np.seterr(all='ignore')
import pandas as pd
from decimal import Decimal
import time
# Import python script with Pysegreg functions
from segregationMetrics import Segreg
# Instantiate segreg as cc
cc = Segreg()
"""
Explanation: Pysegreg run - Distance based
Instructions
For fast processing, you can just change the following variables before running:
* path/name at Input file cell (select the file you want to use)
* bandwidth and weight method at compute population intensity cell
* file name in the variable fname at section Save results to a local file (the file where you want to save results)
make sure you don't use a name that is already in use or the file will be replaced
With the previous steps in mind, just click on Cell menu and select Run All
End of explanation
"""
cc.readAttributesFile('/Users/sandrofsousa/Downloads/valid/Segreg sample.csv')
"""
Explanation: Input file
Attention to the new data structure for input !!!
Change your input file with path/name in the cell below to be processed.
Data Format
ID | X | Y | group 1 | group 2 | group n
End of explanation
"""
start_time = time.time()
cc.locality = cc.cal_localityMatrix(bandwidth=700, weightmethod=1)
print("--- %s seconds for processing ---" % (time.time() - start_time))
"""
Explanation: Measures
Compute Population Intensity
For a non-spatial result, comment out the function call at: "cc.locality = ..."
to comment out a line of code, use # at the beginning of the line
Distance matrix is calculated at this step. Change the parameters for the population
intensity according to your needs. Parameters are:
bandwidth - is set to be 5000m by default, you can change it here
weightmethod - 1 for gaussian, 2 for bi-square and empty for moving window
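As a rough intuition for what these weight methods do, here are illustrative kernel formulas (the exact expressions used by Pysegreg may differ):

```python
import math

def kernel_weight(d, bandwidth, weightmethod=1):
    # Illustrative spatial kernels: 1 = gaussian, 2 = bi-square,
    # anything else = moving window (uniform weight within the bandwidth).
    if weightmethod == 1:
        return math.exp(-0.5 * (d / bandwidth) ** 2)
    if weightmethod == 2:
        return (1 - (d / bandwidth) ** 2) ** 2 if d < bandwidth else 0.0
    return 1.0 if d < bandwidth else 0.0

# Weights fall off with distance; beyond the bandwidth the bi-square is 0.
w_near = kernel_weight(350.0, bandwidth=700, weightmethod=2)   # 0.5625
w_far = kernel_weight(1400.0, bandwidth=700, weightmethod=2)   # 0.0
```

In all three cases, nearby locations contribute more to the population intensity of a point than distant ones, and the bandwidth sets the spatial scale of that smoothing.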
End of explanation
"""
# np.set_printoptions(threshold=np.inf)
# print('Location (coordinates from data):\n', cc.location)
# print()
# print('Population intensity for all groups:\n', cc.locality)
'''To select locality for a specific line (validation), use the index in[x,:]'''
# where x is the number of the desired line
# cc.locality[5,:]
"""
Explanation: For validation only
Remove the comment (#) if you want to see the values and validate
End of explanation
"""
diss_local = cc.cal_localDissimilarity()
diss_local = np.asmatrix(diss_local).transpose()
"""
Explanation: Compute local Dissimilarity
End of explanation
"""
diss_global = cc.cal_globalDissimilarity()
"""
Explanation: Compute global Dissimilarity
End of explanation
"""
expo_local = cc.cal_localExposure()
"""
Explanation: Compute local Exposure/Isolation
expo is a matrix of n_group * n_group therefore, exposure (m,n) = rs[m,n]
the columns are exporsure m1 to n1, to n2... n5, m2 to n1....n5
- m,m = isolation index of group m
- m,n = expouse index of group m to n
Result of all combinations of local group exposure/isolation
To select a specific line of m to n, use the index [x]
Each value is a result of the combinations m,n
e.g.: g1xg1, g1xg2, g2xg1, g2xg2 = isolation, exposure, exposure, isolation
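For example, with two groups the four values in a row can be read like this (made-up numbers, purely to illustrate the indexing):

```python
import numpy as np

n_group = 2
# One locality's row: columns ordered g1xg1, g1xg2, g2xg1, g2xg2.
expo_row = np.array([0.7, 0.3, 0.4, 0.6])  # made-up values
as_matrix = expo_row.reshape(n_group, n_group)
isolation_g1 = as_matrix[0, 0]        # m == n: isolation of group 1
exposure_g1_to_g2 = as_matrix[0, 1]   # m != n: exposure of group 1 to group 2
```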
End of explanation
"""
expo_global = cc.cal_globalExposure()
"""
Explanation: Compute global Exposure/Isolation
End of explanation
"""
entro_local = cc.cal_localEntropy()
"""
Explanation: Compute local Entropy
End of explanation
"""
entro_global = cc.cal_globalEntropy()
"""
Explanation: Compute global Entropy
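The entropy score behind this measure can be illustrated as follows (the standard multigroup entropy E = sum_m p_m * ln(1/p_m); Pysegreg's exact normalisation may differ):

```python
import math

def entropy_score(proportions):
    # E = sum_m p_m * ln(1/p_m); largest when group shares are equal,
    # zero when one group holds the whole population.
    return sum(p * math.log(1.0 / p) for p in proportions if p > 0)

e_even = entropy_score([0.5, 0.5])  # = ln(2), the two-group maximum
e_skew = entropy_score([0.9, 0.1])  # lower: the population is less mixed
```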
End of explanation
"""
idxh_local = cc.cal_localIndexH()
"""
Explanation: Compute local Index H
End of explanation
"""
idxh_global = cc.cal_globalIndexH()
"""
Explanation: Compute global Index H
End of explanation
"""
# Concatenate local values from measures
if len(cc.locality) == 0:
results = np.concatenate((expo_local, diss_local, entro_local, idxh_local), axis=1)
else:
results = np.concatenate((cc.locality, expo_local, diss_local, entro_local, idxh_local), axis=1)
# Concatenate the results with original data
output = np.concatenate((cc.tract_id, cc.attributeMatrix, results),axis = 1)
names = ['id','x','y']
for i in range(cc.n_group):
names.append('group_'+str(i))
if len(cc.locality) == 0:
for i in range(cc.n_group):
for j in range(cc.n_group):
if i == j:
names.append('iso_' + str(i) + str(j))
else:
names.append('exp_' + str(i) + str(j))
names.append('dissimil')
names.append('entropy')
names.append('indexh')
else:
for i in range(cc.n_group):
names.append('intens_'+str(i))
for i in range(cc.n_group):
for j in range(cc.n_group):
if i == j:
names.append('iso_' + str(i) + str(j))
else:
names.append('exp_' + str(i) + str(j))
names.append('dissimil')
names.append('entropy')
names.append('indexh')
"""
Explanation: Results
Prepare data for saving on a local file
End of explanation
"""
fname = "/Users/sandrofsousa/Downloads/valid/result"
output = pd.DataFrame(output, columns=names)
output.to_csv("%s_local.csv" % fname, sep=",", index=False)
with open("%s_global.txt" % fname, "w") as f:
f.write('Global dissimilarity: ' + str(diss_global))
f.write('\nGlobal entropy: ' + str(entro_global))
f.write('\nGlobal Index H: ' + str(idxh_global))
f.write('\nGlobal isolation/exposure: \n')
f.write(str(expo_global))
# code to save data as a continuous string - Marcus request for R use
# names2 = ['dissimil', 'entropy', 'indexh']
# for i in range(cc.n_group):
# for j in range(cc.n_group):
# if i == j:
# names2.append('iso_' + str(i) + str(j))
# else:
# names2.append('exp_' + str(i) + str(j))
# values = [diss_global, entro_global, idxh_global]
# for i in expo_global: values.append(i)
# file2 = "/Users/sandrofsousa/Downloads/"
# with open("%s_global.csv" % file2, "w") as f:
# f.write(', '.join(names2) + '\n')
# f.write(', '.join(str(i) for i in values))
"""
Explanation: Save Local and global results to a file
The parameter fname corresponds to the folder/filename; change it as you want.
To save on a diferent folder, use the "/" to pass the directory.
The local results will be saved using the name defined and adding the "_local" postfix to file's name.
The global results are automatically saved using the same name with the addition of the postfix "_globals".
It's recommended to save on a different folder from the code, e.g.: a folder named result.
The fname value should be changed for any new executions or the local file will be overwritten!
End of explanation
"""
|
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies | ex27-Wind Rose.ipynb | mit | %matplotlib inline
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import matplotlib.cm as cm
from math import pi
from windrose import WindroseAxes, WindAxes, plot_windrose
from pylab import rcParams
rcParams['figure.figsize'] = 6, 6
"""
Explanation: Wind Rose
A wind rose is a graphic tool used by meteorologists to give a succinct view of how wind speed and direction are typically distributed at a particular location. Historically, wind roses were predecessors of the compass rose, as there was no differentiation between a cardinal direction and the wind which blew from such a direction. Using a polar coordinate system of gridding, the frequency of winds over a time period is plotted by wind direction, with color bands showing wind speed ranges. The direction of the longest spoke shows the wind direction with the greatest frequency (https://en.wikipedia.org/wiki/Wind_rose).
This notebook is extracted from the python package windrose with slight modifications. Please refer to the original windrose’s documentation for more information.
1. Load all needed libraries
End of explanation
"""
df = pd.read_csv("data/sample_wind_poitiers.csv", parse_dates=['Timestamp'])
df = df.set_index('Timestamp')
df.head()
df.index[0:5]
"""
Explanation: 2. Load wind time series data
End of explanation
"""
df['speed_x'] = df['speed'] * np.sin(df['direction'] * pi / 180.0)
df['speed_y'] = df['speed'] * np.cos(df['direction'] * pi / 180.0)
"""
Explanation: 2.1 Convert wind from speed and direction to u-wind and v-wind
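A quick sanity check on the convention used in the code above (direction in degrees, sin for the x component and cos for the y component):

```python
import math

speed, direction = 10.0, 90.0
speed_x = speed * math.sin(direction * math.pi / 180.0)
speed_y = speed * math.cos(direction * math.pi / 180.0)
# At direction 90 the wind vector points entirely along x (speed_y ~ 0).
```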
End of explanation
"""
fig, ax = plt.subplots()
x0, x1 = ax.get_xlim()
y0, y1 = ax.get_ylim()
ax.set_aspect('equal')
_ = df.plot(kind='scatter', x='speed_x', y='speed_y', alpha=0.25, ax=ax)
Vw = 60
_ = ax.set_xlim([-Vw, Vw])
_ = ax.set_ylim([-Vw, Vw])
"""
Explanation: 3. Visualization
3.1 Have a quick look
End of explanation
"""
ax = WindroseAxes.from_ax()
ax.bar(df.direction.values, df.speed.values, normed=True, bins=np.arange(0.01,8,1), cmap=cm.RdYlBu_r, lw=3)
ax.set_legend()
"""
Explanation: 3.2 Stacked histogram with normed (displayed in percent)
End of explanation
"""
_ = plot_windrose(df, kind='contour', normed=True, bins=np.arange(0.01,8,1), cmap=cm.RdYlBu_r, lw=3)
"""
Explanation: 3.3 Contour representation with normed (displayed in percent)
End of explanation
"""
bins = np.arange(0,30+1,1)
bins = bins[1:]
bins
_ = plot_windrose(df, kind='pdf', bins=np.arange(0.01,30,1))
data = np.histogram(df['speed'], bins=bins)[0]
data
"""
Explanation: 3.4 Probability density function (pdf) and fitting Weibull distribution
End of explanation
"""
def plot_month(df, t_year_month, *args, **kwargs):
by = 'year_month'
df[by] = df.index.map(lambda x: x.year*100+x.month)
df_month = df[df[by] == t_year_month[0]*100+t_year_month[1]]
ax = plot_windrose(df_month, *args, **kwargs)
return ax
"""
Explanation: 3.5 Wind rose for a specific month
End of explanation
"""
plot_month(df, (2014, 7), kind='contour', normed=True,bins=np.arange(0, 10, 1), cmap=cm.RdYlBu_r)
"""
Explanation: 3.5.1 July 2014
End of explanation
"""
plot_month(df, (2014, 8), kind='contour', normed=True,bins=np.arange(0, 10, 1), cmap=cm.RdYlBu_r)
"""
Explanation: 3.5.2 August 2014
End of explanation
"""
plot_month(df, (2014, 9), kind='contour', normed=True, bins=np.arange(0, 10, 1), cmap=cm.RdYlBu_r)
"""
Explanation: 3.5.3 September 2014
End of explanation
"""
|
andreyf/machine-learning-examples | linear_models/Logistic Gradient Descent.ipynb | gpl-3.0 | from sklearn import datasets
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set(style='ticks', palette='Set2')
import pandas as pd
import numpy as np
import math
from __future__ import division
data = datasets.load_iris()
X = data.data[:100, :2]
y = data.target[:100]
X_full = data.data[:100, :]
setosa = plt.scatter(X[:50,0], X[:50,1], c='b')
versicolor = plt.scatter(X[50:,0], X[50:,1], c='r')
plt.xlabel("Sepal Length")
plt.ylabel("Sepal Width")
plt.legend((setosa, versicolor), ("Setosa", "Versicolor"))
sns.despine()
"""
Explanation: Logistic Regression and Gradient Descent
Logistic regression is an excellent tool to know for classification problems. Classification problems are problems where you are trying to classify observations into groups. To make our examples more concrete, we will consider the Iris dataset. The iris dataset contains 4 attributes for 3 types of iris plants. The purpose is to classify which plant you have just based on the attributes. To simplify things, we will only consider 2 attributes and 2 classes. Here are the data visually:
End of explanation
"""
x_values = np.linspace(-5, 5, 100)
y_values = [1 / (1 + math.e**(-x)) for x in x_values]
plt.plot(x_values, y_values)
plt.axhline(.5)
plt.axvline(0)
sns.despine()
"""
Explanation: Wow! This is nice - the two classes are completely separate. Now this obviously is a toy example, but let's now think about how to create a learning algorithm to give us the probability that given Sepal Width and Sepal Length the plant is Setosa. So if our algorithm returns .9 we place 90% probability on the plant being Setosa and 10% probability on it being Versicolor.
Logisitic Function
So we want to return a value between 0 and 1 to make sure we are actually representing a probability. To do this we will make use of the logistic function. The logistic function mathematically looks like this: $$y = \frac{1}{1 + e^{-x}}$$ Let's take a look at the plot:
End of explanation
"""
from IPython.display import Image
Image(url="http://www.me.utexas.edu/~jensen/ORMM/models/unit/nonlinear/subunits/terminology/graphics/convex1.gif")
"""
Explanation: You can see why this is a great function for a probability measure. The y-value represents the probability and only ranges between 0 and 1. Also, for an x value of zero you get a .5 probability and as you get more positive x values you get a higher probability and more negative x values a lower probability.
Make use of your data
Okay - so this is nice, but how the heck do we use it? Well we know we have two attributes - Sepal length and Sepal width - that we need to somehow use in our logistic function. One pretty obvious thing we could do is:
$$x = \beta_{0} + \beta_{1}SW + \beta_{2}SL $$
Where SW is our value for sepal width and SL is our value for sepal length. For those of you familiar with Linear Regression this looks very familiar. Basically we are assuming that x is a linear combination of our data plus an intercept. For example, say we have a plant with a sepal width of 3.5 and a sepal length of 5 and some oracle tells us that $\beta_{0} = 1$, $\beta_{1} = 2$, and $\beta_{2} = 4$. This would imply:
$$x = 1 + (2 * 3.5) + (4 * 5) = 28$$
Plugging this into our logistic function gives:
$$\frac{1}{1 + e^{-28}} = .99$$
So we would give a 99% probability to a plant with those dimensions as being Setosa.
Learning
Okay - makes sense. But who is this oracle giving us our $\beta$ values? Good question! This is where the learning in machine learning comes in :). We will learn our $\beta$ values.
Step 1 - Define your cost function
If you have been around machine learning, you probably hear the phrase "cost function" thrown around. Before we get to that, though, let's do some thinking. We are trying to choose $\beta$ values in order to maximize the probability of correctly classifying our plants. That is just the definition of our problem. Let's say someone did give us some $\beta$ values, how would we determine if they were good values or not? We saw above how to get the probability for one example. Now imagine we did this for all our plant observations - all 100. We would now have 100 probability scores. What we would hope is that for the Setosa plants, the probability values are close to 1 and for the Versicolor plants the probability is close to 0.
But we don't care about getting the correct probability for just one observation, we want to correctly classify all our observations. If we assume our data are independent and identically distributed, we can just take the product of all our individually calculated probabilities and that is the value we want to maximize. So in math: $$\prod_{Setosa}\frac{1}{1 + e^{-(\beta_{0} + \beta_{1}SW + \beta_{2}SL)}}\prod_{Versicolor}\left(1 - \frac{1}{1 + e^{-(\beta_{0} + \beta_{1}SW + \beta_{2}SL)}}\right)$$ If we define the logistic function as: $$h(x) = \frac{1}{1 + e^{-x}}$$ and x as: $$x = \beta_{0} + \beta_{1}SW + \beta_{2}SL$$ This can be simplified to: $$\prod_{Setosa}h(x)\prod_{Versicolor}\left(1 - h(x)\right)$$
The $\prod$ symbol means take the product for the observations classified as that plant. Here we are making use of the fact that our data are labeled, so this is called supervised learning. Also, you will notice that for Versicolor observations we are taking 1 minus the logistic function. That is because we are trying to find a value to maximize, and since Versicolor observations should have a probability close to zero, 1 minus the probability should be close to 1. So now we know that we want to maximize the following: $$\prod_{Setosa}h(x)\prod_{Versicolor}\left(1 - h(x)\right)$$
So we now have a value we are trying to maximize. Typically people switch this to minimization by making it negative: $$-\prod_{Setosa}h(x)\prod_{Versicolor}\left(1 - h(x)\right)$$ Note: minimizing the negative is the same as maximizing the positive. The above formula would be called our cost function.
Step 2 - Gradients
So now we have a value to minimize, but how do we actually find the $\beta$ values that minimize our cost function? Do we just try a bunch? That doesn't seem like a good idea...
This is where convex optimization comes into play. We know that the logistic cost function is convex - just trust me on this. And since it is convex, it has a single global minimum which we can converge to using gradient descent.
Here is an image of a convex function:
End of explanation
"""
def logistic_func(theta, x):
return float(1) / (1 + math.e**(-x.dot(theta)))
def log_gradient(theta, x, y):
first_calc = logistic_func(theta, x) - np.squeeze(y)
final_calc = first_calc.T.dot(x)
return final_calc
def cost_func(theta, x, y):
log_func_v = logistic_func(theta,x)
y = np.squeeze(y)
step1 = y * np.log(log_func_v)
step2 = (1-y) * np.log(1 - log_func_v)
final = -step1 - step2
return np.mean(final)
def grad_desc(theta_values, X, y, lr=.001, converge_change=.001):
#normalize
X = (X - np.mean(X, axis=0)) / np.std(X, axis=0)
#setup cost iter
cost_iter = []
cost = cost_func(theta_values, X, y)
cost_iter.append([0, cost])
change_cost = 1
i = 1
while(change_cost > converge_change):
old_cost = cost
theta_values = theta_values - (lr * log_gradient(theta_values, X, y))
cost = cost_func(theta_values, X, y)
cost_iter.append([i, cost])
change_cost = old_cost - cost
i+=1
return theta_values, np.array(cost_iter)
def pred_values(theta, X, hard=True):
#normalize
X = (X - np.mean(X, axis=0)) / np.std(X, axis=0)
pred_prob = logistic_func(theta, X)
pred_value = np.where(pred_prob >= .5, 1, 0)
if hard:
return pred_value
return pred_prob
"""
Explanation: Now you can imagine, that this curve is our cost function defined above and that if we just pick a point on the curve, and then follow it down to the minimum we would eventually reach the minimum, which is our goal. Here is an animation of that. That is the idea behind gradient descent.
So the way we follow the curve is by calculating the gradients or the first derivatives of the cost function with respect to each $\beta$. So lets do some math. First realize that we can also define the cost function as:
$$-\sum_{i=1}^{100}\left[y_{i}log(h(x_{i})) + (1-y_{i})log(1-h(x_{i}))\right]$$
This is because when we take the log our product becomes a sum. See log rules. And if we define $y_{i}$ to be 1 when the observation is Setosa and 0 when Versicolor, then we only do h(x) for Setosa and 1 - h(x) for Versicolor. So lets take the derivative of this new version of our cost function with respect to $\beta_{0}$. Remember that our $\beta_{0}$ is in our x value. So remember that the derivative of log(x) is $\frac{1}{x}$, so we get (for each observation):
$$\frac{y_{i}}{h(x_{i})} + \frac{1-y_{i}}{1-h(x_{i})}$$
And using the quotient rule we see that the derivative of h(x) is:
$$\frac{e^{-x}}{(1+e^{-x})^{2}} = \frac{1}{1+e^{-x}}(1 - \frac{1}{1+e^{-x}}) = h(x)(1-h(x))$$
And the derivative of x with respect to $\beta_{0}$ is just 1. Note that the inner derivative of $1-h(x)$ contributes a minus sign to the second term. Putting it all together we get:
$$\frac{y_{i}h(x_{i})(1-h(x_{i}))}{h(x_{i})} - \frac{(1-y_{i})h(x_{i})(1-h(x_{i}))}{1-h(x_{i})}$$
Simplify to:
$$y_{i}(1-h(x_{i})) - (1 - y_{i})h(x_{i}) = y_{i}-y_{i}h(x_{i}) - h(x_{i})+y_{i}h(x_{i}) = y_{i} - h(x_{i})$$
Bring in the negative and sum and we get the partial derivative with respect to $\beta_0$ to be:
$$\sum_{i=1}^{100}(h(x_{i}) - y_{i})$$
Now the other partial derivatives are easy. The only change is that the derivative of $x_{i}$ with respect to the coefficient is no longer 1. For $\beta_{1}$ it is $SW_{i}$ and for $\beta_{2}$ it is $SL_{i}$. So the partial derivative for $\beta_{1}$ is:
$$\sum_{i=1}^{100}(h(x_{i}) - y_{i})SW_{i}$$
For $\beta_{2}$:
$$\sum_{i=1}^{100}(h(x_{i}) - y_{i})SL_{i}$$
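A useful way to double-check derivations like these is a finite-difference test: nudge each $\beta$ by a small $\epsilon$ and compare the resulting change in cost against the analytic gradient. A self-contained sketch on made-up data:

```python
import numpy as np

def cost(beta, X, y):
    h = 1.0 / (1.0 + np.exp(-X.dot(beta)))
    return -np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))

def grad(beta, X, y):
    # The analytic gradient derived above: sum_i (h(x_i) - y_i) * x_i
    h = 1.0 / (1.0 + np.exp(-X.dot(beta)))
    return X.T.dot(h - y)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = (X[:, 0] > 0).astype(float)
beta = np.array([0.3, -0.2])
eps = 1e-6
numeric = np.array([
    (cost(beta + eps * e, X, y) - cost(beta - eps * e, X, y)) / (2 * eps)
    for e in np.eye(2)
])
# numeric should agree with grad(beta, X, y) to high precision
```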
Step 3 - Gradient Descent
So now that we have our gradients, we can use the gradient descent algorithm to find the values for our $\beta$s that minimize our cost function. The gradient descent algorithm is very simple:
* Initially guess any values for your $\beta$ values
* Repeat until converge:
* $\beta_{i} = \beta_{i} - (\alpha *$ gradient with respect to $\beta_{i})$ for $i = 0, 1, 2$ in our case
Here $\alpha$ is our learning rate. Basically how large of steps to take on our cost curve. What we are doing is taking our current $\beta$ value and then subtracting some fraction of the gradient. We subtract because the gradient is the direction of greatest increase, but we want the direction of greatest decrease, so we subtract. In other words, we pick a random point on our cost curve, check to see which direction we need to go to get closer to the minimum by using the negative of the gradient, and then update our $\beta$ values to move closer to the minimum. Repeat until converge means keep updating our $\beta$ values until our cost value converges - or stops decreasing - meaning we have reached the minimum. Also, it is important to update all the $\beta$ values at the same time. Meaning that you use the same previous $\beta$ values to update all the next $\beta$ values.
Gradient Descent Tricks
I think most of this are from Andrew Ng's machine learning course
* Normalize variables:
* This means for each variable subtract the mean and divide by standard deviation.
* Learning rate:
* If not converging, the learning rate needs to be smaller - but will take longer to converge
* Good values to try ..., .001, .003, .01, .03, .1, .3, 1, 3, ...
* Declare convergence if cost decreases by less than $10^{-3}$ (this is just a decent suggestion)
* Plot convergence as a check
Lets see some code
Below is code that implements everything we discussed. It is vectorized, though, so things are represented as vectors and matricies. It should still be fairly clear what is going on (I hope...if not, please let me know and I can put out a version closer to the math). Also, I didn't implement an intercept (so no $\beta_{0}$) feel free to add this if you wish :)
End of explanation
"""
shape = X.shape[1]
y_flip = np.logical_not(y) #flip Setosa to be 1 and Versicolor to zero to be consistent
betas = np.zeros(shape)
fitted_values, cost_iter = grad_desc(betas, X, y_flip)
print(fitted_values)
"""
Explanation: Put it to the test
So here I will use the above code for our toy example. I initialize our $\beta$ values to all be zero, then run gradient descent to learn the $\beta$ values.
End of explanation
"""
predicted_y = pred_values(fitted_values, X)
predicted_y
"""
Explanation: So I get a value of -1.5 for $\beta_1$ and a value of 1.4 for $\beta_2$. Remember that $\beta_1$ is my coefficient for Sepal Length and $\beta_2$ for Sepal Width. Meaning that as sepal width becomes larger I would have a stronger prediction for Setosa and as Sepal Length becomes larger I have more confidence in the plant being Versicolor. Which makes sense when looking at our earlier plot.
Now let's make some predictions (Note: since we are returning a probability, if the probability is greater than or equal to 50% then I assign the value to Setosa - or a value of 1):
End of explanation
"""
np.sum(y_flip == predicted_y)
"""
Explanation: And let's see how accurate we are:
End of explanation
"""
plt.plot(cost_iter[:,0], cost_iter[:,1])
plt.ylabel("Cost")
plt.xlabel("Iteration")
sns.despine()
"""
Explanation: Cool - we got all but 1 right. So that is pretty good. But again note: this is a very simple example, where getting all correct is actually pretty easy and we are looking at training accuracy. But that is not the point - we just want to make sure our algorithm is working.
We can do another check by taking a look at how our gradient descent converged:
End of explanation
"""
from sklearn import linear_model
logreg = linear_model.LogisticRegression()
logreg.fit(X, y_flip)
sum(y_flip == logreg.predict(X))
"""
Explanation: You can see that as we ran our algorithm, the cost function kept decreasing, and we stopped right about where the decrease levels out. Nice - everything seems to be working!
Lastly, another nice check is to see how well a packaged version of the algorithm does:
End of explanation
"""
from scipy.optimize import fmin_l_bfgs_b
#normalize data
norm_X = (X_full - np.mean(X_full, axis=0)) / np.std(X_full, axis=0)
myargs = (norm_X, y_flip)
betas = np.zeros(norm_X.shape[1])
lbfgs_fitted = fmin_l_bfgs_b(cost_func, x0=betas, args=myargs, fprime=log_gradient)
lbfgs_fitted[0]
"""
Explanation: Cool - they also get 99 / 100 correct. Looking good :)
Advanced Optimization
So gradient descent is one way to learn our $\beta$ values, but there are some other ways too. Basically these are more advanced algorithms that I won't explain, but that can be easily run in Python once you have defined your cost function and your gradients. These algorithms are:
BFGS
http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.fmin_bfgs.html
L-BFGS: Like BFGS but uses limited memory
http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_l_bfgs_b.html
Conjugate Gradient
http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_cg.html
Here are the very high level advantages / disadvantages of using one of these algorithms over gradient descent:
Advantages
Don't need to pick learning rate
Often run faster (not always the case)
Can numerically approximate gradient for you (doesn't always work out well)
Disadvantages
More complex
More of a black box unless you learn the specifics
The one I hear most about these days is L-BFGS, so I will use it as my example. To use the others, all you do is replace the scipy function with the one in the links above. All the arguments remain the same. Also, I will now use all 4 features as opposed to just 2.
L-BFGS
End of explanation
"""
lbfgs_predicted = pred_values(lbfgs_fitted[0], norm_X, hard=True)
sum(lbfgs_predicted == y_flip)
"""
Explanation: Above are the $\beta$ values we have learned. Now let's make some predictions.
End of explanation
"""
from sklearn import linear_model
logreg = linear_model.LogisticRegression()
logreg.fit(norm_X, y_flip)
sum(y_flip == logreg.predict(norm_X))
"""
Explanation: A perfect 100 - not bad.
Compare with Scikit-Learn
End of explanation
"""
fitted_values, cost_iter = grad_desc(betas, norm_X, y_flip)
predicted_y = pred_values(fitted_values, norm_X)
sum(predicted_y == y_flip)
"""
Explanation: Compare with our implementation
End of explanation
"""
|
jllanfranchi/playground | test_sqlite/test_sqlite.ipynb | mit | def adapt_array(arr):
out = io.BytesIO()
np.save(out, arr)
out.seek(0)
return sqlite3.Binary(out.read())
def convert_array(text):
out = io.BytesIO(text)
out.seek(0)
return np.load(out)
# Converts np.ndarray to a BLOB when inserting
sqlite3.register_adapter(np.ndarray, adapt_array)
# Converts a BLOB (declared column type "array") back to np.ndarray when selecting
sqlite3.register_converter("array", convert_array)
x = np.arange(12).reshape(2,6)
print x
con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("create table test (arr array)")
cur.execute("insert into test (arr) values (?)", (x, ))
cur.execute("select arr from test")
data = cur.fetchone()[0]
print(data)
print type(data)
"""
Explanation: Insert / read whole numpy arrays
http://stackoverflow.com/questions/18621513/python-insert-numpy-array-into-sqlite3-database
End of explanation
"""
def create_or_open_db(db_file):
db_is_new = not os.path.exists(db_file)
con = sqlite3.connect(db_file, detect_types=sqlite3.PARSE_DECLTYPES)
if db_is_new:
print 'Creating results schema'
sql = '''CREATE TABLE IF NOT EXISTS results(
run_id TEXT,
run_step_num INTEGER,
theta23 REAL,
deltam31 REAL,
metric REAL,
minimizer_steps array,
PRIMARY KEY (run_id, run_step_num)
);'''
with con:
con.execute(sql)
print 'Creating config schema'
sql = '''CREATE TABLE IF NOT EXISTS config(
run_id TEXT PRIMARY KEY,
template_settings TEXT,
minimizer_settings TEXT,
grid_settings TEXT
);'''
with con:
con.execute(sql)
else:
print 'Schema exists\n'
return con
"""
Explanation: Create a DB/table for storing results
http://www.numericalexpert.com/blog/sqlite_blob_time/sqlite_blob.html
End of explanation
"""
rm ./test.db
np.random.seed(0)
con = create_or_open_db('./test.db')
sql_insert_data = '''INSERT INTO results VALUES (?,?,?,?,?,?);'''
n_inserts = 100
n_mod = 10
t0 = time.time()
for n in xrange(n_inserts):
if n % n_mod == 0:
GUTIL.wstdout('.')
input_data = (
'msu_0',
n,
1139.389,
0.723,
2e-3,
np.random.rand(100,6)
)
try:
with con:
con.execute(sql_insert_data, input_data)
except sqlite3.IntegrityError as e:
if not 'UNIQUE constraint failed' in e.args[0]:
raise
elif n % n_mod == 0:
GUTIL.wstdout('x')
dt = time.time()-t0
con.close()
GUTIL.wstdout(
'\n%s total (%s/insert)' %
(GUTIL.timediffstamp(dt), GUTIL.timediffstamp(dt/float(n_inserts)))
)
e.message
ls -hl ./test.db
rm ./test2.db
np.random.seed(0)
con = create_or_open_db('./test2.db')
sql_insert = '''INSERT INTO results VALUES (?,?,?,?,?,?);'''
t0=time.time()
with con:
for n in xrange(n_inserts):
if n % n_mod == 0:
GUTIL.wstdout('.')
input_data = (
'msu_0',
n,
1139.389,
0.723,
2e-3,
np.random.rand(100,6)
)
try:
con.execute(sql_insert, input_data)
except sqlite3.IntegrityError as e:
if not 'UNIQUE constraint failed' in e.args[0]:
raise
elif n % n_mod == 0:
GUTIL.wstdout('o')
dt = time.time()-t0
con.close()
GUTIL.wstdout(
'\n%s total (%s/insert)' %
(GUTIL.timediffstamp(dt), GUTIL.timediffstamp(dt/float(n_inserts)))
)
dt/n_inserts
ls -hl ./test2.db
"""
Explanation: Insert a single row into the results table
Each insert is synchronous
This is safest, but is about 20 times (or more) slower than syncing once after all the inserts are performed (see below).
End of explanation
"""
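Between the two extremes (commit per insert vs. one commit at the end), sqlite3's `executemany` runs a whole batch inside a single transaction; a self-contained sketch with a simplified table layout:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE results (run_id TEXT, run_step_num INTEGER, metric REAL)')
rows = [('msu_0', n, 2e-3) for n in range(100)]
with con:  # one commit (one sync) for the entire batch
    con.executemany('INSERT INTO results VALUES (?,?,?)', rows)
n_rows = con.execute('SELECT COUNT(*) FROM results').fetchone()[0]
con.close()
```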
con = create_or_open_db('./test2.db')
con.row_factory = sqlite3.Row
sql = '''SELECT
metric, theta23, deltam31, run_id, run_step_num, minimizer_steps
FROM results'''
cursor = con.execute(sql)
for row in cursor:
print row.keys()[:-1]
print [x for x in row][:-1]
print 'shape of', row.keys()[-1], row['minimizer_steps'].shape
break
ls -hl ./test.db
a = row[-1]
"""
Explanation: Read the row back to ensure the data is correct
End of explanation
"""
|
yunfeiz/py_learnt | sample_code/date_utils.ipynb | apache-2.0 | col_show = ['name', 'open', 'pre_close', 'price', 'high', 'low', 'volume', 'amount', 'time', 'code']
initial_letter = ['HTGD','OFKJ','CDKJ','ZJXC','GXKJ','FHTX','DZJG']
code =[]
for letter in initial_letter:
code.append(df[df['UP']==letter].code[0])
#print(code)
if code: # not empty
df_price = ts.get_realtime_quotes(code)
#print(df_price)
#df_price.columns.values.tolist()
df_price[col_show]
"""
Explanation: code, stock code
name, stock name
industry, industry
area, region
pe, price/earnings ratio
outstanding, floating shares (hundreds of millions)
totals, total shares (hundreds of millions)
totalAssets, total assets (tens of thousands)
liquidAssets, current assets
fixedAssets, fixed assets
reserved, capital reserve
reservedPerShare, capital reserve per share
esp, earnings per share
bvps, net assets per share
pb, price/book ratio
timeToMarket, listing date
undp, undistributed profit
perundp, undistributed profit per share
rev, revenue growth YoY (%)
profit, profit growth YoY (%)
gpr, gross profit margin (%)
npr, net profit margin (%)
holders, number of shareholders
['name', 'pe', 'outstanding', 'totals', 'totalAssets', 'liquidAssets', 'fixedAssets', 'esp', 'bvps', 'pb', 'perundp', 'rev', 'profit', 'gpr', 'npr', 'holders']
End of explanation
"""
from matplotlib.mlab import csv2rec
df=ts.get_k_data("002456",start='2018-01-05',end='2018-01-09')
df.to_csv("temp.csv")
r=csv2rec("temp.csv")
#r.date
import time, datetime
#str = df[df.code == '600487'][clommun_show].name.values
#print(str)
today=datetime.date.today()
yesterday = today - datetime.timedelta(1)
#print(today, yesterday)
i = datetime.datetime.now()
print ("The current date and time is %s" % i)
print ("The ISO-format date and time is %s" % i.isoformat() )
print ("The current year is %s" %i.year)
print ("The current month is %s" %i.month)
print ("The current day is %s" %i.day)
print ("The dd/mm/yyyy format is %s/%s/%s" % (i.day, i.month, i.year) )
print ("The current hour is %s" %i.hour)
print ("The current minute is %s" %i.minute)
print ("The current second is %s" %i.second)
import time
localtime = time.localtime(time.time())
print("Local time:", localtime)
# Format as 2016-03-20 11:45:39
print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
# Format as Sat Mar 28 22:24:24 2016
print(time.strftime("%a %b %d %H:%M:%S %Y", time.localtime()))
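`strftime` has an inverse, `datetime.datetime.strptime`, which parses a formatted string back into a structured time using the same format codes; a quick round trip:

```python
import datetime

stamp = '2016-03-20 11:45:39'
parsed = datetime.datetime.strptime(stamp, '%Y-%m-%d %H:%M:%S')
round_trip = parsed.strftime('%Y-%m-%d %H:%M:%S')
```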
#!/usr/bin/python
# -*- coding: UTF-8 -*-
import calendar
cal = calendar.month(2019, 3)
#print (cal)
"""
Explanation: TO-DO
Add the map from initials to codes
Build up a dataframe with fundamentals and indicators
For leading indicators, more data needs to be cached for the beginning of the series
End of explanation
"""
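For the first TO-DO item, the repeated `df[df['UP'] == letter]` scans earlier in the notebook could be replaced by a mapping built once; a sketch with plain dictionaries (the sample rows are placeholders, not real quotes):

```python
# stand-in for the (UP, code) columns used earlier in the notebook
rows = [('HTGD', '600487'), ('OFKJ', '002236'), ('CDKJ', '600571')]

# build the initial -> code map in one pass instead of one scan per initial
initial_to_code = {up: code for up, code in rows}

codes = [initial_to_code[letter] for letter in ('HTGD', 'CDKJ')]
```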
|
pelodelfuego/word2vec-toolbox | notebook/classification/antonyms.ipynb | gpl-3.0 | summaryDf = pd.DataFrame([extractSummaryLine(l) for l in open('../../data/learnedModel/anto/summary.txt').readlines()],
columns=['bidirectional', 'strict', 'clf', 'feature', 'post', 'precision', 'recall', 'f1'])
summaryDf.sort_values('f1', ascending=False)[:10]
"""
Explanation: Experience
Using WordNet as ground truth, we tried to learn a classifier to detect antonymic relations between words (small != big / good != bad).
To do so, we will explore the cartesian product of:
* simple / bidi: whether each adjective is considered to have only one antonym or several
* strict: try to compose missing concepts
* randomForest / knn: knn lets us check whether there is anything consistent to learn; randomForest is a basic model, as a first approach to learning the function
* feature: one of the features presented in the guided tour
* postFeature: any extra processing to apply after feature extraction (like normalise)
We use 10-fold cross validation.
Negative samples are generated by shuffling pairs.
Once you have downloaded the files, you can use this script to reproduce the experiment at home:
python experiment/trainAll_antoClf.py > ../data/learnedModel/anto/log.txt
Results
Here is a summary of the results we gathered;
you can find detailed reports in the logs.
End of explanation
"""
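The negative sampling described above (shuffling pairs) can be sketched as follows: re-pair the right-hand words until no true pair survives (the function name is illustrative, not the repo's actual API):

```python
import random

def shuffle_negatives(pairs, seed=0):
    """Build fake (non-antonym) pairs by re-pairing the right-hand words."""
    rng = random.Random(seed)
    true_set = set(pairs)
    lefts = [a for a, _ in pairs]
    shuffled = [b for _, b in pairs]
    # reshuffle until no original pair is reproduced (fine for small lists)
    while any((a, b) in true_set for a, b in zip(lefts, shuffled)):
        rng.shuffle(shuffled)
    return list(zip(lefts, shuffled))

true_pairs = [('small', 'big'), ('good', 'bad'), ('hot', 'cold'), ('fast', 'slow')]
fake_pairs = shuffle_negatives(true_pairs)
```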
!python ../../toolbox/script/detailConceptPairClfError.py ../../data/voc/npy/wikiEn-skipgram.npy ../../data/learnedModel/anto/bidi__RandomForestClassifier_pCosSim_postNormalize.dill ../../data/wordPair/wordnetAnto.txt anto ../../data/wordPair/wordnetAnto_fake.txt notAnto
"""
Explanation: We can observe quite good f1-scores for RandomForest with the normalised projected cosine similarity feature.
Results are even better in the bidi setting, where adjectives are not restricted to a single antonym. This makes sense, since we can find several antonyms for one word:
* small != big
* small != tall
Allowing missing concepts to be composed also seems to have a positive impact.
Study errors
Here is the detail of:
* False positives - i.e. pairs classified as antonyms but not included in WordNet
* False negatives - i.e. antonyms that were not detected
The false positives are especially interesting here...
End of explanation
"""
|
AlphaSmartDog/DeepLearningNotes | Note-3 Tensor Ridge Regression/global dimensionality reduction (GDR) algorithm Ver 1.0.ipynb | mit | import numpy as np
import pandas as pd
from scipy import linalg
from scipy import optimize
import functools
import tensorly
from tensorly.decomposition import partial_tucker
from tensorly.decomposition import tucker
tensorly.set_backend('numpy')
"""
Explanation: global dimensionality-reduction (GDR) algorithm example
End of explanation
"""
tensor_steam_length = 300
factors_tensor_list = []
for i in np.arange(tensor_steam_length):
a = np.random.normal(size=[69], scale=0.5)
b = np.random.normal(size=[16], scale=0.5)
c = np.random.normal(size=[32], scale=0.5)
x = np.zeros([1, 69, 16, 32])
x[0,:,0,0] = a
x[0,1,:,1] = b
x[0,2,2,:] = c
factors_tensor_list.append(x)
factors_tensor = np.concatenate(factors_tensor_list)
targets = np.random.normal(scale=0.01, size=[300,1])
"""
Explanation: Generate the dataset
End of explanation
"""
def get_weighting_of_geometric_structure (targets):
    W = (targets - targets.T) / targets.T # broadcasting
    W = np.abs(W) - 0.05 # absolute relative difference, thresholded at 5% similarity
    W[W>0.0]=0
    W[W<0.0]=1
    upper_triangular_mask = np.eye(W.shape[0]).cumsum(1) # upper-triangular mask
    return W * upper_triangular_mask
def get_var_of_adjusting_geometric_structure(targets):
W = get_weighting_of_geometric_structure(targets)
return np.expand_dims(W.sum(0),axis=0)
D = get_var_of_adjusting_geometric_structure(targets)
W = get_weighting_of_geometric_structure(targets)
"""
Explanation: 3.3 Vectorizing the $\sum$ loops
$U^i_k\in\mathbb{R}^{batch \times I_k \times J_k}$
$U^{iT}_k\in\mathbb{R}^{batch \times J_k \times I_k}$
$M^i_k = U^i_k U^{iT}_k\in \mathbb{R}^{batch \times I_k \times I_k}$
$M^i_{k(0)} \in \mathbb{R}^{batch \times I_kI_k}$
Let $N = batch$.
$D_{U_k} = \sum\limits^N_{i=1}d_{i,i}U^i_k U^{iT}_k = \sum\limits^N_{i=1}d_{i,i}M^i_k = mat\{(vec(diag(D)))^T M_{k(0)}\}$
$W_{U_k} = \sum\limits^N_{i=1}\sum\limits^N_{j=1}w_{i,j}U^i_k U^{iT}_k = \sum\limits^N_{i=1}\Big(\sum\limits^N_{j=1}w_{i,j}\Big)U^i_k U^{iT}_k = \sum\limits^N_{i=1}w_i M^i_k = mat\{w\, M_{k(0)}\}$
where
$w_{i,j} = \begin{cases}1, & if\ i \le j\ \ and\ \ \|y_i - y_j\| \le 5\% \\ 0, & otherwise\end{cases}$
$\sum\limits^N_{j=1}w_{i,j} = w_i = sum(w_{i,:},\ axis=1)$
End of explanation
"""
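The $D_{U_k}$ identity above can be checked numerically against the explicit loop on small random data (numpy only; the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, I_k = 5, 3
M = rng.normal(size=(N, I_k, I_k))   # batch of M^i_k matrices
d = rng.normal(size=N)               # diagonal entries d_{i,i} of D

# explicit loop: sum_i d_{i,i} * M^i_k
loop_sum = sum(d[i] * M[i] for i in range(N))

# vectorized: fold( vec(diag(D))^T . M_{k(0)} ); the mode-0 unfolding is a reshape
M_unfold = M.reshape(N, -1)
vec_sum = (d @ M_unfold).reshape(I_k, I_k)

max_err = np.abs(loop_sum - vec_sum).max()
```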
factors_tensor = tensorly.tensor(factors_tensor)
core_list = []
mode_factors_list = []
for i in range(factors_tensor.shape[0]):
print (i)
core, mode_factors= tucker(factors_tensor[i])
core = np.expand_dims(core, axis=0)
core_list.append(core)
mode_factors_list.append(mode_factors)
"""
Explanation: Loop over Tucker decompositions
Planned: migrate to PyTorch for batched processing
End of explanation
"""
batch_length = tensor_steam_length
a_list = []
b_list = []
c_list = []
for i in range(batch_length):
a = np.expand_dims(mode_factors_list[i][0], axis=0)
b = np.expand_dims(mode_factors_list[i][1], axis=0)
c = np.expand_dims(mode_factors_list[i][2], axis=0)
a_list.append(a)
b_list.append(b)
c_list.append(c)
U1 = np.concatenate(a_list)
U2 = np.concatenate(b_list)
U3 = np.concatenate(c_list)
M1 = np.matmul(U1, np.transpose(U1, axes=[0,2,1]))
M2 = np.matmul(U2, np.transpose(U2, axes=[0,2,1]))
M3 = np.matmul(U3, np.transpose(U3, axes=[0,2,1]))
"""
Explanation: Concatenate the tensor stream
End of explanation
"""
D_U1_core = np.matmul(D, tensorly.base.unfold(M1, mode=0))
I_k = np.int(np.sqrt(D_U1_core.shape[1]))
D_U1 = tensorly.base.fold(D_U1_core, mode=0, shape=[I_k, I_k])
D_U2_core = np.matmul(D, tensorly.base.unfold(M2, mode=0))
I_k = np.int(np.sqrt(D_U2_core.shape[1]))
D_U2 = tensorly.base.fold(D_U2_core, mode=0, shape=[I_k, I_k])
D_U3_core = np.matmul(D, tensorly.base.unfold(M3, mode=0))
I_k = np.int(np.sqrt(D_U3_core.shape[1]))
D_U3 = tensorly.base.fold(D_U3_core, mode=0, shape=[I_k, I_k])
vec_W = np.expand_dims(W.sum(axis=0), axis=0)
W_U1_core = np.matmul(vec_W, tensorly.base.unfold(M1, mode=0))
I_k = np.int(np.sqrt(W_U1_core.shape[1]))
W_U1 = tensorly.base.fold(W_U1_core, mode=0, shape=[I_k, I_k])
W_U2_core = np.matmul(vec_W, tensorly.base.unfold(M2, mode=0))
I_k = np.int(np.sqrt(W_U2_core.shape[1]))
W_U2 = tensorly.base.fold(W_U2_core, mode=0, shape=[I_k, I_k])
W_U3_core = np.matmul(vec_W, tensorly.base.unfold(M3, mode=0))
I_k = np.int(np.sqrt(W_U3_core.shape[1]))
W_U3 = tensorly.base.fold(W_U3_core, mode=0, shape=[I_k, I_k])
"""
Explanation: We have not yet found a good way to batch-process the Tucker decomposition, so it is handled as a special case here
End of explanation
"""
def objective_function(V, D_U, W_U, J_K):
newshape = [D_U.shape[0], J_K]
V = np.reshape(V,newshape=newshape)
left = np.matmul(np.matmul(V.T, D_U),V)
right = np.matmul(np.matmul(V.T, W_U),V)
return np.trace(left + right)
def constraints(V, D_U):
    newshape = [D_U.shape[0], -1]  # infer J_K from the flattened V
    V = np.reshape(V, newshape=newshape)
    left = np.matmul(np.matmul(V.T, D_U), V)
    return np.trace(left) - 1.0
"""
Explanation: Estimating the correction matrices $V_k$ by quadratic programming
Adding the constraint $trace(V^TD_UV)=1$ makes the optimum of the objective unique. The problem of finding the correction matrix for each factor matrix then becomes:
$min\ J(V) = trace(V^TD_UV-V^TW_UV)$
$s.t. \ trace(V^TD_UV)=1$
End of explanation
"""
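As a sanity check of this constrained formulation on made-up matrices, scipy's SLSQP can enforce the trace constraint directly (note the `constraints=` argument; the sizes and matrices below are arbitrary stand-ins):

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(1)
n, j = 4, 2
A = rng.normal(size=(n, n))
D_U = A @ A.T + n * np.eye(n)        # positive-definite stand-in for D_{U_k}
B = rng.normal(size=(n, n))
W_U = 0.1 * (B + B.T)                # symmetric stand-in for W_{U_k}

def J(v):
    V = v.reshape(n, j)
    return np.trace(V.T @ D_U @ V - V.T @ W_U @ V)

def trace_constraint(v):
    V = v.reshape(n, j)
    return np.trace(V.T @ D_U @ V) - 1.0

res = optimize.minimize(J, rng.normal(scale=0.1, size=n * j),
                        method='SLSQP',
                        constraints={'type': 'eq', 'fun': trace_constraint})
V_opt = res.x.reshape(n, j)
trace_val = np.trace(V_opt.T @ D_U @ V_opt)
```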
objective_function_1 = functools.partial(
objective_function, D_U = D_U1, W_U = W_U1, J_K = 10)
constraints_1 = functools.partial(constraints, D_U = D_U1)
cons_1 = {'type':'eq', 'fun':constraints_1}
initial_1 = np.random.normal(scale=0.1, size=D_U1.shape[0]*10)
V1 = optimize.minimize(objective_function_1, initial_1, constraints=cons_1).x.reshape(D_U1.shape[0],10)
"""
Explanation: $V_1$
End of explanation
"""
objective_function_2 = functools.partial(
objective_function, D_U = D_U2, W_U = W_U2, J_K = 10)
constraints_2 = functools.partial(constraints, D_U = D_U2)
cons_2 = {'type':'eq', 'fun':constraints_2}
initial_2 = np.random.normal(scale=0.1, size=D_U2.shape[0]*10)
V2 = optimize.minimize(objective_function_2, initial_2, constraints=cons_2).x.reshape(D_U2.shape[0],10)
"""
Explanation: $V_2$
End of explanation
"""
objective_function_3 = functools.partial(
objective_function, D_U = D_U3, W_U = W_U3, J_K = 10)
constraints_3 = functools.partial(constraints, D_U = D_U3)
cons_3 = {'type':'eq', 'fun':constraints_3}
initial_3 = np.random.normal(scale=0.1, size=D_U3.shape[0]*10)
V3 = optimize.minimize(objective_function_3, initial_3, constraints=cons_3).x.reshape(D_U3.shape[0],10)
"""
Explanation: $V_3$
End of explanation
"""
new_U1 = np.matmul(V1.T, U1)
new_U2 = np.matmul(V2.T, U2)
new_U3 = np.matmul(V3.T, U3)
unfold_mode1 = tensorly.base.partial_unfold(factors_tensor, mode=0, skip_begin=1)
times_mode1 = np.matmul(new_U1, unfold_mode1)
times1_shape = (factors_tensor.shape[0] ,new_U1.shape[1], factors_tensor.shape[2], factors_tensor.shape[3])
times1 = tensorly.base.partial_fold(times_mode1, 0, times1_shape, skip_begin=1, skip_end=0)
unfold_mode2 = tensorly.base.partial_unfold(times1, mode=1, skip_begin=1)
times_mode2 = np.matmul(new_U2, unfold_mode2)
times2_shape = (factors_tensor.shape[0] ,new_U1.shape[1] ,new_U2.shape[1], factors_tensor.shape[3])
times2 = tensorly.base.partial_fold(times_mode2, 1, times2_shape, skip_begin=1, skip_end=0)
unfold_mode3 = tensorly.base.partial_unfold(times2, mode=2, skip_begin=1)
times_mode3 = np.matmul(new_U3, unfold_mode3)
times3_shape = (factors_tensor.shape[0] ,new_U1.shape[1], new_U2.shape[1], new_U3.shape[1])
times3 = tensorly.base.partial_fold(times_mode3, 2, times3_shape, skip_begin=1, skip_end=0)
new_factors_tensor = times3
"""
Explanation: $\bar{\mathcal{X}_i} = \mathcal{C}_i \times_1 (V^T_1U^i_1) \times_2 (V^T_2U^i_2) \times_3 (V^T_3U^i_3)$
End of explanation
"""
factors_tensor.shape
new_factors_tensor.shape
new_factors_tensor
"""
Explanation: Dynamic-relation capture and dimensionality reduction: $\mathcal{X}\in\mathbb{R}^{I_1 \times I_2 \times I_3} \to \bar{\mathcal{X}} \in \mathbb{R}^{J_1 \times J_2 \times J_3}$
End of explanation
"""
|
daniestevez/jupyter_notebooks | Lucy/Lucy frames Bochum 2021-10-24.ipynb | gpl-3.0 | def timestamps(packets):
epoch = np.datetime64('2000-01-01T12:00:00')
t = np.array([struct.unpack('>I', p[ccsds.SpacePacketPrimaryHeader.sizeof():][:4])[0]
for p in packets], 'uint32')
return epoch + t * np.timedelta64(1, 's')
def load_frames(path):
frame_size = 223 * 5 - 2
frames = np.fromfile(path, dtype = 'uint8')
frames = frames[:frames.size//frame_size*frame_size].reshape((-1, frame_size))
return frames
frames = load_frames('lucy_frames_bochum_20211024_214614.u8')
frames.shape[0]
"""
Explanation: Timestamps are contained in the Space Packet secondary header time code field. They are encoded as big-endian 32-bit integers counting the number of seconds elapsed since the J2000 epoch (2000-01-01T12:00:00).
Looking at the idle APID packets, the next byte might indicate fractional seconds (since it is still part of the secondary header rather than idle data), but it is difficult to be sure.
End of explanation
"""
aos = [AOSFrame.parse(f) for f in frames]
collections.Counter([a.primary_header.transfer_frame_version_number for a in aos])
collections.Counter([a.primary_header.spacecraft_id for a in aos])
collections.Counter([a.primary_header.virtual_channel_id for a in aos])
"""
Explanation: AOS frames
Telemetry is in Virtual Channel 0. Virtual channel 63 contains Only Idle Data.
End of explanation
"""
vc63 = [a for a in aos if a.primary_header.virtual_channel_id == 63]
[a.primary_header for a in vc63[:10]]
vc63[0]
vc63_frames = np.array([f for f, a in zip(frames, aos) if a.primary_header.virtual_channel_id == 63])
np.unique(vc63_frames[:, 6:8], axis = 0)
bytes(vc63_frames[0, 6:8]).hex()
np.unique(vc63_frames[:, 8:])
hex(170)
fc = np.array([a.primary_header.virtual_channel_frame_count for a in vc63])
plt.figure(figsize = (10, 5), facecolor = 'w')
plt.plot(fc[1:], np.diff(fc)-1, '.')
plt.title("Lucy virtual channel 63 (OID) frame loss")
plt.xlabel('Virtual channel frame counter')
plt.ylabel('Lost frames');
fc.size/(fc[-1]-fc[0]+1)
"""
Explanation: Virtual Channel 63 (Only Idle Data)
Virtual channel 63 corresponds to Only Idle Data. The transfer frame data field includes an M_PDU header with a first header pointer equal to 0x7fe, which indicates that the packet zone contains only idle data. The packet zone is filled with 0xaa's.
End of explanation
"""
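The first-header-pointer check described above can be sketched with stdlib `struct`: the M_PDU header is two bytes with the pointer in the low 11 bits, and `0x7fe` is the reserved "only idle data" value (the helper name is mine):

```python
import struct

MPDU_IDLE_ONLY = 0x7FE  # "packet zone contains only idle data"

def first_header_pointer(mpdu_header):
    """Return the low 11 bits of the 2-byte M_PDU header."""
    (word,) = struct.unpack('>H', mpdu_header[:2])
    return word & 0x07FF

# bytes 6:8 of the VC63 frames inspected above
fhp = first_header_pointer(b'\x07\xfe')
is_idle_only = (fhp == MPDU_IDLE_ONLY)
```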
vc0 = [a for a in aos if a.primary_header.virtual_channel_id == 0]
[a.primary_header for a in vc0[:10]]
fc = np.array([a.primary_header.virtual_channel_frame_count for a in vc0])
plt.figure(figsize = (10, 5), facecolor = 'w')
plt.plot(fc[1:], np.diff(fc)-1, '.')
plt.title("Lucy virtual channel 0 (telemetry) frame loss")
plt.xlabel('Virtual channel frame counter')
plt.ylabel('Lost frames');
fc.size/(fc[-1]-fc[0]+1)
vc0_packets = list(ccsds.extract_space_packets(vc0, 49, 0))
vc0_t = timestamps(vc0_packets)
vc0_sp_headers = [ccsds.SpacePacketPrimaryHeader.parse(p) for p in vc0_packets]
vc0_apids = collections.Counter([p.APID for p in vc0_sp_headers])
vc0_apids
apid_axis = {a : k for k, a in enumerate(sorted(vc0_apids))}
plt.figure(figsize = (10, 5), facecolor = 'w')
plt.plot(vc0_t, [apid_axis[p.APID] for p in vc0_sp_headers], '.')
plt.yticks(ticks=range(len(apid_axis)), labels=apid_axis)
plt.xlabel('Space Packet timestamp')
plt.ylabel('APID')
plt.title('Lucy Virtual Channel 0 APID distribution');
vc0_by_apid = {apid : [p for h,p in zip(vc0_sp_headers, vc0_packets)
if h.APID == apid] for apid in vc0_apids}
plot_apids(vc0_by_apid)
"""
Explanation: Virtual channel 0
Virtual channel 0 contains telemetry. There are a few active APIDs sending CCSDS Space Packets using the AOS M_PDU protocol.
End of explanation
"""
tags = {2: Int16ub, 3: Int16ub, 15: Int32ub, 31: Int16ub, 32: Int16ub, 1202: Float64b,
1203: Float64b, 1204: Float64b, 1205: Float64b, 1206: Float64b, 1208: Float32b,
1209: Float32b, 1210: Float32b, 1601: Float32b, 1602: Float32b, 1603: Float32b,
1630: Float32b, 1631: Float32b, 1632: Float32b, 17539: Float32b, 17547: Float32b,
17548: Float32b, 21314: Int32sb, 21315: Int32sb, 21316: Int32sb, 21317: Int32sb,
46555: Int32sb, 46980: Int16ub, 46981: Int16ub, 46982: Int16ub, 47090: Int16ub,
47091: Int16ub, 47092: Int16ub,
}
values = list()
for packet in vc0_by_apid[5]:
t = timestamps([packet])[0]
packet = packet[6+5:] # skip primary and secondary headers
while True:
tag = Int16ub.parse(packet)
packet = packet[2:]
value = tags[tag].parse(packet)
packet = packet[tags[tag].sizeof():]
values.append((tag, value, t))
if len(packet) == 0:
break
values_keys = {v[0] for v in values}
values = {k: [(v[2], v[1]) for v in values if v[0] == k] for k in values_keys}
for k in sorted(values_keys):
vals = values[k]
plt.figure()
plt.title(f'Key {k}')
plt.plot([v[0] for v in vals], [v[1] for v in vals], '.')
"""
Explanation: APID 5
As found by r00t this APID has frames of fixed size containing a number of fields in tag-value format. Tags are 2 bytes, and values have different formats and sizes depending on the tag.
End of explanation
"""
|
fmfn/BayesianOptimization | examples/basic-tour.ipynb | mit | def black_box_function(x, y):
"""Function with unknown internals we wish to maximize.
This is just serving as an example, for all intents and
purposes think of the internals of this function, i.e.: the process
which generates its output values, as unknown.
"""
return -x ** 2 - (y - 1) ** 2 + 1
"""
Explanation: Basic tour of the Bayesian Optimization package
This is a constrained global optimization package built upon bayesian inference and gaussian process, that attempts to find the maximum value of an unknown function in as few iterations as possible. This technique is particularly suited for optimization of high cost functions, situations where the balance between exploration and exploitation is important.
Bayesian optimization works by constructing a posterior distribution of functions (gaussian process) that best describes the function you want to optimize. As the number of observations grows, the posterior distribution improves, and the algorithm becomes more certain of which regions in parameter space are worth exploring and which are not, as seen in the picture below.
As you iterate over and over, the algorithm balances its needs of exploration and exploitation taking into account what it knows about the target function. At each step a Gaussian Process is fitted to the known samples (points previously explored), and the posterior distribution, combined with an exploration strategy (such as UCB (Upper Confidence Bound), or EI (Expected Improvement)), is used to determine the next point that should be explored (see the gif below).
This process is designed to minimize the number of steps required to find a combination of parameters that are close to the optimal combination. To do so, this method uses a proxy optimization problem (finding the maximum of the acquisition function) that, albeit still a hard problem, is cheaper (in the computational sense) and common tools can be employed. Therefore Bayesian Optimization is most adequate for situations where sampling the function to be optimized is a very expensive endeavor. See the references for a proper discussion of this method.
1. Specifying the function to be optimized
This is a function optimization package, therefore the first and most important ingredient is, of course, the function to be optimized.
DISCLAIMER: We know exactly how the output of the function below depends on its parameter. Obviously this is just an example, and you shouldn't expect to know it in a real scenario. However, it should be clear that you don't need to. All you need in order to use this package (and more generally, this technique) is a function f that takes a known set of parameters and outputs a real number.
End of explanation
"""
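To make the UCB idea above concrete, here is a hand-rolled sketch of a Gaussian-process posterior plus the UCB acquisition on a toy 1-D problem (RBF kernel, noise-free, kappa chosen arbitrarily); this is a schematic of the idea, not this package's internals:

```python
import numpy as np

def rbf(a, b, length=1.0):
    # RBF kernel between two 1-D point sets
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_posterior(x_train, y_train, x_query, jitter=1e-9):
    # standard noise-free GP posterior mean and standard deviation
    K = rbf(x_train, x_train) + jitter * np.eye(len(x_train))
    K_s = rbf(x_train, x_query)
    K_ss = rbf(x_query, x_query)
    alpha = np.linalg.solve(K, y_train)
    mu = K_s.T @ alpha
    v = np.linalg.solve(K, K_s)
    var = np.diag(K_ss - K_s.T @ v)
    return mu, np.sqrt(np.clip(var, 0.0, None))

def ucb_next_point(x_train, y_train, x_grid, kappa=2.0):
    # pick the grid point maximizing mean + kappa * uncertainty
    mu, sigma = gp_posterior(x_train, y_train, x_grid)
    return x_grid[np.argmax(mu + kappa * sigma)]

# toy target with its maximum at x = 1
f = lambda x: -(x - 1.0) ** 2
x_obs = np.array([-2.0, 0.0, 2.0])
x_next = ucb_next_point(x_obs, f(x_obs), np.linspace(-3, 3, 121))

# the noise-free GP interpolates the observations exactly
mu_t, sigma_t = gp_posterior(x_obs, f(x_obs), x_obs)
```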
from bayes_opt import BayesianOptimization
# Bounded region of parameter space
pbounds = {'x': (2, 4), 'y': (-3, 3)}
optimizer = BayesianOptimization(
f=black_box_function,
pbounds=pbounds,
verbose=2, # verbose = 1 prints only when a maximum is observed, verbose = 0 is silent
random_state=1,
)
"""
Explanation: 2. Getting Started
All we need to get started is to instanciate a BayesianOptimization object specifying a function to be optimized f, and its parameters with their corresponding bounds, pbounds. This is a constrained optimization technique, so you must specify the minimum and maximum values that can be probed for each parameter in order for it to work
End of explanation
"""
optimizer.maximize(
init_points=2,
n_iter=3,
)
"""
Explanation: The BayesianOptimization object will work out of the box without much tuning needed. The main method you should be aware of is maximize, which does exactly what you think it does.
There are many parameters you can pass to maximize, nonetheless, the most important ones are:
- n_iter: How many steps of bayesian optimization you want to perform. The more steps, the more likely you are to find a good maximum.
- init_points: How many steps of random exploration you want to perform. Random exploration can help by diversifying the exploration space.
End of explanation
"""
print(optimizer.max)
"""
Explanation: The best combination of parameters and target value found can be accessed via the property optimizer.max.
End of explanation
"""
for i, res in enumerate(optimizer.res):
print("Iteration {}: \n\t{}".format(i, res))
"""
Explanation: While the list of all parameters probed and their corresponding target values is available via the property optimizer.res.
End of explanation
"""
optimizer.set_bounds(new_bounds={"x": (-2, 3)})
optimizer.maximize(
init_points=0,
n_iter=5,
)
"""
Explanation: 2.1 Changing bounds
During the optimization process you may realize the bounds chosen for some parameters are not adequate. For these situations you can invoke the method set_bounds to alter them. You can pass any combination of existing parameters and their associated new bounds.
End of explanation
"""
optimizer.probe(
params={"x": 0.5, "y": 0.7},
lazy=True,
)
"""
Explanation: 3. Guiding the optimization
It is often the case that we have an idea of regions of the parameter space where the maximum of our function might lie. For these situations the BayesianOptimization object allows the user to specify specific points to be probed. By default these will be explored lazily (lazy=True), meaning these points will be evaluated only the next time you call maximize. This probing process happens before the gaussian process takes over.
Parameters can be passed as dictionaries such as below:
End of explanation
"""
print(optimizer.space.keys)
optimizer.probe(
params=[-0.3, 0.1],
lazy=True,
)
optimizer.maximize(init_points=0, n_iter=0)
"""
Explanation: Or as an iterable. Beware that the order has to be alphabetical. You can use optimizer.space.keys for guidance.
End of explanation
"""
from bayes_opt.logger import JSONLogger
from bayes_opt.event import Events
"""
Explanation: 4. Saving, loading and restarting
By default you can follow the progress of your optimization by setting verbose>0 when instantiating the BayesianOptimization object. If you need more control over logging/alerting you will need to use an observer. For more information about observers check out the advanced tour notebook. Here we will only see how to use the native JSONLogger object to save to and load progress from files.
4.1 Saving progress
End of explanation
"""
logger = JSONLogger(path="./logs.json")
optimizer.subscribe(Events.OPTIMIZATION_STEP, logger)
optimizer.maximize(
init_points=2,
n_iter=3,
)
"""
Explanation: The observer paradigm works by:
1. Instantiating an observer object.
2. Tying the observer object to a particular event fired by an optimizer.
The BayesianOptimization object fires a number of internal events during optimization; in particular, every time it probes the function and obtains a new parameter-target combination it fires an Events.OPTIMIZATION_STEP event, which our logger will listen to.
Caveat: The logger will not look back at previously probed points.
End of explanation
"""
from bayes_opt.util import load_logs
new_optimizer = BayesianOptimization(
f=black_box_function,
pbounds={"x": (-2, 2), "y": (-2, 2)},
verbose=2,
random_state=7,
)
print(len(new_optimizer.space))
load_logs(new_optimizer, logs=["./logs.json"]);
print("New optimizer is now aware of {} points.".format(len(new_optimizer.space)))
new_optimizer.maximize(
init_points=0,
n_iter=10,
)
"""
Explanation: 4.2 Loading progress
Naturally, if you stored progress you will be able to load that onto a new instance of BayesianOptimization. The easiest way to do it is by invoking the load_logs function, from the util submodule.
End of explanation
"""
|
rajul/tvb-library | tvb/simulator/demos/surface_deterministic_stimulus.ipynb | gpl-2.0 | from tvb.datatypes.cortex import Cortex
from tvb.simulator.lab import *
"""
Explanation: Demonstrate using the simulator for a surface simulation.
Run time: approximately 2 min (workstation circa 2010).
Memory requirement: ~ 1 GB
End of explanation
"""
LOG.info("Configuring...")
#Initialise a Model, Coupling, and Connectivity.
oscillator = models.Generic2dOscillator()
white_matter = connectivity.Connectivity(load_default=True)
white_matter.speed = numpy.array([4.0])
white_matter_coupling = coupling.Linear(a=2 ** -7)
#Initialise an Integrator
heunint = integrators.HeunDeterministic(dt=2 ** -4)
#Initialise some Monitors with period in physical time
mon_tavg = monitors.TemporalAverage(period=2 ** -2)
mon_savg = monitors.SpatialAverage(period=2 ** -2)
mon_eeg = monitors.EEG(period=2 ** -2)
#Bundle them
what_to_watch = (mon_tavg, mon_savg, mon_eeg)
#Initialise a surface
local_coupling_strength = numpy.array([2 ** -6])
default_cortex = Cortex(load_default=True)
default_cortex.coupling_strength = local_coupling_strength
##NOTE: THIS IS AN EXAMPLE OF DESCRIBING A SURFACE STIMULUS AT REGIONS LEVEL.
# SURFACES ALSO SUPPORT STIMULUS SPECIFICATION BY A SPATIAL FUNCTION
# CENTRED AT A VERTEX (OR VERTICES).
#Define the stimulus
#Specify a weighting for regions to receive stimuli...
white_matter.configure() # Because we want access to number_of_regions
nodes = [0, 7, 13, 33, 42]
#NOTE: we specify the stimulus at region level; the simulator maps it onto the surface
weighting = numpy.zeros((white_matter.number_of_regions, 1))
weighting[nodes] = numpy.array([2.0 ** -2, 2.0 ** -3, 2.0 ** -4, 2.0 ** -5, 2.0 ** -6])[:, numpy.newaxis]
eqn_t = equations.Gaussian()
eqn_t.parameters["midpoint"] = 8.0
stimulus = patterns.StimuliRegion(temporal=eqn_t,
connectivity=white_matter,
weight=weighting)
#Initialise Simulator -- Model, Connectivity, Integrator, Monitors, and surface.
sim = simulator.Simulator(model=oscillator,
connectivity=white_matter,
coupling=white_matter_coupling,
integrator=heunint,
monitors=what_to_watch,
surface=default_cortex, stimulus=stimulus)
sim.configure()
#Clear the initial transient, so that the effect of the stimulus is clearer.
#NOTE: this is ignored, stimuli are defined relative to each simulation call.
LOG.info("Initial integration to clear transient...")
for _, _, _ in sim(simulation_length=128):
pass
LOG.info("Starting simulation...")
#Perform the simulation
tavg_data = []
tavg_time = []
savg_data = []
savg_time = []
eeg_data = []
eeg_time = []
for tavg, savg, eeg in sim(simulation_length=2 ** 5):
    if tavg is not None:
        tavg_time.append(tavg[0])
        tavg_data.append(tavg[1])
    if savg is not None:
        savg_time.append(savg[0])
        savg_data.append(savg[1])
    if eeg is not None:
        eeg_time.append(eeg[0])
        eeg_data.append(eeg[1])
LOG.info("finished simulation.")
"""
Explanation: Perform the simulation
End of explanation
"""
#Plot the stimulus
plot_pattern(sim.stimulus)
if IMPORTED_MAYAVI:
surface_pattern(sim.surface, sim.stimulus.spatial_pattern)
#Make the lists numpy.arrays for easier use.
TAVG = numpy.array(tavg_data)
SAVG = numpy.array(savg_data)
EEG = numpy.array(eeg_data)
#Plot region averaged time series
figure(3)
plot(savg_time, SAVG[:, 0, :, 0])
title("Region average")
#Plot EEG time series
figure(4)
plot(eeg_time, EEG[:, 0, :, 0])
title("EEG")
#Show them
show()
#Surface movie, requires mayavi.mlab
if IMPORTED_MAYAVI:
st = surface_timeseries(sim.surface, TAVG[:, 0, :, 0])
"""
Explanation: Plot pretty pictures of what we just did
End of explanation
"""
|
TomTranter/OpenPNM | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | mit | import warnings
import numpy as np
import scipy as sp
import openpnm as op
%matplotlib inline
np.random.seed(10)
ws = op.Workspace()
ws.settings['loglevel'] = 40
np.set_printoptions(precision=4)
pn = op.network.Cubic(shape=[10, 10, 10], spacing=0.00006, name='net')
"""
Explanation: Tutorial 3 of 3: Advanced Topics and Usage
Learning Outcomes
Use different methods to add boundary pores to a network
Manipulate network topology by adding and removing pores and throats
Explore the ModelsDict design, including copying models between objects, and changing model parameters
Write a custom pore-scale model and a custom Phase
Access and manipulate objects associated with the network
Combine multiple algorithms to predict relative permeability
Build and Manipulate Network Topology
For the present tutorial, we'll keep the topology simple to help keep the focus on other aspects of OpenPNM.
End of explanation
"""
pn.add_boundary_pores(labels=['top', 'bottom'])
"""
Explanation: Adding Boundary Pores
When performing transport simulations it is often useful to have 'boundary' pores attached to the surface(s) of the network where boundary conditions can be applied. When using the Cubic class, two methods are available for doing this: add_boundaries, which is specific to the Cubic class, and add_boundary_pores, which is a generic method that can also be used on other network types and which is inherited from GenericNetwork. The first method automatically adds boundaries to ALL six faces of the network and offsets them from the network by 1/2 of the value provided as the network spacing. The second method provides total control over which boundary pores are created and where they are positioned, but requires the user to specify which pores the boundary pores should be attached to. Let's explore these two options:
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
fig = op.topotools.plot_connections(pn, c='r')
fig = op.topotools.plot_coordinates(pn, c='b', fig=fig)
fig.set_size_inches([10, 10])
"""
Explanation: Let's quickly visualize this network with the added boundaries:
End of explanation
"""
Ts = np.random.rand(pn.Nt) < 0.1 # Create a mask with ~10% of throats labeled True
op.topotools.trim(network=pn, throats=Ts) # Use mask to indicate which throats to trim
"""
Explanation: Adding and Removing Pores and Throats
OpenPNM uses a list-based data storage scheme for all properties, including topological connections. One of the benefits of this approach is that adding and removing pores and throats from the network is essentially as simple as adding or removing rows from the data arrays. The one exception to this 'simplicity' is that the 'throat.conns' array must be treated carefully when trimming pores, so OpenPNM provides the extend and trim functions for adding and removing, respectively. To demonstrate, let's reduce the coordination number of the network to create a more random structure:
End of explanation
"""
a = pn.check_network_health()
print(a)
"""
Explanation: When the trim function is called, it automatically checks the health of the network afterwards, so logger messages might appear on the command line if problems were found such as isolated clusters of pores or pores with no throats. This health check is performed by calling the Network's check_network_health method which returns a HealthDict containing the results of the checks:
End of explanation
"""
op.topotools.trim(network=pn, pores=a['trim_pores'])
"""
Explanation: The HealthDict contains several lists including things like duplicate throats and isolated pores, but also a suggestion of which pores to trim to return the network to a healthy state. Also, the HealthDict has a health attribute that is False if any checks fail.
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
fig = op.topotools.plot_connections(pn, c='r')
fig = op.topotools.plot_coordinates(pn, c='b', fig=fig)
fig.set_size_inches([10, 10])
"""
Explanation: Let's take another look at the network to see the trimmed pores and throats:
End of explanation
"""
Ps = pn.pores('*boundary', mode='not')
Ts = pn.throats('*boundary', mode='not')
geom = op.geometry.StickAndBall(network=pn, pores=Ps, throats=Ts, name='intern')
Ps = pn.pores('*boundary')
Ts = pn.throats('*boundary')
boun = op.geometry.Boundary(network=pn, pores=Ps, throats=Ts, name='boun')
"""
Explanation: Define Geometry Objects
The boundary pores we've added to the network should be treated a little bit differently. Specifically, they should have no volume or length (as they are not physically representative of real pores). To do this, we create two separate Geometry objects, one for internal pores and one for the boundaries:
End of explanation
"""
air = op.phases.Air(network=pn)
water = op.phases.Water(network=pn)
water['throat.contact_angle'] = 110
water['throat.surface_tension'] = 0.072
"""
Explanation: The StickAndBall class is preloaded with the pore-scale models to calculate all the necessary size information (pore diameter, pore volume, throat length, throat diameter, etc.). The Boundary class is special and is only used for the boundary pores. In this class, geometrical properties are set to small fixed values such that they don't affect the simulation results.
Define Multiple Phase Objects
In order to simulate relative permeability of air through a partially water-filled network, we need to create each Phase object. OpenPNM includes pre-defined classes for each of these common fluids:
End of explanation
"""
from openpnm.phases import GenericPhase
class Oil(GenericPhase):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.add_model(propname='pore.viscosity',
model=op.models.misc.polynomial,
prop='pore.temperature',
a=[1.82082e-2, 6.51E-04, -3.48E-7, 1.11E-10])
self['pore.molecular_weight'] = 116 # g/mol
"""
Explanation: Aside: Creating a Custom Phase Class
In many cases you will want to create your own fluid, such as an oil or brine, which may be commonly used in your research. OpenPNM cannot predict all the possible scenarios, but luckily it is easy to create a custom Phase class as follows:
End of explanation
"""
oil = Oil(network=pn)
print(oil)
"""
Explanation: Creating a Phase class basically involves placing a series of self.add_model commands within the __init__ section of the class definition. This means that when the class is instantiated, all the models are added to itself (i.e. self).
**kwargs is a Python trick that captures all arguments in a dict called kwargs and passes them to another function that may need them. In this case they are passed to the __init__ method of Oil's parent by the super function. Specifically, things like name and network are expected.
The above code block also stores the molecular weight of the oil as a constant value
Adding models and constant values in this way could just as easily be done in a run script, but the advantage of defining a class is that it can be saved in a file (i.e. 'my_custom_phases') and reused in any project.
End of explanation
"""
phys_water_internal = op.physics.GenericPhysics(network=pn, phase=water, geometry=geom)
phys_air_internal = op.physics.GenericPhysics(network=pn, phase=air, geometry=geom)
phys_water_boundary = op.physics.GenericPhysics(network=pn, phase=water, geometry=boun)
phys_air_boundary = op.physics.GenericPhysics(network=pn, phase=air, geometry=boun)
"""
Explanation: Define Physics Objects for Each Geometry and Each Phase
In the tutorial #2 we created two Physics object, one for each of the two Geometry objects used to handle the stratified layers. In this tutorial, the internal pores and the boundary pores each have their own Geometry, but there are two Phases, which also each require a unique Physics:
End of explanation
"""
def mason_model(target, diameter='throat.diameter', theta='throat.contact_angle',
sigma='throat.surface_tension', f=0.6667):
proj = target.project
network = proj.network
phase = proj.find_phase(target)
Dt = network[diameter]
theta = phase[theta]
sigma = phase[sigma]
Pc = 4*sigma*np.cos(f*np.deg2rad(theta))/Dt
return Pc[phase.throats(target.name)]
"""
Explanation: To reiterate, one Physics object is required for each Geometry AND each Phase, so the number can grow to become annoying very quickly. Some useful tips for easing this situation are given below.
Create a Custom Pore-Scale Physics Model
Perhaps the most distinguishing feature between pore-network modeling papers is the pore-scale physics models employed. Accordingly, OpenPNM was designed to allow for easy customization in this regard, so that you can create your own models to augment or replace the ones included in the OpenPNM models libraries. For demonstration, let's implement the capillary pressure model proposed by Mason and Morrow in 1994. They studied the entry pressure of non-wetting fluid into a throat formed by spheres, and found that the converging-diverging geometry increased the capillary pressure required to penetrate the throat. As a simple approximation they proposed $P_c = -2 \sigma \cdot \cos(2/3 \, \theta) / R_t$
Pore-scale models are written as basic function definitions:
End of explanation
"""
mod = op.models.physics.hydraulic_conductance.hagen_poiseuille
phys_water_internal.add_model(propname='throat.hydraulic_conductance',
model=mod)
phys_water_internal.add_model(propname='throat.entry_pressure',
model=mason_model)
"""
Explanation: Let's examine the components of above code:
The function receives a target object as an argument. This indicates which object the results will be returned to.
The f value is a scale factor that is applied to the contact angle. Mason and Morrow suggested a value of 2/3 as a decent fit to the data, but we'll make this an adjustable parameter with 2/3 as the default.
Note that throat.diameter is actually a Geometry property, but it is retrieved via the network using the data exchange rules outlined in the second tutorial.
All of the calculations are done for every throat in the network, but this pore-scale model may be assigned to a target like a Physics object, that is a subset of the full domain. As such, the last line extracts values from the Pc array for the location of target and returns just the subset.
The actual values of the contact angle, surface tension, and throat diameter are NOT sent in as numerical arrays, but rather as dictionary keys to the arrays. There is one very important reason for this: if arrays had been sent, then re-running the model would use the same arrays and hence not use any updated values. By having access to dictionary keys, the model actually looks up the current values in each of the arrays whenever it is run.
It is good practice to include the dictionary keys as arguments, such as theta='throat.contact_angle'. This way the user can control where the contact angle is stored on the target object.
Copy Models Between Physics Objects
As mentioned above, the need to specify a separate Physics object for each Geometry and Phase can become tedious. It is possible to copy the pore-scale models assigned to one object onto another object. First, let's assign the models we need to phys_water_internal:
End of explanation
"""
phys_water_boundary.models = phys_water_internal.models
"""
Explanation: Now make a copy of the models on phys_water_internal and apply it to the other water Physics objects:
End of explanation
"""
phys_water_boundary.regenerate_models()
phys_air_internal.regenerate_models()
phys_air_boundary.regenerate_models()
"""
Explanation: The only 'gotcha' with this approach is that each of the Physics objects must be regenerated in order to place numerical values for all the properties into the data arrays:
End of explanation
"""
phys_water_internal.models['throat.entry_pressure']['f'] = 0.75 # Change value
phys_water_internal.regenerate_models() # Regenerate model with new 'f' value
"""
Explanation: Adjust Pore-Scale Model Parameters
The pore-scale models are stored in a ModelsDict object that is itself stored under the models attribute of each object. This arrangement is somewhat convoluted, but it enables integrated storage of models on the objects to which they apply. The models on an object can be inspected with print(phys_water_internal), which shows a list of all the pore-scale properties that are computed by a model, and some information about each model's regeneration mode.
Each model in the ModelsDict can be individually inspected by accessing it using the dictionary key corresponding to the pore-property that it calculates, i.e. print(phys_water_internal.models['throat.entry_pressure']). This shows a list of all the parameters associated with that model. It is possible to edit these parameters directly:
End of explanation
"""
inv = op.algorithms.Porosimetry(network=pn)
inv.setup(phase=water)
inv.set_inlets(pores=pn.pores(['top', 'bottom']))
inv.run()
"""
Explanation: More details about the ModelsDict and ModelWrapper classes can be found in :ref:models.
Perform Multiphase Transport Simulations
Use the Built-In Drainage Algorithm to Generate an Invading Phase Configuration
End of explanation
"""
Pi = inv['pore.invasion_pressure'] < 5000
Ti = inv['throat.invasion_pressure'] < 5000
"""
Explanation: The inlet pores were set to both 'top' and 'bottom' using the pn.pores method. The algorithm applies to the entire network so the mapping of network pores to the algorithm pores is 1-to-1.
The run method automatically generates a list of 25 capillary pressure points to test, but you can also specify more points, or which specific points to test. See the method's documentation for the details.
Once the algorithm has been run, the resulting capillary pressure curve can be viewed with plot_drainage_curve. If you'd prefer a table of data for plotting in your software of choice you can use get_drainage_data which prints a table in the console.
Set Pores and Throats to Invaded
After running, the inv object possesses arrays containing the pressure at which each pore and throat was invaded, stored as 'pore.invasion_pressure' and 'throat.invasion_pressure'. These arrays can be used to obtain a list of which pores and throats are invaded by water, using Boolean logic:
End of explanation
"""
Ts = phys_water_internal.map_throats(~Ti, origin=water)
phys_water_internal['throat.hydraulic_conductance'][Ts] = 1e-20
"""
Explanation: The resulting Boolean masks can be used to manually adjust the hydraulic conductance of pores and throats based on their phase occupancy. The following lines set the air-filled throats to near-zero conductance for water flow:
End of explanation
"""
water_flow = op.algorithms.StokesFlow(network=pn, phase=water)
water_flow.set_value_BC(pores=pn.pores('left'), values=200000)
water_flow.set_value_BC(pores=pn.pores('right'), values=100000)
water_flow.run()
Q_partial, = water_flow.rate(pores=pn.pores('right'))
"""
Explanation: The logic of these statements implicitly assumes that transport between two pores is only blocked if the throat is filled with the other phase, meaning that both pores could be filled and transport is still permitted. Another option would be to set the transport to near-zero if either or both of the pores are filled as well.
The above approach can get complicated if there are several Geometry objects, and it is also a bit laborious. There is a pore-scale model for this under Physics.models.multiphase called conduit_conductance. The term conduit refers to the path between two pores that includes 1/2 of each pore plus the connecting throat.
Calculate Relative Permeability of Each Phase
We are now ready to calculate the relative permeability of the domain under partially flooded conditions. Instantiate an StokesFlow object:
End of explanation
"""
phys_water_internal.regenerate_models()
water_flow.run()
Q_full, = water_flow.rate(pores=pn.pores('right'))
"""
Explanation: The relative permeability is the ratio of the water flow through the partially water saturated media versus through fully water saturated media; hence we need to find the absolute permeability of water. This can be accomplished by regenerating the phys_water_internal object, which will recalculate the 'throat.hydraulic_conductance' values and overwrite our manually entered near-zero values from the inv simulation, by calling phys_water_internal.regenerate_models(). We can then re-use the water_flow algorithm:
End of explanation
"""
K_rel = Q_partial/Q_full
print(f"Relative permeability: {K_rel:.5f}")
"""
Explanation: And finally, the relative permeability can be found from:
End of explanation
"""
|
team-hdnet/hdnet | examples/demoValidation.ipynb | gpl-3.0 | true_spikes = hdnet.spikes.Spikes(spikes=hdnet.sampling.sample_from_ising_gibbs(J=Js[0,:15,:15], theta=thetas[0,:15],
num_samples = 10**3, burn_in = 5*10**2, sampling_steps = 10**2))
print(true_spikes._spikes.shape)
"""
Explanation: Generating a Spike Train from the First Trial (First 15 Neurons) of the Fitted Parameters
End of explanation
"""
print(true_spikes.get_frequencies())
"""
Explanation: Using Method in Spikes for finding frequencies of codewords in sampled spiketrain
End of explanation
"""
pred_spikes = hdnet.spikes.Spikes(spikes=hdnet.sampling.sample_from_ising_gibbs(J=Js[2,:15,:15], theta=thetas[2,:15],
num_samples = 10**3, burn_in = 5*10**2, sampling_steps = 10**2))
print(pred_spikes._spikes.shape)
"""
Explanation: Generating a Spike Train from the Second Trial (First 15 Neurons) of the Fitted Parameters
End of explanation
"""
print(pred_spikes.get_frequencies())
"""
Explanation: Using Method in Spikes for finding frequencies of codewords in sampled spiketrain
End of explanation
"""
logp = hdnet.spikes_model_validation.LogProbabilityRatio(
true_spikes,pred_spikes)
print(logp.call())
"""
Explanation: Using LogProbabilityRatio Validation between fitted spiketrains
End of explanation
"""
freqc = hdnet.spikes_model_validation.MostFrequentCommonCode(
true_spikes,pred_spikes)
print("Common Codes: "+str(len(freqc.call(50)))+" in Top 50 Codes")
"""
Explanation: Using MostFrequentCommonCode Validation between fitted spiketrains
End of explanation
"""
first_order = pred_spikes.NOrderInteractions(N=1)
print(first_order.shape)
print(first_order[0])
"""
Explanation: Calculating Means, Correlations and Higher-Order Interactions within a Single Predicted Spike Train
First Order
End of explanation
"""
print(pred_spikes.mean_activity())
"""
Explanation: Test
End of explanation
"""
second_order = pred_spikes.NOrderInteractions(N=2)
print(np.shape(second_order))
plt.matshow(second_order[0])
plt.show()
"""
Explanation: Second Order
End of explanation
"""
scaled_spikes = hdnet.spikes.Spikes(pred_spikes.scale_and_center())
print(np.mean(scaled_spikes._spikes,axis=2))
print(np.std(scaled_spikes._spikes,axis=2))
print(np.shape(scaled_spikes._spikes))
plt.matshow(scaled_spikes.covariance()[0])
"""
Explanation: Test
End of explanation
"""
third_order = pred_spikes.NOrderInteractions(N=3)
print(np.shape(third_order))
plt.matshow(third_order[0][1])
plt.show()
"""
Explanation: Third Order
End of explanation
"""
NEURONS = 15
NUM_SAMPLES = 10**4
true_spikes = hdnet.spikes.Spikes(spikes=hdnet.sampling.sample_from_ising_gibbs(J=Js[0,:NEURONS,:NEURONS], theta=thetas[0,:NEURONS],
num_samples = NUM_SAMPLES, burn_in = 5*10**2, sampling_steps = 10**2))
pred_spikes = hdnet.spikes.Spikes(spikes=hdnet.sampling.sample_from_ising_gibbs(J=Js[2,:NEURONS,:NEURONS], theta=thetas[2,:NEURONS],
num_samples = NUM_SAMPLES, burn_in = 5*10**2, sampling_steps = 10**2))
freqs_true = true_spikes.get_frequencies()
logp = hdnet.spikes_model_validation.LogProbabilityRatio(
true_spikes,pred_spikes)
ratios = logp.call()
x = []
y = []
for code,ratio in ratios.items():
x.append(ratio)
y.append(freqs_true[code])
plt.scatter(y,x)
plt.show()
"""
Explanation: Validation Examples
1. Plot Log Probability Ratios of codewords in true_spikes and pred_spikes against frequencies in true_spikes
End of explanation
"""
freqc = hdnet.spikes_model_validation.MostFrequentCommonCode(
true_spikes,pred_spikes)
print("Common Codes: "+str(len(freqc.call(500)))+" in Top 500 Codes")
"""
Explanation: 2. Compare intersection of most frequently occurring codewords in true_spikes and pred_spikes
End of explanation
"""
true_second_order = true_spikes.NOrderInteractions(N=2)[0]
pred_second_order = pred_spikes.NOrderInteractions(N=2)[0]
x = []
y = []
for i in range(NEURONS):
for j in range(i+1,NEURONS):
x.append(true_second_order[i][j])
y.append(pred_second_order[i][j])
fig, ax = plt.subplots()
ax.scatter(x, y)
lims = [
np.min([ax.get_xlim(), ax.get_ylim()]),
np.max([ax.get_xlim(), ax.get_ylim()]),
]
ax.plot(lims, lims, 'k-')
ax.set_aspect('equal')
ax.set_xlim(lims)
ax.set_ylim(lims)
plt.show()
"""
Explanation: 3. Plot C<sub>ij</sub> for neurons in true_spikes and pred_spikes
End of explanation
"""
|
MatteusDeloge/opengrid | notebooks/Demo_caching.ipynb | apache-2.0 | import pandas as pd
from opengrid.library import misc
from opengrid.library import houseprint
from opengrid.library import caching
from opengrid.library import analysis
import charts
hp = houseprint.Houseprint()
"""
Explanation: Demo caching
This notebook shows how caching of daily results is organised. First we show the low-level approach, then a high-level function is used.
Low-level approach
End of explanation
"""
cache_water = caching.Cache(variable='water_daily_min')
df_cache = cache_water.get(sensors=hp.get_sensors(sensortype='water'))
charts.plot(df_cache.iloc[-8:], stock=True, show='inline')
"""
Explanation: We demonstrate the caching for the minimal daily water consumption (should be close to zero unless there is a water leak). We create a cache object by specifying what we like to store and retrieve through this object. The cached data is saved as a single csv per sensor in a folder specified in the opengrid.cfg. Add the path to a folder where you want these csv-files to be stored as follows to your opengrid.cfg
[data]
folder: path_to_folder
End of explanation
"""
hp.sync_tmpos()
start = pd.Timestamp('now') - pd.Timedelta(weeks=1)
df_water = hp.get_data(sensortype='water', head=start, ).diff()
df_water.info()
"""
Explanation: If this is the first time you run this demo, no cached data will be found, and you get an empty graph.
Let's store some results in this cache. We start from the water consumption of last week.
End of explanation
"""
daily_min = analysis.daily_min(df_water)
daily_min.info()
daily_min
cache_water.update(daily_min)
"""
Explanation: We use the method daily_min() from the analysis module to obtain a dataframe with daily minima for each sensor.
End of explanation
"""
sensors = hp.get_sensors(sensortype='water') # sensor objects
charts.plot(cache_water.get(sensors=sensors, start=start, end=None), show='inline', stock=True)
"""
Explanation: Now we can get the daily water minima from the cache directly. Pass a start or end date to limit the returned dataframe.
End of explanation
"""
import pandas as pd
from opengrid.library import misc
from opengrid.library import houseprint
from opengrid.library import caching
from opengrid.library import analysis
import charts
hp = houseprint.Houseprint()
#hp.sync_tmpos()
sensors = hp.get_sensors(sensortype='water')
caching.cache_results(hp=hp, sensors=sensors, function='daily_min', resultname='water_daily_min')
cache = caching.Cache('water_daily_min')
daily_min = cache.get(sensors = sensors, start = '20151201')
charts.plot(daily_min, stock=True, show='inline')
"""
Explanation: A high-level cache function
The caching of daily results is very similar for all kinds of results. Therefore, a high-level function is defined that can be parametrised to cache a lot of different things.
End of explanation
"""
|
zero323/spark | python/docs/source/getting_started/quickstart_df.ipynb | apache-2.0 | from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
"""
Explanation: Quickstart: DataFrame
This is a short introduction and quickstart for the PySpark DataFrame API. PySpark DataFrames are lazily evaluated. They are implemented on top of RDDs. When Spark transforms data, it does not immediately compute the transformation but plans how to compute later. When actions such as collect() are explicitly called, the computation starts.
This notebook shows the basic usages of the DataFrame, geared mainly for new users. You can run the latest version of these examples by yourself in 'Live Notebook: DataFrame' at the quickstart page.
There is also other useful information in Apache Spark documentation site, see the latest version of Spark SQL and DataFrames, RDD Programming Guide, Structured Streaming Programming Guide, Spark Streaming Programming Guide and Machine Learning Library (MLlib) Guide.
PySpark applications start with initializing SparkSession which is the entry point of PySpark as below. In case of running it in PySpark shell via <code>pyspark</code> executable, the shell automatically creates the session in the variable <code>spark</code> for users.
End of explanation
"""
from datetime import datetime, date
import pandas as pd
from pyspark.sql import Row
df = spark.createDataFrame([
Row(a=1, b=2., c='string1', d=date(2000, 1, 1), e=datetime(2000, 1, 1, 12, 0)),
Row(a=2, b=3., c='string2', d=date(2000, 2, 1), e=datetime(2000, 1, 2, 12, 0)),
    Row(a=3, b=4., c='string3', d=date(2000, 3, 1), e=datetime(2000, 1, 3, 12, 0))
])
df
"""
Explanation: DataFrame Creation
A PySpark DataFrame can be created via pyspark.sql.SparkSession.createDataFrame typically by passing a list of lists, tuples, dictionaries and pyspark.sql.Rows, a pandas DataFrame and an RDD consisting of such a list.
pyspark.sql.SparkSession.createDataFrame takes the schema argument to specify the schema of the DataFrame. When it is omitted, PySpark infers the corresponding schema by taking a sample from the data.
Firstly, you can create a PySpark DataFrame from a list of rows
End of explanation
"""
df = spark.createDataFrame([
(1, 2., 'string1', date(2000, 1, 1), datetime(2000, 1, 1, 12, 0)),
(2, 3., 'string2', date(2000, 2, 1), datetime(2000, 1, 2, 12, 0)),
(3, 4., 'string3', date(2000, 3, 1), datetime(2000, 1, 3, 12, 0))
], schema='a long, b double, c string, d date, e timestamp')
df
"""
Explanation: Create a PySpark DataFrame with an explicit schema.
End of explanation
"""
pandas_df = pd.DataFrame({
'a': [1, 2, 3],
'b': [2., 3., 4.],
'c': ['string1', 'string2', 'string3'],
'd': [date(2000, 1, 1), date(2000, 2, 1), date(2000, 3, 1)],
'e': [datetime(2000, 1, 1, 12, 0), datetime(2000, 1, 2, 12, 0), datetime(2000, 1, 3, 12, 0)]
})
df = spark.createDataFrame(pandas_df)
df
"""
Explanation: Create a PySpark DataFrame from a pandas DataFrame
End of explanation
"""
rdd = spark.sparkContext.parallelize([
(1, 2., 'string1', date(2000, 1, 1), datetime(2000, 1, 1, 12, 0)),
(2, 3., 'string2', date(2000, 2, 1), datetime(2000, 1, 2, 12, 0)),
(3, 4., 'string3', date(2000, 3, 1), datetime(2000, 1, 3, 12, 0))
])
df = spark.createDataFrame(rdd, schema=['a', 'b', 'c', 'd', 'e'])
df
"""
Explanation: Create a PySpark DataFrame from an RDD consisting of a list of tuples.
End of explanation
"""
# All of the DataFrames above yield the same result.
df.show()
df.printSchema()
"""
Explanation: The DataFrames created above all have the same results and schema.
End of explanation
"""
df.show(1)
"""
Explanation: Viewing Data
The top rows of a DataFrame can be displayed using DataFrame.show().
End of explanation
"""
spark.conf.set('spark.sql.repl.eagerEval.enabled', True)
df
"""
Explanation: Alternatively, you can enable spark.sql.repl.eagerEval.enabled configuration for the eager evaluation of PySpark DataFrame in notebooks such as Jupyter. The number of rows to show can be controlled via spark.sql.repl.eagerEval.maxNumRows configuration.
End of explanation
"""
df.show(1, vertical=True)
"""
Explanation: The rows can also be shown vertically. This is useful when rows are too long to show horizontally.
End of explanation
"""
df.columns
df.printSchema()
"""
Explanation: You can see the DataFrame's schema and column names as follows:
End of explanation
"""
df.select("a", "b", "c").describe().show()
"""
Explanation: Show the summary of the DataFrame
End of explanation
"""
df.collect()
"""
Explanation: DataFrame.collect() collects the distributed data to the driver side as the local data in Python. Note that this can throw an out-of-memory error when the dataset is too large to fit in the driver side because it collects all the data from executors to the driver side.
End of explanation
"""
df.take(1)
"""
Explanation: In order to avoid throwing an out-of-memory exception, use DataFrame.take() or DataFrame.tail().
End of explanation
"""
df.toPandas()
"""
Explanation: PySpark DataFrame also provides the conversion back to a pandas DataFrame to leverage pandas API. Note that toPandas also collects all data into the driver side, which can easily cause an out-of-memory error when the data is too large to fit into the driver side.
End of explanation
"""
df.a
"""
Explanation: Selecting and Accessing Data
PySpark DataFrame is lazily evaluated and simply selecting a column does not trigger the computation but it returns a Column instance.
End of explanation
"""
from pyspark.sql import Column
from pyspark.sql.functions import upper
type(df.c) == type(upper(df.c)) == type(df.c.isNull())
"""
Explanation: In fact, most of column-wise operations return Columns.
End of explanation
"""
df.select(df.c).show()
"""
Explanation: These Columns can be used to select the columns from a DataFrame. For example, DataFrame.select() takes the Column instances that returns another DataFrame.
End of explanation
"""
df.withColumn('upper_c', upper(df.c)).show()
"""
Explanation: Assign new Column instance.
End of explanation
"""
df.filter(df.a == 1).show()
"""
Explanation: To select a subset of rows, use DataFrame.filter().
End of explanation
"""
import pandas as pd
from pyspark.sql.functions import pandas_udf
@pandas_udf('long')
def pandas_plus_one(series: pd.Series) -> pd.Series:
# Simply plus one by using pandas Series.
return series + 1
df.select(pandas_plus_one(df.a)).show()
"""
Explanation: Applying a Function
PySpark supports various UDFs and APIs to allow users to execute Python native functions. See also the latest Pandas UDFs and Pandas Function APIs. For instance, the example below allows users to directly use the APIs in a pandas Series within Python native function.
End of explanation
"""
def pandas_filter_func(iterator):
for pandas_df in iterator:
yield pandas_df[pandas_df.a == 1]
df.mapInPandas(pandas_filter_func, schema=df.schema).show()
"""
Explanation: Another example is DataFrame.mapInPandas which allows users directly use the APIs in a pandas DataFrame without any restrictions such as the result length.
End of explanation
"""
df = spark.createDataFrame([
['red', 'banana', 1, 10], ['blue', 'banana', 2, 20], ['red', 'carrot', 3, 30],
['blue', 'grape', 4, 40], ['red', 'carrot', 5, 50], ['black', 'carrot', 6, 60],
['red', 'banana', 7, 70], ['red', 'grape', 8, 80]], schema=['color', 'fruit', 'v1', 'v2'])
df.show()
"""
Explanation: Grouping Data
PySpark DataFrame also provides a way of handling grouped data by using the common approach, split-apply-combine strategy.
It groups the data by a certain condition, applies a function to each group, and then combines the results back into a DataFrame.
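The same split-apply-combine idea can be sketched in plain pandas (a hypothetical toy frame, not the Spark DataFrame used here):

```python
import pandas as pd

# Split by 'color', apply a mean to each group, combine the results.
pdf = pd.DataFrame({'color': ['red', 'blue', 'red', 'blue'],
                    'v1': [1, 2, 3, 4]})
group_means = pdf.groupby('color')['v1'].mean()
```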
End of explanation
"""
df.groupby('color').avg().show()
"""
Explanation: Grouping and then applying the avg() function to the resulting groups.
End of explanation
"""
def plus_mean(pandas_df):
return pandas_df.assign(v1=pandas_df.v1 - pandas_df.v1.mean())
df.groupby('color').applyInPandas(plus_mean, schema=df.schema).show()
"""
Explanation: You can also apply a Python native function against each group by using pandas API.
End of explanation
"""
df1 = spark.createDataFrame(
[(20000101, 1, 1.0), (20000101, 2, 2.0), (20000102, 1, 3.0), (20000102, 2, 4.0)],
('time', 'id', 'v1'))
df2 = spark.createDataFrame(
[(20000101, 1, 'x'), (20000101, 2, 'y')],
('time', 'id', 'v2'))
def asof_join(l, r):
return pd.merge_asof(l, r, on='time', by='id')
df1.groupby('id').cogroup(df2.groupby('id')).applyInPandas(
asof_join, schema='time int, id int, v1 double, v2 string').show()
"""
Explanation: Co-grouping and applying a function.
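The asof join itself can be sketched in plain pandas, without the Spark cogroup machinery (toy frames assumed for illustration):

```python
import pandas as pd

# For each left row, merge_asof picks the last right row whose 'time'
# is <= the left row's 'time', matching within each 'id'.
left = pd.DataFrame({'time': [1, 2, 3], 'id': [1, 1, 1], 'v1': [1.0, 2.0, 3.0]})
right = pd.DataFrame({'time': [2], 'id': [1], 'v2': ['x']})
joined = pd.merge_asof(left, right, on='time', by='id')
```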
End of explanation
"""
df.write.csv('foo.csv', header=True)
spark.read.csv('foo.csv', header=True).show()
"""
Explanation: Getting Data in/out
CSV is straightforward and easy to use. Parquet and ORC are efficient and compact file formats to read and write faster.
There are many other data sources available in PySpark such as JDBC, text, binaryFile, Avro, etc. See also the latest Spark SQL, DataFrames and Datasets Guide in Apache Spark documentation.
CSV
End of explanation
"""
df.write.parquet('bar.parquet')
spark.read.parquet('bar.parquet').show()
"""
Explanation: Parquet
End of explanation
"""
df.write.orc('zoo.orc')
spark.read.orc('zoo.orc').show()
"""
Explanation: ORC
End of explanation
"""
df.createOrReplaceTempView("tableA")
spark.sql("SELECT count(*) from tableA").show()
"""
Explanation: Working with SQL
DataFrame and Spark SQL share the same execution engine so they can be interchangeably used seamlessly. For example, you can register the DataFrame as a table and run a SQL easily as below:
End of explanation
"""
@pandas_udf("integer")
def add_one(s: pd.Series) -> pd.Series:
return s + 1
spark.udf.register("add_one", add_one)
spark.sql("SELECT add_one(v1) FROM tableA").show()
"""
Explanation: In addition, UDFs can be registered and invoked in SQL out of the box:
End of explanation
"""
from pyspark.sql.functions import expr
df.selectExpr('add_one(v1)').show()
df.select(expr('count(*)') > 0).show()
"""
Explanation: These SQL expressions can directly be mixed and used as PySpark columns.
End of explanation
"""
|
kbrose/article-tagging | lib/notebooks/explorations.ipynb | mit | print('# total articles :', df.shape[0])
print('# tagged articles :', df.loc[:, 'OEMC':'TASR'].any(1).sum())
print('# not relevant articles:', (~df['relevant']).sum())
print('# w/ no information :', df.shape[0] - df.loc[:, 'OEMC':'TASR'].any(1).sum() - (~df['relevant']).sum())
print('# w/ location info :', df['locations'].apply(bool).sum())
# this number should be 0, but it isn't...
print('\n# articles tagged but not relevant :', (~df['relevant'] & df.loc[:, 'OEMC':'TASR'].any(1)).sum())
categories_df = ld.load_categories()
categories_df = categories_df.loc[:, ['abbreviation', 'category_name']]
categories_df.set_index('abbreviation', drop=True, inplace=True)
categories_df['counts'] = df.loc[:, 'OEMC':'TASR'].apply(sum, reduce=True)
categories_df.sort_values(by='counts')
df.loc[:, 'OEMC':'TASR'].apply(sum, reduce=True).sort_values().plot(kind='bar')
g = plt.gca().yaxis.grid(True)
corrs = df.loc[:, 'OEMC':'TASR'].corr()
for i in range(corrs.shape[0]):
corrs.iloc[i, i] = np.nan
cmap = matplotlib.cm.viridis
cmap.set_bad((.3, .3, .3),1.)
fig, ax = plt.subplots(figsize=(8, 8))
ax.matshow(np.ma.masked_invalid(corrs.values), cmap=cmap)
ax.grid(True, color=(.9, .9, .9), alpha=.1)
plt.xticks(range(len(corrs.columns)), corrs.columns, rotation=90);
plt.yticks(range(len(corrs.columns)), corrs.columns);
"""
Explanation: Article Tags Exploration
End of explanation
"""
# Print a random article just to see what they look like.
i = np.random.choice(df.shape[0])
print('ARTICLE ID:', df.index[i], '\n------------------')
print(df.iloc[i]['bodytext'])
"""
Explanation: Text Contents Exploration
End of explanation
"""
|
patryk-oleniuk/emotion_recognition | temp/emotion_recognition_Oleniuk_Galotta_v1.ipynb | gpl-3.0 | import random
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import csv
import scipy.misc
import time
import collections
import os
import utils as ut
import importlib
import copy
importlib.reload(ut)
%matplotlib inline
plt.rcParams['figure.figsize'] = (20.0, 20.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Load the CSV data
emotions_dataset_dir = 'fer2013_full.csv'
#obtaining the number of line of the csv file
file = open(emotions_dataset_dir)
numline = len(file.readlines())
print ('Number of data in the dataset:',numline)
"""
Explanation: A Network Tour of Data Science, EPFL 2016
Project: Facial Emotion Recognition
Dataset taken from: kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge
<br>
<br>
students: Patryk Oleniuk, Carmen Galotta
The project presented here is an algorithm to recognize and detect emotions from a face picture.
Of course, recognizing facial emotions is usually easy for humans, even if it is sometimes hard to tell exactly how a person feels; what the human brain understands easily, however, is difficult for a machine to emulate.
The aim of this project is to classify faces into discrete human emotions. Given the success of Convolutional Neural Networks in image classification tasks, we thought they would be a good fit for face emotion recognition as well.
The dataset has been taken from the kaggle competition and consists of 35k greyscale images of size 48x48 pixels, each already labeled with a number coding for one of the emotion classes, namely:
0-Angry<br>
1-Disgust<br>
2-Fear<br>
3-Happy<br>
4-Sad<br>
5-Surprise<br>
6-Neutral<br>
The faces are mostly centered in the image.
Configuration, dataset file
End of explanation
"""
#Load the file in csv
ifile = open(emotions_dataset_dir, "rt")
reader = csv.reader(ifile)
hist_threshold = 350 # images above this threshold will be removed
hist_div = 100 #parameter of the histogram
print('Loading Images. It may take a while, depending on the database size.')
images, emotions, strange_im, num_strange, num_skipped = ut.load_dataset(reader, numline, hist_div, hist_threshold)
ifile.close()
print('Skipped', num_skipped, 'happy class images.')
print(str( len(images) ) + ' are left after \'strange images\' removal.')
print('Deleted ' + str( num_strange ) + ' strange images. Images are shown below')
# showing strange images
plt.rcParams['figure.figsize'] = (5.0, 5.0) # set default size of plots
idxs = np.random.choice(range(1,num_strange ), 6, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i
plt.subplot(1, 6, plt_idx+1)
plt.imshow(strange_im[idx])
plt.axis('off')
if(i == 0):
plt.title('Some of the images removed from dataset (max(histogram) thresholded)')
plt.show()
"""
Explanation: Load the data from *.csv file
The first step is to load the data from the .csv file. <br> The format of the csv line is<br>
class{0,1,2,3,4,5,6},pix0 pix2304,DataUsage(not used)<br>
e.g.<br>
2,234 1 34 23 ..... 234 256 0,Training<br>
The picture is always 48x48 pixels, 0-255 greyscale.
Data cleaning:
1.Remove strange data
In the database there are some images that are not good (e.g. pixelated, irrelevant, or taken from animations).
We filter them by looking at the maximum of the image histogram: if an image is very homogeneous, the maximum value of its histogram will be very high, and when that value exceeds a certain threshold the image is discarded. Of course this also removes some relevant images, but it is better for the CNN not to consider them.
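A minimal sketch of this filter, assuming the hist_div and hist_threshold parameters above (the notebook's actual implementation lives in ut.load_dataset):

```python
import numpy as np

# An image is flagged as 'strange' when its tallest histogram bin
# exceeds the threshold, i.e. the image is too homogeneous.
def is_strange(img, hist_div=100, hist_threshold=350):
    counts, _ = np.histogram(img, hist_div)
    return counts.max() > hist_threshold

flat_img = np.full((48, 48), 128)                                # nearly uniform -> removed
noisy_img = np.random.default_rng(0).integers(0, 256, (48, 48))  # varied -> kept
```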
2.Merge class 0 and 1
We discovered that class 1 occurs very rarely in the dataset. This class (disgust) is very similar to anger, which is why we merged classes 0 and 1.
Therefore, the recognized emotions and labels are reduced to 6:<br>
0-(Angry + Disgust)<br>
1-Fear<br>
2-Happy<br>
3-Sad<br>
4-Surprise<br>
5-Neutral<br>
End of explanation
"""
classes = [0,1,2,3,4,5]
str_emotions = ['angry','scared','happy','sad','surprised','normal']
num_classes = len(classes)
samples_per_class = 6
plt.rcParams['figure.figsize'] = (10.0, 10.0) # set default size of plots
for y, cls in enumerate(classes):
idxs = np.flatnonzero(emotions == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(images[idx])
y_h, x_h = np.histogram( images[idx], hist_div );
plt.axis('off')
if(i == 0):
plt.title(str_emotions[y] )
plt.show()
"""
Explanation: Explore the correct data
Here some random pictures from each class are plotted. As we can see, some faces wear glasses or are partly covered by long hair, and some pictures are taken from the side. This is, of course, an additional challenge when training the neural network.
End of explanation
"""
print('number of clean data:' + str(images.shape[0]) + ' 48x48 pix , 0-255 greyscale images')
n_all = images.shape[0];
n_train = 64; # number of data for training and for batch
# dividing the input data
train_data_orig = images[0:n_all-n_train,:,:]
train_labels = emotions[0:n_all-n_train]
test_data_orig = images[n_all-n_train:n_all,:,:]
test_labels = emotions[n_all-n_train:n_all]
# Convert to float
train_data_orig = train_data_orig.astype('float32')
y_train = train_labels.astype('float32')
test_data_orig = test_data_orig.astype('float32')
y_test = test_labels.astype('float32')
print('orig train data ' + str(train_data_orig.shape))
print('orig train labels ' + str(train_labels.shape) + 'from ' + str(train_labels.min()) + ' to ' + str(train_labels.max()) )
print('orig test data ' + str(test_data_orig.shape))
print('orig test labels ' + str(test_labels.shape)+ 'from ' + str(test_labels.min()) + ' to ' + str(test_labels.max()) )
for i in range(0, 6):
    print('TRAIN: number of', i, 'labels', len(train_labels[train_labels == i]))
for i in range(0, 6):
    print('TEST: number of', i, 'labels', len(test_labels[test_labels == i]))
"""
Explanation: Prepare the Data for CNN
Here the initial data have been divided to create train and test data. <br>
These two subsets each have associated labels, used to train the neural network and to test its accuracy on the test data.
The number of images used for each category of emotion is shown for both the train and the test data.<br>
The batch size has been set to 64 because, after analyzing the performance, we found that decreasing the batch size actually improved the accuracy.
End of explanation
"""
# Data pre-processing
n = train_data_orig.shape[0];
train_data = np.zeros([n,48**2])
for i in range(n):
xx = train_data_orig[i,:,:]
xx -= np.mean(xx)
xx /= np.linalg.norm(xx)
train_data[i,:] = xx.reshape(2304); #np.reshape(xx,[-1])
n = test_data_orig.shape[0]
test_data = np.zeros([n,48**2])
for i in range(n):
xx = test_data_orig[i,:,:]
xx -= np.mean(xx)
xx /= np.linalg.norm(xx)
test_data[i] = np.reshape(xx,[-1])
#print(train_data.shape)
#print(test_data.shape)
#print(train_data_orig[0][2][2])
#print(test_data[0][2])
plt.rcParams['figure.figsize'] = (2.0, 2.0) # set default size of plots
plt.imshow(train_data[4].reshape([48,48]));
plt.title('example image after processing');
# Convert label values to one_hot vector
train_labels = ut.convert_to_one_hot(train_labels,num_classes)
test_labels = ut.convert_to_one_hot(test_labels,num_classes)
print('train labels shape',train_labels.shape)
print('test labels shape',test_labels.shape)
"""
Explanation: As we can see, the number of training images for each class is different, that is why we decided to skip 3095 "happy" class images.<br>
In fact, an unbalanced number of images per class could lead the network to be overly accurate on a single class rather than performing well on each one.
Prepare the data for CNN
Here the data are slightly transformed so they can be fed correctly into the CNN. <br>
Each image is converted to float, has its mean subtracted, and is normalized.<br>
Finally, the class label values are converted to binary one-hot vectors.
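The one-hot conversion done by ut.convert_to_one_hot can be sketched as follows (a reimplementation assuming integer labels in 0..num_classes-1):

```python
import numpy as np

# Each label k becomes a row with a 1 in column k and 0 elsewhere.
def to_one_hot(labels, num_classes):
    one_hot = np.zeros((labels.size, num_classes), dtype='float32')
    one_hot[np.arange(labels.size), labels] = 1.0
    return one_hot

example = to_one_hot(np.array([0, 2, 1]), 3)
```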
End of explanation
"""
# Define computational graph (CG)
batch_size = n_train # batch size
d = train_data.shape[1] # data dimensionality
nc = 6 # number of classes
# Inputs
xin = tf.placeholder(tf.float32,[batch_size,d]);
y_label = tf.placeholder(tf.float32,[batch_size,nc]);
#Size and number of filters
K0 = 8 # size of the patch
F0 = 64 # number of filters
ncl0 = K0*K0*F0
Wcl0 = tf.Variable(tf.truncated_normal([K0,K0,1,F0], stddev=tf.sqrt(2./tf.to_float(ncl0)) )); print('Wcl=',Wcl0.get_shape())
bcl0 = bias_variable([F0]); print('bcl0=',bcl0.get_shape()) #in ReLu case, small positive bias added to prevent killing of gradient when input is negative.
#Reshaping the input to size 48x48
x_2d0 = tf.reshape(xin, [-1,48,48,1]); print('x_2d=',x_2d0.get_shape())
# Convolutional layer
x = tf.nn.conv2d(x_2d0, Wcl0, strides=[1, 1, 1, 1], padding='SAME')
x += bcl0; print('x2=',x.get_shape())
# ReLU activation
x = tf.nn.relu(x)
# Fully Connected layer
nfc = 48*48*F0
x = tf.reshape(x, [batch_size,-1]); print('x3=',x.get_shape())
Wfc = tf.Variable(tf.truncated_normal([nfc,nc], stddev=tf.sqrt(2./tf.to_float(nfc+nc)) )); print('Wfc=',Wfc.get_shape())
bfc = tf.Variable(tf.zeros([nc])); print('bfc=',bfc.get_shape())
y = tf.matmul(x, Wfc); print('y1=',y.get_shape())
y += bfc; print('y2=',y.get_shape())
# Softmax
y = tf.nn.softmax(y); print('y3(SOFTMAX)=',y.get_shape())
# Loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))
total_loss = cross_entropy
# Optimization scheme
train_step = tf.train.AdamOptimizer(0.004).minimize(total_loss)
# Accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Run Computational Graph
n = train_data.shape[0]
indices = collections.deque()
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(1001):
# Batch extraction
if len(indices) < batch_size:
indices.extend(np.random.permutation(n))
idx = [indices.popleft() for i in range(batch_size)]
batch_x, batch_y = train_data[idx,:], train_labels[idx]
#print(batch_x.shape,batch_y.shape)
    # Run CG for one training step
_,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={xin: batch_x, y_label: batch_y})
# Run CG for test set
if not i%100:
print('\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o)
acc_test = sess.run(accuracy, feed_dict={xin: test_data, y_label: test_labels})
print('test accuracy=',acc_test)
"""
Explanation: Model 1 - Overfitting the data
In the first model, a baseline softmax classifier is implemented, using a single convolutional layer and one fully connected layer. For this initial baseline,
no regularization, dropout, or batch normalization is used.
The equation of the classifier is simply:
$$
y=\textrm{softmax}(ReLU( x \ast W_1+b_1)W_2+b_2)
$$
For this first attempt, 64 filters of size 8x8 have been applied.
The optimization scheme chosen is the AdamOptimizer with a learning rate of 0.004.
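The softmax and cross-entropy loss of this classifier can be sketched in NumPy (hypothetical logits; the actual graph above uses TensorFlow ops):

```python
import numpy as np

# Numerically stable softmax over the class dimension.
def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1]])
probs = softmax(logits)
# Cross-entropy loss for a true label of class 0, as in the Loss node above.
loss = -np.log(probs[0, 0])
```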
End of explanation
"""
#Definition of function that have been used in the CNN
d = train_data.shape[1]
def weight_variable2(shape, nc10):
initial2 = tf.random_normal(shape, stddev=tf.sqrt(2./tf.to_float(ncl0)) )
return tf.Variable(initial2)
def conv2dstride2(x,W):
return tf.nn.conv2d(x,W,strides=[1, 2, 2, 1], padding='SAME')
def conv2d(x,W):
return tf.nn.conv2d(x,W,strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=1/np.sqrt(d/2) )
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.01,shape=shape)
return tf.Variable(initial)
"""
Explanation: Comments
We discarded this first model because it already overfits the training data at iteration 400, while reaching a test accuracy of only 28%.
To prevent overfitting, the following models apply different techniques such as dropout and pooling, as well as deeper networks with more layers.
This should help and improve the model, since the first convolutional layer only extracts the simplest characteristics of the image, such as edges, lines and curves. Adding layers improves performance because deeper layers detect high-level features, which in this case are very relevant since we deal with facial expressions.
Techniques
To prevent the overfitting problem observed with the first model the following techniques have been studied and used.
1.Choosing Hyperparameters
One of the most challenging choices to be made while constructing a convolutional neural network is the number and size of the filters to be used, as well as the number of layers to employ.<br>
Of course there is no standard design: it depends on the dataset, the features of the images, and the complexity of the task.
For our purposes, we decided to start with a larger filter size and then decrease it.
The activation employed is ReLU (Rectified Linear Unit), which is the most widely used and most efficient choice since it helps alleviate the vanishing gradient problem.
The ReLU layer applies the non-linear function f(x) = max(0, x) to the input, effectively eliminating all negative activations.
This design choice also explains why we initialized the bias vector b to a small positive value: it prevents the gradient from being killed when the input is negative.
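A one-line NumPy illustration of the ReLU just described:

```python
import numpy as np

# f(x) = max(0, x): negative activations are zeroed, positives pass through.
x = np.array([-2.0, -0.5, 0.0, 1.5])
relu_out = np.maximum(0, x)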
2.Pooling Layers
A pooling layer can be added after the ReLU ones; it is also known as a downsampling layer.
The intuitive reasoning behind this layer is that once we know that a specific feature is in the original input volume, its exact location is not as important as its relative location to the other features.
This layer drastically reduces the spatial dimensions of the input volume and is used to reduce the computation cost and to control overfitting.
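The 2x2 max pooling described here can be sketched in plain NumPy (assuming even spatial dimensions):

```python
import numpy as np

# Each non-overlapping 2x2 patch is reduced to its maximum,
# halving both spatial dimensions.
def max_pool_2x2_np(img):
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

patch = np.arange(1, 17).reshape(4, 4)
pooled = max_pool_2x2_np(patch)
```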
3.Dropout Layers
Finally, dropout layers can be used to deactivate a random set of activations in a layer with a defined probability; in the forward pass those activations are treated as zero.
Also this technique can help to prevent the overfitting problem.
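Dropout can be sketched as masking activations with a random binary mask (inverted-dropout scaling shown, a simplification of what tf.nn.dropout does):

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero each activation with probability 1 - keep_prob; surviving
# activations are rescaled so the expected sum is unchanged.
def dropout(activations, keep_prob):
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

dropped = dropout(np.ones(1000), 0.5)
```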
Ref: https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks-Part-2/
Advanced computational graphs - functions
End of explanation
"""
tf.reset_default_graph()
# Define computational graph (CG)
batch_size = n_train # batch size
d = train_data.shape[1] # data dimensionality
nc = 6 # number of classes
# Inputs
xin = tf.placeholder(tf.float32,[batch_size,d])
y_label = tf.placeholder(tf.float32,[batch_size,nc])
# Convolutional layer
K0 = 7 # size of the patch
F0 = 16 # number of filters
ncl0 = K0*K0*F0
K1 = 5 # size of the patch
F1 = 16 # number of filters
ncl0 = K1*K1*F1
K2 = 3 # size of the patch
F2 = 2 # number of filters
ncl0 = K2*K2*F2
nfc = int(48*48*F0/4)
nfc1 = int(48*48*F1/4)
nfc2 = int(48*48*F2/4)
keep_prob_input=tf.placeholder(tf.float32)
#First set of conv followed by conv stride 2 operation and dropout 0.5
W_conv1=weight_variable([K0,K0,1,F0]); print('W_conv1=',W_conv1.get_shape())
b_conv1=bias_variable([F0]); print('b_conv1=',b_conv1.get_shape())
x_2d0 = tf.reshape(xin, [-1,48,48,1]); print('x_2d0=',x_2d0.get_shape())
h_conv1=tf.nn.relu(conv2d(x_2d0,W_conv1)+b_conv1); print('h_conv1=',h_conv1.get_shape())
h_conv1= tf.nn.dropout(h_conv1,keep_prob_input);
# 2nd convolutional layer
W_conv2=weight_variable([K0,K0,F0,F0]); print('W_conv2=',W_conv2.get_shape())
b_conv2=bias_variable([F0]); print('b_conv2=',b_conv2.get_shape())
h_conv2 = tf.nn.relu(conv2d(h_conv1,W_conv2)+b_conv2); print('h_conv2=',h_conv2.get_shape())
h_conv2_pooled = max_pool_2x2(h_conv2); print('h_conv2_pooled=',h_conv2_pooled.get_shape())
# reshaping for fully connected
h_conv2_pooled_rs = tf.reshape(h_conv2_pooled, [batch_size,-1]); print('x_rs',h_conv2_pooled_rs.get_shape());
W_norm3 = weight_variable([nfc1, nfc]); print('W_norm3=',W_norm3.get_shape())
b_conv3 = bias_variable([nfc1]); print('b_conv3=',b_conv3.get_shape())
# fully connected layer
h_full3 = tf.matmul( W_norm3, tf.transpose(h_conv2_pooled_rs) ); print('h_full3=',h_full3.get_shape())
h_full3 = tf.transpose(h_full3); print('h_full3=',h_full3.get_shape())
h_full3 += b_conv3; print('h_full3=',h_full3.get_shape())
h_full3=tf.nn.relu(h_full3); print('h_full3=',h_full3.get_shape())
h_full3=tf.nn.dropout(h_full3,keep_prob_input); print('h_full3_dropout=',h_full3.get_shape())
#reshaping back to conv
h_full3_rs = tf.reshape(h_full3, [batch_size, 24,24,-1]); print('h_full3_rs=',h_full3_rs.get_shape())
#Second set of conv followed by conv stride 2 operation
W_conv4=weight_variable([K1,K1,F1,F1]); print('W_conv4=',W_conv4.get_shape())
b_conv4=bias_variable([F1]); print('b_conv4=',b_conv4.get_shape())
h_conv4=tf.nn.relu(conv2d(h_full3_rs,W_conv4)+b_conv4); print('h_conv4=',h_conv4.get_shape())
h_conv4 = max_pool_2x2(h_conv4); print('h_conv4_pooled=',h_conv4.get_shape())
# reshaping for fully connected
h_conv4_pooled_rs = tf.reshape(h_conv4, [batch_size,-1]); print('x2_rs',h_conv4_pooled_rs.get_shape());
W_norm4 = weight_variable([ 2304, nc]); print('W_norm4=',W_norm4.get_shape())
b_conv4 = tf.Variable(tf.zeros([nc])); print('b_conv4=',b_conv4.get_shape())
# fully connected layer
h_full4 = tf.matmul( h_conv4_pooled_rs, W_norm4 ); print('h_full4=',h_full4.get_shape())
h_full4 += b_conv4; print('h_full4=',h_full4.get_shape())
y = h_full4;
## Softmax
y = tf.nn.softmax(y); print('y(SOFTMAX)=',y.get_shape())
# Loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))
total_loss = cross_entropy
# Optimization scheme
train_step = tf.train.AdamOptimizer(0.001).minimize(total_loss)
# Accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Run Computational Graph
n = train_data.shape[0]
indices = collections.deque()
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(15001):
# Batch extraction
if len(indices) < batch_size:
indices.extend(np.random.permutation(n))
idx = [indices.popleft() for i in range(batch_size)]
batch_x, batch_y = train_data[idx,:], train_labels[idx]
#print(batch_x.shape,batch_y.shape)
    # Run CG for one training step
_,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={xin: batch_x, y_label: batch_y, keep_prob_input: 0.2})
# Run CG for test set
if not i%50:
print('\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o)
acc_test = sess.run(accuracy, feed_dict = {xin: test_data, y_label: test_labels, keep_prob_input: 1.0})
print('test accuracy=',acc_test)
"""
Explanation: Model 2 - 4 x Convolutional Layers, 1x Fully Connected
The second model consists of a 4-layer convolutional neural network with a final fully connected layer.
$$
x= maxpool2x2( ReLU( ReLU( x* W_1+b_1) * W_2+b_2))$$ 2 times (also for $W_3,b_3 W_4,b_4$)
for each block, a pooling layer is added after the ReLU, and this results in a decreasing dimensionality (from 48 to 12)
$$
y=\textrm{softmax} {( x W_5+b_5)}$$
For layers 1, 2 and 3 the filters used are 16 with a dimension of 7x7, while for layers 4, 5 and 6 the dimension is 5x5.
A dropout layer is also applied.
The optimization scheme used is the AdamOptimizer with a learning rate of 0.001.
End of explanation
"""
tf.reset_default_graph()
# implementation of Conv-Relu-COVN-RELU - pool
# based on : http://cs231n.github.io/convolutional-networks/
# Define computational graph (CG)
batch_size = n_train # batch size
d = train_data.shape[1] # data dimensionality
nc = 6 # number of classes
# Inputs
xin = tf.placeholder(tf.float32,[batch_size,d]); #print('xin=',xin,xin.get_shape())
y_label = tf.placeholder(tf.float32,[batch_size,nc]); #print('y_label=',y_label,y_label.get_shape())
#for the first conc-conv
# Convolutional layer
K0 = 8 # size of the patch
F0 = 22 # number of filters
ncl0 = K0*K0*F0
#for the second conc-conv
K1 = 4 # size of the patch
F1 = F0 # number of filters
ncl1 = K1*K1*F1
#drouput probability
keep_prob_input=tf.placeholder(tf.float32)
#1st set of conv followed by conv2d operation and dropout 0.5
W_conv1=weight_variable([K0,K0,1,F0]); print('W_conv1=',W_conv1.get_shape())
b_conv1=bias_variable([F0]); print('b_conv1=',b_conv1.get_shape())
x_2d1 = tf.reshape(xin, [-1,48,48,1]); print('x_2d1=',x_2d1.get_shape())
#conv2d
h_conv1=tf.nn.relu(conv2d(x_2d1, W_conv1) + b_conv1); print('h_conv1=',h_conv1.get_shape())
#h_conv1= tf.nn.dropout(h_conv1,keep_prob_input);
# 2nd convolutional layer + max pooling
W_conv2=weight_variable([K0,K0,F0,F0]); print('W_conv2=',W_conv2.get_shape())
b_conv2=bias_variable([F0]); print('b_conv2=',b_conv2.get_shape())
# conv2d + max pool
h_conv2 = tf.nn.relu(conv2d(h_conv1,W_conv2)+b_conv2); print('h_conv2=',h_conv2.get_shape())
h_conv2_pooled = max_pool_2x2(h_conv2); print('h_conv2_pooled=',h_conv2_pooled.get_shape())
#3rd set of conv
W_conv3=weight_variable([K0,K0,F0,F0]); print('W_conv3=',W_conv3.get_shape())
b_conv3=bias_variable([F1]); print('b_conv3=',b_conv3.get_shape())
x_2d3 = tf.reshape(h_conv2_pooled, [-1,24,24,F0]); print('x_2d3=',x_2d3.get_shape())
#conv2d
h_conv3=tf.nn.relu(conv2d(x_2d3, W_conv3) + b_conv3); print('h_conv3=',h_conv3.get_shape())
# 4th convolutional layer
W_conv4=weight_variable([K1,K1,F1,F1]); print('W_conv4=',W_conv4.get_shape())
b_conv4=bias_variable([F1]); print('b_conv4=',b_conv4.get_shape())
#conv2d + max pool 4x4
h_conv4 = tf.nn.relu(conv2d(h_conv3,W_conv4)+b_conv4); print('h_conv4=',h_conv4.get_shape())
h_conv4_pooled = max_pool_2x2(h_conv4); print('h_conv4_pooled=',h_conv4_pooled.get_shape())
h_conv4_pooled = max_pool_2x2(h_conv4_pooled); print('h_conv4_pooled=',h_conv4_pooled.get_shape())
#5th set of conv
W_conv5=weight_variable([K1,K1,F1,F1]); print('W_conv5=',W_conv5.get_shape())
b_conv5=bias_variable([F1]); print('b_conv5=',b_conv5.get_shape())
x_2d5 = tf.reshape(h_conv4_pooled, [-1,6,6,F1]); print('x_2d5=',x_2d5.get_shape())
#conv2d
h_conv5=tf.nn.relu(conv2d(x_2d5, W_conv5) + b_conv5); print('h_conv5=',h_conv5.get_shape())
# 6th convolutional layer
W_conv6=weight_variable([K1,K1,F1,F1]); print('W_con6=',W_conv6.get_shape())
b_conv6=bias_variable([F1]); print('b_conv6=',b_conv6.get_shape())
b_conv6= tf.nn.dropout(b_conv6,keep_prob_input);
#conv2d + max pool 4x4
h_conv6 = tf.nn.relu(conv2d(h_conv5,W_conv6)+b_conv6); print('h_conv6=',h_conv6.get_shape())
h_conv6_pooled = max_pool_2x2(h_conv6); print('h_conv6_pooled=',h_conv6_pooled.get_shape())
# reshaping for fully connected
h_conv6_pooled_rs = tf.reshape(h_conv6, [batch_size,-1]); print('x2_rs',h_conv6_pooled_rs.get_shape());
W_norm6 = weight_variable([ 6*6*F1, nc]); print('W_norm6=',W_norm6.get_shape())
b_norm6 = bias_variable([nc]); print('b_conv6=',b_norm6.get_shape())
# fully connected layer
h_full6 = tf.matmul( h_conv6_pooled_rs, W_norm6 ); print('h_full6=',h_full6.get_shape())
h_full6 += b_norm6; print('h_full6=',h_full6.get_shape())
y = h_full6;
## Softmax
y = tf.nn.softmax(y); print('y3(SOFTMAX)=',y.get_shape())
# Loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))
total_loss = cross_entropy
# Optimization scheme
train_step = tf.train.AdamOptimizer(0.001).minimize(total_loss)
# Accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Run Computational Graph
n = train_data.shape[0]
indices = collections.deque()
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(20001):
# Batch extraction
if len(indices) < batch_size:
indices.extend(np.random.permutation(n))
idx = [indices.popleft() for i in range(batch_size)]
batch_x, batch_y = train_data[idx,:], train_labels[idx]
#print(batch_x.shape,batch_y.shape)
    # Run CG for one training step
_,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={xin: batch_x, y_label: batch_y, keep_prob_input: 0.5})
# Run CG for test set
if not i%100:
print('\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o)
acc_test = sess.run(accuracy, feed_dict = {xin: test_data, y_label: test_labels, keep_prob_input: 1.0})
print('test accuracy=',acc_test)
"""
Explanation: Comments
As we can see, this second model performs better than the first one: it reaches a test accuracy of up to 35% while still overfitting.<br>
Unfortunately, after a few iterations it stopped learning, getting stuck at a test accuracy of 0.15625.
For this reason, this model has also been discarded.
Model 3: Computational graph - 6 Layers, Conv-Relu-Maxpool, 1 Fully Connected L.
The third model consists of a 6-layer convolutional neural network with a final fully connected layer.
$$
x= maxpool2x2( ReLU( ReLU( x* W_1+b_1) * W_2+b_2))$$ 2 times (also for $W_3,b_3 W_4,b_4$)
for each block, a pooling layer is added after the ReLU, and this results in a decreasing dimensionality (from 48 to 6)
$$
ReLU( x* W_5+b_5) $$ which is 1 additional conv layer and finally a fully connected layer at the end.
$$
y=\textrm{softmax} {( x W_6+b_1)}$$
For layers 1, 2 and 3 the filters used are 22 with a dimension of 8x8, while for layers 4, 5 and 6 the dimension is 4x4.
A dropout layer is also applied.
The optimization scheme used is the AdamOptimizer with a learning rate of 0.001.
End of explanation
"""
# Add ops to save and restore all the variables.
saver = tf.train.Saver()
# Save the variables to disk.
save_path = saver.save(sess, "model_6layers.ckpt")
print("Model saved in file: %s" % save_path)
"""
Explanation: Comments
This third model reaches a test accuracy of up to 56%, which is the best result we could get.
We can also see that it somewhat reduces the overfitting problem observed with the previous models.
Saving the trained graph in TF file
Ref:https://www.tensorflow.org/how_tos/variables/
End of explanation
"""
# calculating accuracy for each class separately for the test set
result_cnn = sess.run([y], feed_dict = {xin: test_data, keep_prob_input: 1.0})
#result = sess.run(y, feed_dict={xin: test_data, keep_prob_input: 1.0})
tset = test_labels.argmax(1);
result = np.asarray(result_cnn[:][0]).argmax(1);
for i in range (0,nc):
print('accuracy',str_emotions[i]+str(' '), '\t',ut.calc_partial_accuracy(tset, result, i))
"""
Explanation: Class Accuracy
To see how our network behaves for each class, the accuracy is computed separately for every one of them.<br>
As we can see it differs across emotions, with the highest accuracy for the surprised class (72%) and the lowest for the sad one (31%).
End of explanation
"""
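The per-class figures above can be reproduced with plain NumPy. This is a sketch under the assumption that `ut.calc_partial_accuracy` computes the per-class recall, i.e. the fraction of samples of a given true class that were predicted correctly; the function name `per_class_accuracy` is ours.

```python
import numpy as np

def per_class_accuracy(true_labels, pred_labels, cls):
    # Fraction of samples whose true label is `cls` that were predicted correctly
    mask = (true_labels == cls)
    return float(np.mean(pred_labels[mask] == true_labels[mask]))

true_y = np.array([0, 0, 1, 1, 1, 2])
pred_y = np.array([0, 1, 1, 1, 0, 2])
print(per_class_accuracy(true_y, pred_y, 1))  # 2 of the 3 class-1 samples are correct
```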
#faces, marked_img = ut.get_faces_from_img('big_bang.png');
faces, marked_img = ut.get_faces_from_img('camera');
# if some face was found in the image
if(len(faces)):
#creating the blank test vector
data_orig = np.zeros([n_train, 48,48])
#putting face data into the vector (only first few)
for i in range(0, len(faces)):
data_orig[i,:,:] = ut.contrast_stretch(faces[i,:,:]);
#preparing image and putting it into the batch
n = data_orig.shape[0];
data = np.zeros([n,48**2])
for i in range(n):
xx = data_orig[i,:,:]
xx -= np.mean(xx)
xx /= np.linalg.norm(xx)
data[i,:] = xx.reshape(2304); #np.reshape(xx,[-1])
result = sess.run([y], feed_dict={xin: data, keep_prob_input: 1.0})
plt.rcParams['figure.figsize'] = (10.0, 10.0) # set default size of plots
for i in range(0, len(faces)):
emotion_nr = np.argmax(result[0][i]);
plt_idx = (2*i)+1;
plt.subplot( 5, 2*len(faces)/5+1, plt_idx)
plt.imshow(np.reshape(data[i,:], (48,48)))
plt.axis('off')
plt.title(str_emotions[emotion_nr])
ax = plt.subplot(5, 2*len(faces)/5+1, plt_idx +1)
ax.bar(np.arange(nc) , result[0][i])
ax.set_xticklabels(str_emotions, rotation=45, rotation_mode="anchor")
ax.set_yticks([])
plt.show()
"""
Explanation: Feeding the CNN with some data (camera/file)
Finally, to test whether the model really works, we feed some new raw, unlabeled data into the neural network.
To do so, images can be taken from the internet or captured directly from the camera.
End of explanation
"""
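The per-face preparation in the loop above (contrast stretching aside) amounts to mean-centering each 48x48 patch, scaling it to unit L2 norm, and flattening it to a 2304-vector. A minimal NumPy sketch of that step (the helper name `preprocess_face` is ours):

```python
import numpy as np

def preprocess_face(face):
    """Mean-center, L2-normalize and flatten one 48x48 face patch."""
    x = face.astype(np.float64)
    x = x - np.mean(x)            # zero-mean
    x = x / np.linalg.norm(x)     # unit L2 norm
    return x.reshape(48 * 48)     # flatten to a 2304-vector

face = np.arange(48 * 48, dtype=np.float64).reshape(48, 48)
vec = preprocess_face(face)
print(vec.shape)  # (2304,)
```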
|
mtambos/springleaf | Springleaf - preprocess - string columns.ipynb | mit | %pylab inline
%load_ext autoreload
%autoreload 2
from __future__ import division
from collections import defaultdict, namedtuple
from datetime import datetime, timedelta
from functools import partial
import inspect
import json
import os
import re
import sys
import cPickle as pickle
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
def analyze_str_columns(cols, df, only_percent=False):
print 'Total samples: %s' % len(df)
for c in cols:
print '##############################'
VAR_df = df[[c, 'target']]
unique_vals = VAR_df[c].unique()
# NaNs are the only floats among the values
non_nan = [v for v in unique_vals if type(v) == str]
str_0 = []
str_1 = []
col_names = []
for u in unique_vals:
if type(u) == str:
col_mask = (VAR_df[c] == u)
else:
col_mask = VAR_df[c].isnull()
str_0.append(len(VAR_df[col_mask & (VAR_df['target'] == 0)]))
str_1.append(len(VAR_df[col_mask & (VAR_df['target'] == 1)]))
col_names.append('%s_%s'%(c,u))
VAR_df_counts = pd.DataFrame([str_0, str_1],
columns=col_names,
index=pd.Index([0, 1], name='target'))
if not only_percent:
print "------Counts-------"
print VAR_df_counts
print "----Percentages----"
print VAR_df_counts/VAR_df_counts.sum()*100
"""
Explanation: Import useful stuff and define ancillary functions
End of explanation
"""
if os.name == 'nt':
TRAIN_PATH = r'D:\train.csv'
PTRAIN_PATH = r'D:\train_preprocessed_float_string.csv'
TEST_PATH = r'D:\test.csv'
GOOGNEWS_PATH = r'D:\GoogleNews-vectors-negative300.bin.gz'
VOCAB_PATH = r'D:\big.txt'
else:
TRAIN_PATH = r'/media/mtambos/speedy/train.csv'
PTRAIN_PATH = r'/media/mtambos/speedy/train_preprocessed_float_string.csv'
TEST_PATH = r'/media/mtambos/speedy/test.csv'
GOOGNEWS_PATH = r'/media/mtambos/speedy/GoogleNews-vectors-negative300.bin.gz'
VOCAB_PATH = r'/media/mtambos/speedy/big.txt'
df = pd.read_csv(PTRAIN_PATH, index_col="ID")
"""
Explanation: Load train data
Using pandas' read_csv, with the ID column as the index
End of explanation
"""
str_cols = [u'VAR_0001', u'VAR_0005', u'VAR_0044',
u'VAR_0200', u'VAR_0202', u'VAR_0214',
u'VAR_0216', u'VAR_0222', u'VAR_0237',
u'VAR_0274', u'VAR_0283', u'VAR_0305',
u'VAR_0325', u'VAR_0342', u'VAR_0352',
u'VAR_0353', u'VAR_0354', u'VAR_0404',
u'VAR_0466', u'VAR_0467', u'VAR_0493',
u'VAR_1934']
try:
str_cols = [c for c in str_cols if c in df.columns and df[c].dtype==np.object]
except NameError:
pass
"""
Explanation: Define columns
End of explanation
"""
neg_samples_count = len(df['target'][df['target']==0])
pos_samples_count = len(df['target'][df['target']==1])
print '%s negative samples; %.2f%% of total' % (neg_samples_count, neg_samples_count/len(df)*100)
print '%s positive samples; %.2f%% of total' % (pos_samples_count, pos_samples_count/len(df)*100)
"""
Explanation: See if the classes are skewed
End of explanation
"""
def filter_str(str_cell):
str_cell = re.sub(r'[\W_]+', ' ', str(str_cell))
str_cell = str_cell.strip().lower()
if str_cell in ('1', '-1', '[]', 'nan', ''):
return None
else:
return str_cell
df[str_cols] = df[str_cols].astype(np.str).applymap(filter_str)
df[str_cols]
"""
Explanation: Cast string columns as string and make 'null' data uniform (instead of nan, -1, [], etc.)
End of explanation
"""
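The cleaning rule applied above can be checked in isolation: runs of non-alphanumeric characters (including underscores) collapse to single spaces, the result is lower-cased and stripped, and placeholder values map to None. A self-contained copy of the function with a couple of example inputs:

```python
import re

def filter_str(str_cell):
    # Collapse runs of non-alphanumeric characters (and '_') into single spaces
    str_cell = re.sub(r'[\W_]+', ' ', str(str_cell))
    str_cell = str_cell.strip().lower()
    # Treat common placeholder values as missing
    if str_cell in ('1', '-1', '[]', 'nan', ''):
        return None
    return str_cell

print(filter_str('  New-York_City!! '))  # 'new york city'
print(filter_str('[]'))                  # None
```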
str_desc = df[str_cols].describe()
# Sort columns by their number of unique values (describe() on object columns has no 'std' row)
str_desc = pd.DataFrame(str_desc, columns=sorted(str_desc.columns, key=lambda c: str_desc.loc['unique', c]))
str_desc
"""
Explanation: Vectorize String and Datetime colums
String columns
See how many different values the string columns have
End of explanation
"""
df.drop('VAR_0044', axis=1, inplace=True)
str_cols.remove('VAR_0044')
"""
Explanation: Column VAR_0044 does not contain a single non-null value, so drop it.
End of explanation
"""
analyze_str_columns(['VAR_0202', 'VAR_0216', 'VAR_0222', 'VAR_0466'], df)
"""
Explanation: Columns VAR_0202, VAR_0216, VAR_0222 and VAR_0466 have only one distinct value. Check if there's some correlation between the values and the target.
Replace their string values with 1 if there was something in the cell, or 0 if there wasn't.
End of explanation
"""
cols = ['VAR_0202', 'VAR_0216', 'VAR_0222', 'VAR_0466']
df.drop(cols, axis=1, inplace=True)
for c in cols:
str_cols.remove(c)
del cols
"""
Explanation: Within each of these columns, the target appears to follow the same distribution regardless of the column's value, so these columns carry no signal and are useless.
End of explanation
"""
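The "same distribution" argument can be made concrete: a column is uninformative when the positive-target rate is (nearly) identical within every value of the column. A minimal pure-Python sketch (the helper name `target_rate_by_value` is ours):

```python
def target_rate_by_value(values, targets):
    """Fraction of positive targets within each distinct column value."""
    rates = {}
    for v in set(values):
        matching = [t for x, t in zip(values, targets) if x == v]
        rates[v] = sum(matching) / len(matching)
    return rates

# Both values carry the same 50% positive rate -> the column adds no information
print(target_rate_by_value(['a', 'a', 'b', 'b'], [0, 1, 0, 1]))  # {'a': 0.5, 'b': 0.5}
```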
encoder = LabelEncoder()
for col in str_cols:
df[col] = encoder.fit_transform(df[col])
"""
Explanation: Encode the labels of the rest of the columns
End of explanation
"""
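LabelEncoder simply maps each distinct value to the index of that value in the sorted list of unique values. An equivalent pure-Python sketch (the function `label_encode` is ours, shown only to illustrate the mapping):

```python
def label_encode(values):
    """Map each value to the index of its sorted unique value (LabelEncoder-style)."""
    classes = sorted(set(values))
    mapping = {v: i for i, v in enumerate(classes)}
    return [mapping[v] for v in values], classes

encoded, classes = label_encode(['red', 'blue', 'red', 'green'])
print(classes)  # ['blue', 'green', 'red']
print(encoded)  # [2, 0, 2, 1]
```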
df.to_csv(PTRAIN_PATH)
with open('deleted_str_cols.pickle', 'wb') as fp:
pickle.dump(['VAR_0044', 'VAR_0202', 'VAR_0216', 'VAR_0222', 'VAR_0466'], fp)
"""
Explanation: Save preprocessed data to another csv file
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/miroc/cmip6/models/sandbox-2/aerosol.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-2', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MIROC
Source ID: SANDBOX-2
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 70 (38 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
#     "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.3. External Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol external mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
kubeflow/pipelines | components/gcp/dataflow/launch_flex_template/sample.ipynb | apache-2.0 | %%capture --no-stderr
!pip3 install kfp --upgrade
"""
Explanation: Name
Data preparation by using a Flex template to submit a job to Cloud Dataflow
Labels
GCP, Cloud Dataflow, Kubeflow, Pipeline
Summary
A Kubeflow Pipeline component to prepare data by using a Flex template to submit a job to Cloud Dataflow.
Details
Intended use
Use this component when you have a pre-built Cloud Dataflow Flex template and want to launch it as a step in a Kubeflow Pipeline.
Runtime arguments
Argument | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :----------| :----------| :---------- | :----------|
project_id | The ID of the Google Cloud Platform (GCP) project to which the job belongs. | No | GCPProjectID | | |
location | The regional endpoint to which the job request is directed.| No | GCPRegion | | |
launch_parameters | The parameters that are required to launch the flex template. The schema is defined in LaunchFlexTemplateParameters. | No | Dict | | None |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure.| Yes | GCSPath | | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |
Input data schema
The input gcs_path must contain a valid Cloud Dataflow template. The template can be created by following the instructions in Creating Templates. You can also use Google-provided templates.
Output
Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.
Detailed description
This job uses the Google-provided BigQuery to Cloud Storage Parquet template
to run a Dataflow pipeline that reads the Shakespeare word index
sample public dataset and writes the data in Parquet format to the user provided GCS bucket.
Caution & requirements
To use the component, the following requirements must be met:
- Cloud Dataflow API is enabled.
- The component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details.
- The output cloud storage bucket must exist before running the pipeline.
- The Kubeflow user service account is a member of:
- roles/dataflow.developer role of the project.
- roles/storage.objectCreator role of the Cloud Storage Object for staging_dir. and output data folder.
- The dataflow controller service account is a member of:
- roles/bigquery.readSessionUser role to create read sessions in the project.
- roles/bigquery.jobUser role to run jobs including queries.
- roles/bigquery.dataViewer role to read data and metadata from the table or view
<br>
Follow these steps to use the component in a pipeline:
1. Install the Kubeflow Pipeline SDK:
End of explanation
"""
import kfp.components as comp
dataflow_template_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataflow/launch_flex_template/component.yaml')
help(dataflow_template_op)
"""
Explanation: 2. Load the component using KFP SDK
End of explanation
"""
PROJECT_ID = '[Your PROJECT_ID]'
BIGQUERY_TABLE_SPEC = '[Your PROJECT_ID:DATASET_ID.TABLE_ID]'
GCS_OUTPUT_FOLDER = 'gs://[Your output GCS folder]'
GCS_STAGING_FOLDER = 'gs://[Your staging GCS folder]'
LOCATION = 'us'
# Optional Parameters
EXPERIMENT_NAME = 'Dataflow - Launch Flex Template'
flex_temp_launch_parameters = {
"parameters": {
"tableRef": BIGQUERY_TABLE_SPEC,
"bucket": GCS_OUTPUT_FOLDER
},
"containerSpecGcsPath": "gs://dataflow-templates/2021-03-29-00_RC00/flex/BigQuery_to_Parquet",
}
"""
Explanation: 3. Configure job parameters
End of explanation
"""
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataflow launch flex template pipeline',
description='Dataflow launch flex template pipeline'
)
def pipeline(
project_id = PROJECT_ID,
location = LOCATION,
launch_parameters = json.dumps(flex_temp_launch_parameters),
staging_dir = GCS_STAGING_FOLDER,
wait_interval = 30):
dataflow_template_op(
project_id = project_id,
location = location,
launch_parameters = launch_parameters,
staging_dir = staging_dir,
wait_interval = wait_interval)
"""
Explanation: 4. Example pipeline that uses the component
End of explanation
"""
import kfp
pipeline_func = pipeline
run_name = pipeline_func.__name__ + ' run'
kfp.Client().create_run_from_pipeline_func(
pipeline_func,
arguments = {},
run_name = run_name,
experiment_name=EXPERIMENT_NAME,
namespace='default'
)
"""
Explanation: 5. Create pipeline run
End of explanation
"""
!gsutil cat $GCS_OUTPUT_FOLDER*
"""
Explanation: 6. Inspect the output
End of explanation
"""
|
rkburnside/python_development | notes and useful info/useful info.ipynb | gpl-2.0 | # strings
s = 'Hello World'
print(s[2])
s[:3]
s[:-1] # Grab everything but the last letter
s[::1] # Grab everything, but go in steps size of 1
s[::-1] # string backward
s[1:] # Grab everything past the first term all the way to the length of s which is len(s)
s[::2] # Grab everything, but go in step sizes of 2
s + ' concatenate me!' # Concatenate strings!
s*10 # multiply string
s.split('W') # Split by a specific element (doesn't include the element that was split on)
'Insert another string with curly brackets: {}'.format('The inserted string')
print("I'm going to inject %s text here, and %s text here." %('some','more'))
x=1
y=1
print("I'm going to inject %s text here, and %s text here."%(x,y))
# The `%s` operator converts whatever it sees into a string, including integers and floats. The `%d` operator converts numbers to integers first, without rounding. Note the difference below:
print('First: %s, Second: %5.2f, Third: %r' %('hi!',3.1415,'bye!'))
name = 'Fred'
print(f"He said his name is {name}.") # https://docs.python.org/3/reference/lexical_analysis.html#f-strings
"""
Explanation: general help
help(lst.count)
?lst.count
Determining variable type with type()
You can check what type of object is assigned to a variable using Python's built-in type() function. Common data types include:
* int (for integer)
* float
* str (for string)
* list
* tuple
* dict (for dictionary)
* set
* bool (for Boolean True/False)
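
For example, calling type() on a few illustrative values of each kind:

```python
print(type(10))          # <class 'int'>
print(type(3.14))        # <class 'float'>
print(type('hello'))     # <class 'str'>
print(type([1, 2, 3]))   # <class 'list'>
print(type((1, 2)))      # <class 'tuple'>
print(type({'a': 1}))    # <class 'dict'>
print(type(True))        # <class 'bool'>
```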
strings
End of explanation
"""
# lists
my_list = [1,2,3]
my_list = ['A string',23,100.232,'o']
len(my_list)
my_list[1:]
my_list + ['new item'] # Note: This doesn't actually change the original list!
my_list = my_list + ['add new item permanently']
my_list * 2 # double the list
list1 = ['one','two','three']
list1.append('append me!')   # add an item to the end of the list
list1.pop(0)                 # pop and remove the item at index 0
new_list = ['a','e','x','b','c']
new_list.reverse()           # reverse the list in place
new_list.sort()              # sort the list in place
matrix = [[1,2,3],[4,5,6],[7,8,9]]
matrix[0][0]                 # how to grab nested list items
"""
Explanation: lists
End of explanation
"""
# dictionaries
my_dict = {'key1':'value1','key2':'value2'}
my_dict['key2'] # Call values by their key
my_dict = {'key1':123,'key2':[12,23,33],'key3':['item0','item1','item2']}
my_dict['key3'][0]
my_dict['key3'][0].upper()
d = {'key1':{'nestkey':{'subnestkey':'value'}}}
d['key1']['nestkey']['subnestkey'] # calling nested values with keys
d.values()
d.items()
"""
Explanation: dictionaries
End of explanation
"""
# tuples
t = (1,2,3)
len(t)
t = ('one',2)
t[0]
t.index('one') # Use .index to enter a value and return the index
t.count('one') # Use .count to count the number of times a value appears
"""
Explanation: tuples - Immutability
You may be wondering, "Why bother using tuples when they have fewer available methods?" To be honest, tuples are not used as often as lists in programming, but are used when immutability is necessary. If in your program you are passing around an object and need to make sure it does not get changed, then a tuple becomes your solution. It provides a convenient source of data integrity.
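
As a quick sketch of that immutability (the values here are just examples), attempting item assignment on a tuple raises a TypeError:

```python
t = ('one', 2)

try:
    t[0] = 'changed'   # tuples do not support item assignment
except TypeError as err:
    print('Tuples are immutable:', err)
```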
You should now be able to create and use tuples in your programming as well as have an understanding of their immutability.
Up next Sets and Booleans!!
End of explanation
"""
x = set()
x.add(1)
list1 = [1,1,2,2,3,4,5,6,1,1]
set(list1)
"""
Explanation: sets - unordered collection of unique elements
End of explanation
"""
## IPython Writing a File
#### This function is specific to jupyter notebooks! Alternatively, quickly create a simple .txt file with sublime text editor.
%%writefile test.txt
Hello, this is a quick test file.
# Open the file in append+read mode ('a+' creates it if it doesn't exist)
my_file = open('test.txt','a+')
my_file.write('This is a new line')  # in 'a+' mode, writes always go to the end
my_file.seek(0)                      # move the cursor back to the start before reading
my_file.read()                       # read the whole file as one string
my_file.seek(0)
my_file.readlines()                  # read the file as a list of lines
my_file.close()                      # always do this when you're done with a file
"""
Explanation: Booleans - "True", "False", "None"
files
End of explanation
"""
# comparison operators
1 < 2 and 2 < 3   # combining comparisons with a logical operator
1 < 2 < 3         # chained comparison -- equivalent to the line above
"""
Explanation: <h2> Table of Comparison Operators </h2>
<p> In the table below, a=3 and b=4.</p>
<table class="table table-bordered">
<tr>
<th style="width:10%">Operator</th><th style="width:45%">Description</th><th>Example</th>
</tr>
<tr>
<td>==</td>
<td>If the values of two operands are equal, then the condition becomes true.</td>
<td> (a == b) is not true.</td>
</tr>
<tr>
<td>!=</td>
<td>If values of two operands are not equal, then condition becomes true.</td>
<td>(a != b) is true</td>
</tr>
<tr>
<td>></td>
<td>If the value of left operand is greater than the value of right operand, then condition becomes true.</td>
<td> (a > b) is not true.</td>
</tr>
<tr>
<td><</td>
<td>If the value of left operand is less than the value of right operand, then condition becomes true.</td>
<td> (a < b) is true.</td>
</tr>
<tr>
<td>>=</td>
<td>If the value of left operand is greater than or equal to the value of right operand, then condition becomes true.</td>
<td> (a >= b) is not true. </td>
</tr>
<tr>
<td><=</td>
<td>If the value of left operand is less than or equal to the value of right operand, then condition becomes true.</td>
<td> (a <= b) is true. </td>
</tr>
</table>
End of explanation
"""
# if elif else
loc = 'Bank'
if loc == 'Auto Shop':
print('Welcome to the Auto Shop!')
elif loc == 'Bank':
print('Welcome to the bank!')
else:
print('Where are you?')
# for loops
list1=[1,2,3,4,5]
for num in list1:
print(num)
for letter in 'This is a string.':
print(letter)
tup = (1,2,3,4,5)
for t in tup:
print(t)
d = {'k1':1,'k2':2,'k3':3}
for item in d:
print(item)
x = 0
while x < 10:
print('x is currently: ',x)
print(' x is still less than 10, adding 1 to x')
x+=1
else:
print('All Done!')
"""
Explanation: python statements
End of explanation
"""
# enumerate
# Notice the tuple unpacking!
for i,letter in enumerate('abcde'):
print("At index {} the letter is {}".format(i,letter))
list(enumerate('abcde'))
# input
input('Enter Something into this box: ')
"""
Explanation: break, continue, pass
We can use <code>break</code>, <code>continue</code>, and <code>pass</code> statements in our loops to add additional functionality for various cases. The three statements are defined by:
break: Breaks out of the current closest enclosing loop.
continue: Goes to the top of the closest enclosing loop.
pass: Does nothing at all.
Thinking about <code>break</code> and <code>continue</code> statements, the general format of the <code>while</code> loop looks like this:
while test:
code statement
if test:
break
if test:
continue
else:
<code>break</code> and <code>continue</code> statements can appear anywhere inside the loop’s body, but we will usually put them further nested in conjunction with an <code>if</code> statement to perform an action based on some condition.
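
A concrete sketch of all three statements in one loop (the trigger values 3 and 7 are arbitrary examples):

```python
collected = []
for num in range(10):
    if num == 3:
        continue        # skip 3 and go back to the top of the loop
    if num == 7:
        break           # leave the loop entirely at 7
    if num == 5:
        pass            # placeholder that does nothing at all
    collected.append(num)

print(collected)        # [0, 1, 2, 4, 5, 6]
```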
Useful Operators
range(0, 11)
Note that this is a generator function, so to actually get a list out of it, we need to cast it to a list with list(). What is a generator? It's a special type of function that generates values as they are needed instead of saving them all to memory. We haven't talked about functions or generators yet, so just keep this in your notes for now; we will discuss this in much more detail later on in your training!
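
For example, casting a range to a list materializes the generated values (these particular bounds are just an illustration):

```python
r = range(0, 11, 2)    # start, stop, step -- nothing is stored in memory yet
print(list(r))         # [0, 2, 4, 6, 8, 10]
```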
End of explanation
"""
# in operator
'x' in ['x','y','z']
"""
Explanation: zip
Notice the format enumerate actually returns, let's take a look by transforming it to a list()
It was a list of tuples, meaning we could use tuple unpacking during our for loop. This data structure is actually very common in Python , especially when working with outside libraries. You can use the zip() function to quickly create a list of tuples by "zipping" up together two lists.
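
A short sketch of zip() with two example lists:

```python
mylist1 = [1, 2, 3]
mylist2 = ['a', 'b', 'c']

# zip() returns an iterator of tuples; cast to list() to see the pairs
pairs = list(zip(mylist1, mylist2))
print(pairs)           # [(1, 'a'), (2, 'b'), (3, 'c')]

# tuple unpacking works here just like it did with enumerate
for num, letter in pairs:
    print('At number {} the letter is {}'.format(num, letter))
```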
End of explanation
"""
lst = [x for x in 'word']
lst = [x**2 for x in range(0,11)]
lst = [x for x in range(11) if x % 2 == 0]
lst = [ x**2 for x in [x**2 for x in range(11)]]
"""
Explanation: list comprehensions
End of explanation
"""
def name_of_function(arg1,arg2):
'''
This is where the function's Document String (docstring) goes
'''
# Do stuff here
# Return desired result
"""
Explanation: functions
End of explanation
"""
def square(num):
return num**2
my_nums = [1,2,3,4,5]
map(square,my_nums)
# To get the results, either iterate through map() or just cast to a list
list(map(square,my_nums))
"""
Explanation: map function: map(square,my_nums)
The map function allows you to "map" a function to an iterable object. That is to say you can quickly call the same function to every item in an iterable, such as a list. For example:
End of explanation
"""
def check_even(num):
return num % 2 == 0
nums = [0,1,2,3,4,5,6,7,8,9,10]
filter(check_even,nums) # returns a filter object (an iterator); cast to list() below to see the results
list(filter(check_even,nums))
"""
Explanation: filter function
The filter function returns an iterator yielding those items of iterable for which function(item)
is true. Meaning you need to filter by a function that returns either True or False. Then passing that into filter (along with your iterable) and you will get back only the results that would return True when passed to the function.
End of explanation
"""
nums = [0,1,2,3,4,5,6,7,8,9,10]
lambda num: num ** 2
list(filter(lambda n: n % 2 == 0,nums))
"""
Explanation: lambda expression
One of Pythons most useful (and for beginners, confusing) tools is the lambda expression. lambda expressions allow us to create "anonymous" functions. This basically means we can quickly make ad-hoc functions without needing to properly define a function using def.
Function objects returned by running lambda expressions work exactly the same as those created and assigned by defs. There is key difference that makes lambda useful in specialized roles:
lambda's body is a single expression, not a block of statements.
The lambda's body is similar to what we would put in a def body's return statement. We simply type the result as an expression instead of explicitly returning it. Because it is limited to an expression, a lambda is less general than a def. We can only squeeze so much logic into a lambda body; lambda is designed for coding simple functions, while def handles the larger tasks.
End of explanation
"""
def myfunc(**kwargs):
if 'fruit' in kwargs:
print(f"My favorite fruit is {kwargs['fruit']}") # review String Formatting and f-strings if this syntax is unfamiliar
else:
print("I don't like fruit")
myfunc(fruit='pineapple')
import time
time.strftime('%Y%m%d%H%M%S')
"""
Explanation: Nested Statements and Scope
Now that we have gone over writing our own functions, it's important to understand how Python deals with the variable names you assign. When you create a variable name in Python, the name is stored in a namespace. Variable names also have a scope, which determines the visibility of that variable name to other parts of your code.
But how does Python know which variable you are referring to in your code? This is where the idea of scope comes in. Python has a set of rules it follows to decide which name a reference resolves to. Let's break down the rules:
This idea of scope is very important to understand in order to properly assign and call variable names.
In simple terms, the idea of scope can be described by 3 general rules:
Name assignments will create or change local names by default.
Name references search (at most) four scopes, these are:
local
enclosing functions
global
built-in
Names declared in global and nonlocal statements map assigned names to enclosing module and function scopes.
The statement in #2 above can be defined by the LEGB rule.
LEGB Rule:
L: Local — Names assigned in any way within a function (def or lambda), and not declared global in that function.
E: Enclosing function locals — Names in the local scope of any and all enclosing functions (def or lambda), from inner to outer.
G: Global (module) — Names assigned at the top-level of a module file, or declared global in a def within the file.
B: Built-in (Python) — Names preassigned in the built-in names module : open, range, SyntaxError,...
global argument can be used to bring global variables into a function
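
A small sketch of the LEGB lookup order and the global statement (the variable names here are purely illustrative):

```python
x = 'global x'              # G: assigned at module top level

def outer():
    x = 'enclosing x'       # E: local to the enclosing function
    def inner():
        x = 'local x'       # L: the local assignment wins first
        return x
    return inner()

def change_global():
    global x                # map the name x to the module-level scope
    x = 'changed globally'

print(outer())              # local x
change_global()
print(x)                    # changed globally
```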
*args and **kwargs
Work with Python long enough, and eventually you will encounter *args and **kwargs. These strange terms show up as parameters in function definitions. What do they do? Let's review a simple function:
Obviously this is not a very efficient solution, and that's where *args comes in.
*args
When a function parameter starts with an asterisk, it allows for an arbitrary number of arguments, and the function takes them in as a tuple of values. Rewriting the above function:
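
A minimal sketch of such a function (the specific numbers are arbitrary examples):

```python
def myfunc(*args):
    # args arrives inside the function as a tuple of all positional arguments
    print(type(args), args)
    return sum(args)

print(myfunc(40, 60))            # 100
print(myfunc(40, 60, 100, 1))    # 201
```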
**kwargs
Similarly, Python offers a way to handle arbitrary numbers of keyworded arguments. Instead of creating a tuple of values, **kwargs builds a dictionary of key/value pairs. For example:
End of explanation
"""
class Dog:
# Class object attribute
# same for any instance of a class
species = 'mammal'
def __init__(self,breed='mut',name='no name'):
# Attributes
# we take in arguments
# assign it using self.attribute_name
self.breed = breed
self.name = name
# methods / operations / actions
def bark(self, number):
print('Woof my name is {} and the number is {}'.format(self.name,number))
sam = Dog(breed='Lab')
frank = Dog(breed='Huskie')
sam.name = 'crud'
print(sam.name)
sam.bark(5)
"""
Explanation: OOP
methods, attributes, class, etc.
End of explanation
"""
# get help by using ?
list?
# get source code by usings ??
list??
# use * as a wildcard
# import *<TAB>
# or do this: str.*find*?
%lsmagic # list all of the magic functions in ipython
%xmode Verbose
"""
Explanation: help, source code, and wildcards
End of explanation
"""
|
scikit-optimize/scikit-optimize.github.io | 0.8/notebooks/auto_examples/plots/visualizing-results.ipynb | bsd-3-clause | print(__doc__)
import numpy as np
np.random.seed(123)
import matplotlib.pyplot as plt
"""
Explanation: Visualizing optimization results
Tim Head, August 2016.
Reformatted by Holger Nahrstaedt 2020
.. currentmodule:: skopt
Bayesian optimization or sequential model-based optimization uses a surrogate
model to model the expensive to evaluate objective function func. It is
this model that is used to determine at which points to evaluate the expensive
objective next.
To help understand why the optimization process is proceeding the way it is,
it is useful to plot the location and order of the points at which the
objective is evaluated. If everything is working as expected, early samples
will be spread over the whole parameter space and later samples should
cluster around the minimum.
The :class:plots.plot_evaluations function helps with visualizing the location and
order in which samples are evaluated for objectives with an arbitrary
number of dimensions.
The :class:plots.plot_objective function plots the partial dependence of the objective,
as represented by the surrogate model, for each dimension and as pairs of the
input dimensions.
All of the minimizers implemented in skopt return an OptimizeResult
instance that can be inspected. Both :class:plots.plot_evaluations and :class:plots.plot_objective
are helpers that do just that
End of explanation
"""
from skopt.benchmarks import branin as branin
from skopt.benchmarks import hart6 as hart6_
# redefined `hart6` to allow adding arbitrary "noise" dimensions
def hart6(x):
return hart6_(x[:6])
"""
Explanation: Toy models
We will use two different toy models to demonstrate how :class:plots.plot_evaluations
works.
The first model is the :class:benchmarks.branin function which has two dimensions and three
minima.
The second model is the hart6 function which has six dimension which makes
it hard to visualize. This will show off the utility of
:class:plots.plot_evaluations.
End of explanation
"""
from matplotlib.colors import LogNorm
def plot_branin():
fig, ax = plt.subplots()
x1_values = np.linspace(-5, 10, 100)
x2_values = np.linspace(0, 15, 100)
x_ax, y_ax = np.meshgrid(x1_values, x2_values)
vals = np.c_[x_ax.ravel(), y_ax.ravel()]
fx = np.reshape([branin(val) for val in vals], (100, 100))
cm = ax.pcolormesh(x_ax, y_ax, fx,
norm=LogNorm(vmin=fx.min(),
vmax=fx.max()),
cmap='viridis_r')
minima = np.array([[-np.pi, 12.275], [+np.pi, 2.275], [9.42478, 2.475]])
ax.plot(minima[:, 0], minima[:, 1], "r.", markersize=14,
lw=0, label="Minima")
cb = fig.colorbar(cm)
cb.set_label("f(x)")
ax.legend(loc="best", numpoints=1)
ax.set_xlabel("$X_0$")
ax.set_xlim([-5, 10])
ax.set_ylabel("$X_1$")
ax.set_ylim([0, 15])
plot_branin()
"""
Explanation: Starting with branin
To start let's take advantage of the fact that :class:benchmarks.branin is a simple
function which can be visualised in two dimensions.
End of explanation
"""
from functools import partial
from skopt.plots import plot_evaluations
from skopt import gp_minimize, forest_minimize, dummy_minimize
bounds = [(-5.0, 10.0), (0.0, 15.0)]
n_calls = 160
forest_res = forest_minimize(branin, bounds, n_calls=n_calls,
base_estimator="ET", random_state=4)
_ = plot_evaluations(forest_res, bins=10)
"""
Explanation: Evaluating the objective function
Next we use an extra trees based minimizer to find one of the minima of the
:class:benchmarks.branin function. Then we visualize at which points the objective is being
evaluated using :class:plots.plot_evaluations.
End of explanation
"""
from skopt.plots import plot_objective
_ = plot_objective(forest_res)
"""
Explanation: :class:plots.plot_evaluations creates a grid of size n_dims by n_dims.
The diagonal shows histograms for each of the dimensions. In the lower
triangle (just one plot in this case) a two dimensional scatter plot of all
points is shown. The order in which points were evaluated is encoded in the
color of each point. Darker/purple colors correspond to earlier samples and
lighter/yellow colors correspond to later samples. A red point shows the
location of the minimum found by the optimization process.
You should be able to see that points start clustering around the location
of the true miminum. The histograms show that the objective is evaluated
more often at locations near to one of the three minima.
Using :class:plots.plot_objective we can visualise the one dimensional partial
dependence of the surrogate model for each dimension. The contour plot in
the bottom left corner shows the two dimensional partial dependence. In this
case this is the same as simply plotting the objective as it only has two
dimensions.
Partial dependence plots
Partial dependence plots were proposed by
[Friedman (2001)]
as a method for interpreting the importance of input features used in
gradient boosting machines. Given a function of $k$ variables
$y=f\left(x_1, x_2, ..., x_k\right)$, the
partial dependence of $f$ on the $i$-th variable $x_i$ is calculated as:
$\phi\left( x_i \right) = \frac{1}{N} \sum^N_{j=0} f\left(x_{1,j}, x_{2,j}, ..., x_i, ..., x_{k,j}\right)$
with the sum running over a set of $N$ points drawn at random from the
search space.
The idea is to visualize how the value of $x_i$ influences the function
$f$ after averaging out the influence of all other variables.
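
The averaging in this formula can be sketched in a few lines of NumPy. This is only an illustration of the definition, not how skopt computes partial dependence internally (skopt evaluates the surrogate model, and its sampling details differ):

```python
import numpy as np

def partial_dependence_sketch(f, bounds, i, xi_values, n_samples=200, seed=0):
    """Monte Carlo estimate of the partial dependence of f on dimension i."""
    rng = np.random.RandomState(seed)
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    # N points drawn at random from the search space
    samples = rng.uniform(lows, highs, size=(n_samples, len(bounds)))
    phi = []
    for xi in xi_values:
        pts = samples.copy()
        pts[:, i] = xi                            # pin dimension i ...
        phi.append(np.mean([f(p) for p in pts]))  # ... and average out the rest
    return np.array(phi)

# Example with a simple quadratic objective on two dimensions
phi = partial_dependence_sketch(lambda p: p[0]**2 + p[1]**2,
                                bounds=[(-1.0, 1.0), (-1.0, 1.0)],
                                i=0, xi_values=[0.0, 1.0])
```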
End of explanation
"""
dummy_res = dummy_minimize(branin, bounds, n_calls=n_calls, random_state=4)
_ = plot_evaluations(dummy_res, bins=10)
"""
Explanation: The two dimensional partial dependence plot can look like the true
objective but it does not have to. As points at which the objective function
is being evaluated are concentrated around the suspected minimum the
surrogate model sometimes is not a good representation of the objective far
away from the minima.
Random sampling
Compare this to a minimizer which picks points at random. There is no
structure visible in the order in which it evaluates the objective. Because
there is no model involved in the process of picking sample points at
random, we can not plot the partial dependence of the model.
End of explanation
"""
bounds = [(0., 1.),] * 6
forest_res = forest_minimize(hart6, bounds, n_calls=n_calls,
base_estimator="ET", random_state=4)
_ = plot_evaluations(forest_res)
_ = plot_objective(forest_res, n_samples=40)
"""
Explanation: Working in six dimensions
Visualising what happens in two dimensions is easy, where
:class:plots.plot_evaluations and :class:plots.plot_objective start to be useful is when the
number of dimensions grows. They take care of many of the more mundane
things needed to make good plots of all combinations of the dimensions.
The next example uses :class:benchmarks.hart6 which has six dimensions and shows both
:class:plots.plot_evaluations and :class:plots.plot_objective.
End of explanation
"""
bounds = [(0., 1.),] * 8
n_calls = 200
forest_res = forest_minimize(hart6, bounds, n_calls=n_calls,
base_estimator="ET", random_state=4)
_ = plot_evaluations(forest_res)
_ = plot_objective(forest_res, n_samples=40)
"""
Explanation: Going from 6 to 6+2 dimensions
To make things more interesting, let's add two dimensions to the problem.
As :class:benchmarks.hart6 only depends on six dimensions we know that for this problem
the new dimensions will be "flat" or uninformative. This is clearly visible
in both the placement of samples and the partial dependence plots.
End of explanation
"""
|
machinelearningdeveloper/lc101-kc | December 15, 2016/Covered in class.ipynb | unlicense | import this as t
print(t)
"""
Explanation: Python design patterns
Covering only a small portion of what exists.
import this
creating main functions
Take a look at this guide: http://docs.python-guide.org/en/latest/writing/style/
Take a look at this "Zen of Python" by example: http://artifex.org/~hblanks/talks/2011/pep20_by_example.html
End of explanation
"""
fruits = ['apple', 'pear', 'cranberry']
# Good
for fruit in fruits:
print(fruit)
# Bad
for i in fruits:
print(i)
# okay
for index in range(10):
print(index)
# Questionable
for i in range(10):
for j in range(11):
for k in range(11):
pass
"""
Explanation: Explicit is better than implicit
End of explanation
"""
class Person:
def __init__(self, name):
self.name = name
self.is_hungry = True
self.is_tired = False
def __str__(self):
return '{name} {id}'.format(
name=self.name,
id=id(self)
)
def fed(self, big_meal=False):
self.is_hungry = False
if big_meal:
self.is_tired = True
def sleep(self):
self.is_hungry = True
self.is_tired = False
bill = Person('Bill')
sammy = Person('Sammy')
jim = Person('Jim')
sue = Person('Sue')
persons = [bill, sammy, jim, sue]
jim.fed(True)
sue.fed(True)
# Bad check -- because it's complex and hides logic
if persons[0].is_hungry and persons[1].is_hungry and persons[2].is_tired and persons[3].is_tired:
for person in persons:
print('I have a person', person)
print()
print()
# Better check -- it doesn't hide meaning, but is verbose
def all_are_hungry(persons):
for person in persons:
if not person.is_hungry:
return False
return True
def all_are_tired(persons):
for person in persons:
if not person.is_tired:
return False
return True
hungry_people_group = persons[:2]
tired_people = persons[2:]
if all_are_hungry(hungry_people_group) and all_are_tired(tired_people):
for person in persons:
print('I have a person', person)
print()
print()
# Good/Best check -- doesn't hide meaning, and isn't verbose
has_hungry = all([person.is_hungry for person in persons[:2]])
has_tired = all([person.is_tired for person in persons[2:]])
if has_hungry and has_tired:
for person in persons:
print('I have a person', person)
print(range(10))
"""
Explanation: Simple is better than complex
End of explanation
"""
import echo
echo.main('I am testing my main function')
import exponent
_ = exponent.main(2, 5)
_ = exponent.main(2, 5, quiet=True)
_ = exponent.main(2, 5, verbose=True)
_ = exponent.main(2, 5, quiet=True, verbose=True)
"""
Explanation: Take a look at the main example in the folder
Below the main is imported and used. On the command line it is also use.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.17/_downloads/93d330ee6c0ab8305ae11bf7f764aeb8/plot_evoked_arrowmap.ipynb | bsd-3-clause | # Authors: Sheraz Khan <sheraz@khansheraz.com>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.datasets import sample
from mne.datasets.brainstorm import bst_raw
from mne import read_evokeds
from mne.viz import plot_arrowmap
print(__doc__)
path = sample.data_path()
fname = path + '/MEG/sample/sample_audvis-ave.fif'
# load evoked data
condition = 'Left Auditory'
evoked = read_evokeds(fname, condition=condition, baseline=(None, 0))
evoked_mag = evoked.copy().pick_types(meg='mag')
evoked_grad = evoked.copy().pick_types(meg='grad')
"""
Explanation: Plotting topographic arrowmaps of evoked data
Load evoked data and plot arrowmaps along with the topomap for selected time
points. An arrowmap is based upon the Hosaka-Cohen transformation and
represents an estimation of the current flow underneath the MEG sensors.
They are a poor man's MNE.
See [1]_ for details.
References
.. [1] D. Cohen, H. Hosaka
"Part II magnetic field produced by a current dipole",
Journal of electrocardiology, Volume 9, Number 4, pp. 409-417, 1976.
DOI: 10.1016/S0022-0736(76)80041-6
End of explanation
"""
max_time_idx = np.abs(evoked_mag.data).mean(axis=0).argmax()
plot_arrowmap(evoked_mag.data[:, max_time_idx], evoked_mag.info)
# Since planar gradiometers takes gradients along latitude and longitude,
# they need to be projected to the flatten manifold span by magnetometer
# or radial gradiometers before taking the gradients in the 2D Cartesian
# coordinate system for visualization on the 2D topoplot. You can use the
# ``info_from`` and ``info_to`` parameters to interpolate from
# gradiometer data to magnetometer data.
"""
Explanation: Plot magnetometer data as an arrowmap along with the topoplot at the time
of the maximum sensor space activity:
End of explanation
"""
plot_arrowmap(evoked_grad.data[:, max_time_idx], info_from=evoked_grad.info,
info_to=evoked_mag.info)
"""
Explanation: Plot gradiometer data as an arrowmap along with the topoplot at the time
of the maximum sensor space activity:
End of explanation
"""
path = bst_raw.data_path()
raw_fname = path + '/MEG/bst_raw/' \
'subj001_somatosensory_20111109_01_AUX-f.ds'
raw_ctf = mne.io.read_raw_ctf(raw_fname)
raw_ctf_info = mne.pick_info(
raw_ctf.info, mne.pick_types(raw_ctf.info, meg=True, ref_meg=False))
plot_arrowmap(evoked_grad.data[:, max_time_idx], info_from=evoked_grad.info,
info_to=raw_ctf_info, scale=2e-10)
"""
Explanation: Since the Vectorview 102 system performs sparse spatial sampling of the magnetic
field, data from the Vectorview system (info_from) can be projected onto the high-density
CTF 272 system (info_to) for visualization
Plot gradiometer data as an arrowmap along with the topoplot at the time
of the maximum sensor space activity:
End of explanation
"""
|
blowekamp/SimpleITK-Notebook-Answers | ConnectedThresholdAndOtherFilterPerformance.ipynb | apache-2.0 | img = sitk.Image(100,100, sitk.sitkUInt8)
ctFilter = sitk.ConnectedThresholdImageFilter()
ctFilter.SetSeed([0,0])
ctFilter.SetUpper(1)
ctFilter.SetLower(0)
ctFilter.AddCommand(sitk.sitkProgressEvent, lambda: print("\rProgress: {0:03.1f}%...".format(100*ctFilter.GetProgress())))
"""
Explanation: Demonstrate Reporting of Progress With ConnectedThresholdImageFilter
End of explanation
"""
ctFilter.Execute(img)
"""
Explanation: Demonstrate reporting of Progress
End of explanation
"""
img = sitk.Image(100,100,100,sitk.sitkFloat32)
img = sitk.AdditiveGaussianNoise(img)
myshow(img)
radius = [3,3,3]
%timeit -n 1 sitk.Median(img,radius=radius)
%timeit -n 1 sitk.FastApproximateRank(img, rank=0.5, radius=radius)
%timeit -n 1 sitk.Rank(img, rank=0.5, radius=radius)
%timeit -n 1 sitk.Mean(img,radius=radius)
"""
Explanation: Performance comparison of Median and Rank image filters
End of explanation
"""
img = sitk.ReadImage(fdata("cthead1.png"))
myshow(img)
# First create a simple binary image (0 and 1s) for the common cthead1.png test data set
bimg = img >100
myshow(bimg)
"""
Explanation: Compare Binary and Grayscale Dilate
End of explanation
"""
myshow(sitk.BinaryDilate(bimg, 10, sitk.sitkBall), title="BinaryDilate")
myshow(sitk.GrayscaleDilate(bimg, 10, sitk.sitkBall), title="GrayscaleDilate")
"""
Explanation: Visually show that for binary images the output will be the same.
End of explanation
"""
%timeit -n 100 sitk.BinaryDilate(bimg, 10, sitk.sitkBall)
%timeit -n 100 sitk.GrayscaleDilate(bimg, 10, sitk.sitkBall)
%timeit -n 100 sitk.BinaryDilate(bimg, 10, sitk.sitkBox)
%timeit -n 100 sitk.GrayscaleDilate(bimg, 10, sitk.sitkBox)
"""
Explanation: Briefly show that there is a performance difference. These images are only 2D and small, so the timings may not be very accurate, but the filters use different algorithms, and GrayscaleDilate switches algorithms based on the structuring element.
End of explanation
"""
|
wikistat/Ateliers-Big-Data | Cdiscount/Part2-2bis-AIF-PysparkWorkflowPipeline-Cdiscount.ipynb | mit | sc
# Import the generic packages and those
# of the ML and MLlib libraries
## Text cleaning
import nltk
import re
## Arrays
from numpy import array
## Timing
import time
## Row and Vector
from pyspark.sql import Row
from pyspark.ml.linalg import Vectors
## Hashing and vectorization
from pyspark.ml.feature import HashingTF
from pyspark.ml.feature import IDF
## Logistic regression
from pyspark.ml.classification import LogisticRegression
## Decision tree
from pyspark.ml.classification import DecisionTreeClassifier
## Random forest
from pyspark.ml.classification import RandomForestClassifier
## For creating DataFrames
from pyspark.sql import SQLContext
from pyspark.sql.types import *
from pyspark.ml import Pipeline
"""
Explanation: Ateliers: Technologies des données massives
<center>
<a href="http://www.insa-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo-insa.jpg" style="float:left; max-width: 120px; display: inline" alt="INSA"/></a>
<a href="http://wikistat.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/wikistat.jpg" width=400, style="max-width: 150px; display: inline" alt="Wikistat"/></a>
<a href="http://www.math.univ-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo_imt.jpg" width=400, style="float:right; display: inline" alt="IMT"/> </a>
</center>
Natural Language Processing (NLP): Categorizing Cdiscount Products
This is a simplified version of the competition organized by Cdiscount and published on the datascience.net site. The training data is available on request from Cdiscount, but the solutions for the competition's test sample are not, and will not be, made public. A test sample is therefore built for this tutorial. The goal is to predict a product's category from its description (text mining). Only the main category (1st level, 47 classes) is predicted, instead of the three levels required in the competition. The aim is rather to compare the performance of methods and technologies as a function of the size of the training set, and to illustrate the preprocessing of text data on a complex example.
The full dataset (15M products) allows a full-scale test of how the preparation (munging), vectorization (hashing, TF-IDF) and learning phases scale with data volume, depending on the technology used.
A synthesis of the results obtained is developed in Besse et al. 2016 (section 5).
Part 2-2: Categorizing Cdiscount Products with SparkML from <a href="http://spark.apache.org/"><img src="http://spark.apache.org/images/spark-logo-trademark.png" style="max-width: 100px; display: inline" alt="Spark"/></a> and use of a Pipeline.
The content of this notebook is essentially identical to the previous notebook: Part2-2-AIF-PysparkWorkflow-Cdiscount.ipynb
In the latter, the result of each step was detailed in order to help understand them.
In this notebook, we use the Pipeline function of the spark-ML library to build a model that directly includes all the steps, from text cleaning to training a logistic regression model.
End of explanation
"""
sqlContext = SQLContext(sc)
RowDF = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('data/cdiscount_train.csv')
RowDF.take(2)
"""
Explanation: Reading the data
End of explanation
"""
# Subsampling rates used to test the preparation program
# on a small dataset
taux_donnees = [0.80, 0.19, 0.01]
dataTrain, DataTest, data_drop = RowDF.randomSplit(taux_donnees)
n_train = dataTrain.count()
n_test= DataTest.count()
print("DataTrain : size = %d, DataTest : size = %d"%(n_train, n_test))
"""
Explanation: Extracting a subsample
End of explanation
"""
from pyspark import keyword_only
from pyspark.ml import Transformer
from pyspark.ml.param.shared import HasInputCol, HasOutputCol, Param
from pyspark.sql.functions import udf, col
from pyspark.sql.types import ArrayType, StringType
class MyNltkStemmer(Transformer, HasInputCol, HasOutputCol):
@keyword_only
def __init__(self, inputCol=None, outputCol=None):
super(MyNltkStemmer, self).__init__()
kwargs = self._input_kwargs
self.setParams(**kwargs)
@keyword_only
def setParams(self, inputCol=None, outputCol=None):
kwargs = self._input_kwargs
return self._set(**kwargs)
def _transform(self, dataset):
STEMMER = nltk.stem.SnowballStemmer('french')
def clean_text(tokens):
tokens_stem = [ STEMMER.stem(token) for token in tokens]
return tokens_stem
udfCleanText = udf(lambda lt : clean_text(lt), ArrayType(StringType()))
out_col = self.getOutputCol()
in_col = dataset[self.getInputCol()]
return dataset.withColumn(out_col, udfCleanText(in_col))
"""
Explanation: Building the pipeline
Creating a Transformer for the stemming step.
In the previous notebook, we defined a stemmer function based on the nltk library. For it to be usable in an ML Pipeline, we need to turn it into a Transformer object.
End of explanation
"""
import nltk
from pyspark.sql.types import ArrayType
from pyspark.ml.feature import RegexTokenizer, StopWordsRemover
from pyspark.ml.feature import StringIndexer
# list of words to remove (stop words)
STOPWORDS = set(nltk.corpus.stopwords.words('french'))
# Tokenizer that replaces a long text with a list of words
regexTokenizer = RegexTokenizer(inputCol="Description", outputCol="tokenizedDescr", pattern="[^a-z_]",
minTokenLength=3, gaps=True)
#V1
# StopWordsRemover, which removes stop words
#remover = StopWordsRemover(inputCol="tokenizedDescr", outputCol="cleanDescr", stopWords = list(STOPWORDS))
#V2
# StopWordsRemover, which removes stop words
remover = StopWordsRemover(inputCol="tokenizedDescr", outputCol="stopTokenizedDescr", stopWords = list(STOPWORDS))
# Stemmer
stemmer = MyNltkStemmer(inputCol="stopTokenizedDescr", outputCol="cleanDescr")
# Indexer
indexer = StringIndexer(inputCol="Categorie1", outputCol="categoryIndex")
# Hashing
hashing_tf = HashingTF(inputCol="cleanDescr", outputCol='tf', numFeatures=10000)
# Inverse Document Frequency
idf = IDF(inputCol=hashing_tf.getOutputCol(), outputCol="tfidf")
#Logistic Regression
lr = LogisticRegression(maxIter=100, regParam=0.01, fitIntercept=False, tol=0.0001,
family = "multinomial", elasticNetParam=0.0, featuresCol="tfidf", labelCol="categoryIndex") #0 for L2 penalty, 1 for L1 penalty
# Build the pipeline
pipeline = Pipeline(stages=[regexTokenizer, remover, stemmer, indexer, hashing_tf, idf, lr ])
"""
Explanation: Defining the pipeline stages
End of explanation
"""
time_start = time.time()
# Fit every pipeline stage on the training DataFrame.
model = pipeline.fit(dataTrain)
time_end = time.time()
time_lrm = (time_end - time_start)
print("LR takes %d s for a training sample of size n = %d" % (time_lrm, n_train))  # (104 s with taux=1)
"""
Explanation: Fitting the pipeline
The penalization (lasso) parameter is left at its default value, without optimization.
End of explanation
"""
predictionsDF = model.transform(DataTest)
labelsAndPredictions = predictionsDF.select("categoryIndex","prediction").collect()
nb_good_prediction = sum([r[0]==r[1] for r in labelsAndPredictions])
testErr = 1-nb_good_prediction/n_test
print('Test Error = %f, for a test sample of size n = %d' % (testErr, n_test))  # (0.08 with taux=1)
"""
Explanation: Estimating the error on the test sample
End of explanation
"""
|
karlstroetmann/Artificial-Intelligence | Python/5 Linear Regression/Simple-Linear-Regression-Function.ipynb | gpl-2.0 | import csv
"""
Explanation: Simple Linear Regression
We need to read our data from a <tt>csv</tt> file. The module csv offers a number of functions for reading and writing a <tt>csv</tt> file.
End of explanation
"""
with open('cars.csv') as handle:
reader = csv.DictReader(handle, delimiter=',')
        Data = []
for row in reader:
Data.append(row)
Data[:5]
import numpy as np
"""
Explanation: Let us read the data.
End of explanation
"""
def simple_linear_regression(X, Y):
xMean = np.mean(X)
yMean = np.mean(Y)
ϑ1 = np.sum((X - xMean) * (Y - yMean)) / np.sum((X - xMean) ** 2)
ϑ0 = yMean - ϑ1 * xMean
TSS = np.sum((Y - yMean) ** 2)
RSS = np.sum((ϑ1 * X + ϑ0 - Y) ** 2)
R2 = 1 - RSS/TSS
return R2
"""
Explanation: Since the fuel consumption is the inverse of the mileage (<em style="color:blue;">miles per gallon</em>), the vector Y of fuel consumptions is defined as 1/mpg below:
End of explanation
"""
def coefficient_of_determination(name, Data):
X = np.array([float(line[name]) for line in Data])
Y = np.array([1/float(line['mpg']) for line in Data])
R2 = simple_linear_regression(X, Y)
print(f'coefficient of determination of fuel consumption w.r.t. {name:12s}: {round(R2, 2)}')
DependentVars = ['cyl', 'displacement', 'hp', 'weight', 'acc', 'year']
for name in DependentVars:
coefficient_of_determination(name, Data)
"""
Explanation: It seems that about $75\%$ of the fuel consumption is explained by the engine displacement. We can get a better model of the fuel consumption if we use more variables for explaining the fuel consumption. For example, the weight of a car is also responsible for its fuel consumption.
End of explanation
"""
|
fantasycheng/udacity-deep-learning-project | tv-script-generation/dlnd_tv_script_generation_2.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
from collections import Counter
from string import punctuation
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab = set(text)
    vocab_to_int = {word: ii for ii, word in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
    tknz_dict = {'.': '||Period||',
                 ',': '||Comma||',
                 '"': '||QuotationMark||',
                 ';': '||Semicolon||',
                 '!': '||ExclamationMark||',
                 '?': '||QuestionMark||',
                 '(': '||LeftParentheses||',
                 ')': '||RightParentheses||',
                 '--': '||Dash||',
                 '\n': '||Return||'}
return tknz_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
input = tf.placeholder(tf.int32, [None, None],name = 'input' )
targets = tf.placeholder(tf.int32, [None, None])
learningrate = tf.placeholder(tf.float32)
return input, targets, learningrate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
    lstm_layers = 2
    #keep_prob = 0.75
    # Create a fresh BasicLSTMCell per layer -- reusing a single cell object
    # across layers would share its variables and can raise an error in newer TF.
    cell = tf.contrib.rnn.MultiRNNCell(
        [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(lstm_layers)])
initial_state = cell.zero_state(batch_size,tf.float32)
initial_state = tf.identity(initial_state,name= 'initial_state')
return cell, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
e = tf.Variable(tf.random_uniform([vocab_size, embed_dim],-1,1))
embed = tf.nn.embedding_lookup(e,input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell,inputs,dtype = tf.float32)
final_state = tf.identity(final_state, name="final_state")
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, embed_dim)
outputs,final_state = build_rnn(cell, embed)
    # Linear fully connected layer with one output per vocabulary word
    logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
batches_size = batch_size* seq_length
n_batches = len(int_text)//batches_size
batch_x = np.array(int_text[:n_batches* batches_size])
batch_y = np.array(int_text[1:n_batches* batches_size+1])
batch_y[-1] = batch_x[0]
batch_x_reshape = batch_x.reshape(batch_size,-1)
batch_y_reshape = batch_y.reshape(batch_size,-1)
batches = np.zeros([n_batches, 2, batch_size, seq_length])
for i in range(n_batches):
batches[i][0]= batch_x_reshape[ : ,i * seq_length: (i+1)* seq_length]
batches[i][1]= batch_y_reshape[ : ,i * seq_length: (i+1)* seq_length]
return batches
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
"""
# Number of Epochs
num_epochs = 50
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 10
# Embedding Dimension Size
embed_dim = 200
# Sequence Length
seq_length = 50
# Learning Rate
learning_rate = 0.1
# Show stats for every n number of batches
show_every_n_batches = 50
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
inputs = loaded_graph.get_tensor_by_name("input:0")
initial_state = loaded_graph.get_tensor_by_name("initial_state:0")
final_state = loaded_graph.get_tensor_by_name("final_state:0")
probs = loaded_graph.get_tensor_by_name("probs:0")
return inputs, initial_state, final_state, probs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    # Collect candidate words whose probability clears a small threshold
    candidates = [int_to_vocab[idx] for idx, prob in enumerate(probabilities) if prob >= 0.05]
    if not candidates:
        # Fall back to the most probable word so we never sample from an empty list
        return str(int_to_vocab[int(np.argmax(probabilities))])
    # Pick uniformly at random among the high-probability candidates
    return str(candidates[np.random.randint(0, len(candidates))])
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
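One common alternative to the threshold-then-uniform scheme implemented above is to sample the next word directly in proportion to its predicted probability (a sketch, assuming `probabilities` sums to 1):

```python
import numpy as np

def pick_word_weighted(probabilities, int_to_vocab):
    # Sample a word id weighted by the model's probability distribution
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]

# With all mass on id 0, the sample is deterministic:
vocab = {0: 'homer', 1: 'moe'}
print(pick_word_weighted(np.array([1.0, 0.0]), vocab))  # -> homer
```

Weighted sampling keeps some randomness (avoiding repetitive output) while still favoring likely words, at the cost of occasionally emitting low-probability tokens.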
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""